main.rs
fn main() {
//Rust manages memory with the stack and the heap; there is no GC and no direct memory management
//The stack is last in, first out: the value pushed most recently is popped first
//Stack data must take up a known and fixed size
//In Rust the heap is used when we don't know the size of the data at compile time
//or when the amount of memory to be allocated is dynamic
//Heap memory is less organized: the allocator puts data wherever the OS has space for it
//Therefore, the program has to follow pointers to get to data, which can slow things down.
//Function local variables get pushed onto the stack and then popped off when
//the function is done
//Each value has a variable that is its owner. Only one owner can exist at a time
//When the owner goes out of scope the value is dropped
//Examples to go over variable scope
let s = "hello";
{
let s = "hello2";
println!("s: {}", s);
}
println!("Previous s is out of scope but the one defined earlier isn't");
println!("s: {}", s);
//Onto the next example which goes over the rules of ownership
//It's going to be using the String type (a growable, heap-allocated string, akin to a string buffer)
let mut s = String::from("hello");
s.push_str(", world!"); // s must be mutable for this to work
println!("{}", s);
//Note: In C++, this pattern of deallocating resources at the end of an item’s lifetime is sometimes
//called Resource Acquisition Is Initialization (RAII). The drop function in Rust will be familiar
//to you if you’ve used RAII patterns.
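//A tiny sketch of the Drop trait that powers this (hypothetical type, kept commented out):
// struct Noisy;
// impl Drop for Noisy {
//     fn drop(&mut self) {
//         println!("Noisy is going out of scope, cleaning up!");
//     }
// }
// let _n = Noisy; // prints the message above when _n goes out of scope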
let x = 5;
let y = x; // y is just a copy of x since they are simple types and have a fixed size
let s1 = String::from("hello");
let s2 = s1; // s2 is a copy of the pointer to the data that s1 points to
//Using s1 after this point errors out because String does not have the Copy trait, which means
//we made a shallow copy instead of a deep copy. Rust does not allow this:
//s1 is invalidated after we assign its value to s2, so using s2 is fine
//but using s1 is not. This operation is called a move.
// println!("{}", s2);
let s1 = String::from("hello");
let s2 = s1.clone(); // This creates a deep copy of s1. We can now use s1 in other places without
// it being invalid
// println!("{}",s1);
//Info about which types make a deep copy when you do let x = something; let y = x;
// Rust has a special annotation called the Copy trait that we can place on types like integers that are
// stored on the stack (we’ll talk more about traits in Chapter 10). If a type has the Copy trait, an older
// variable is still usable after assignment. Rust won’t let us annotate a type with the Copy trait if the
// type, or any of its parts, has implemented the Drop trait. If the type needs something special to happen
// when the value goes out of scope and we add the Copy annotation to that type, we’ll get a compile time error.
// To learn about how to add the Copy annotation to your type, see Appendix C on Derivable Traits.
// So what types are Copy? You can check the documentation for the given type to be sure, but as a general rule,
// any group of simple scalar values can be Copy, and nothing that requires allocation or is some form of resource
// is Copy. Here are some of the types that are Copy:
// All the integer types, like u32.
// The boolean type, bool, with values true and false.
// All the floating point types, like f64.
// Tuples, but only if they contain types that are also Copy. (i32, i32) is Copy, but (i32, String) is not.
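//A minimal sketch of opting a custom type into Copy with derive (hypothetical type;
//see Appendix C on Derivable Traits). Kept commented out:
// #[derive(Clone, Copy)]
// struct Point {
//     x: i32,
//     y: i32,
// }
// let p1 = Point { x: 1, y: 2 };
// let p2 = p1; // p1 is still usable afterwards because Point is Copy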
let s = String::from("hello"); // s comes into scope.
//So in Rust if we pass a variable into a function it loses its ownership to the
//function. Then once the function is over that variable no longer exists
//because it is now out of scope.
takes_ownership(s); // s's value moves into the function...
//... and so is no longer valid here.
let x = 5; // x comes into scope.
//If a variable has the copy trait then only a copy is made to the function and
//we can still use the variable afterwards even though all the variables in the
//function are now out of scope.
makes_copy(x); // x would move into the function,
// but i32 is Copy, so it’s okay to still
// use x afterward.
//We can give ownership of a variable back from a function by having an expression at the end.
//We could pass in a variable and then take back its ownership by doing this. However, I think this
//is kind of a pain. The people at Rust feel the same.
let s1 = gives_ownership();
//Rust also lets us return multiple values as a tuple, which we can then destructure when
//we get the returned values.
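//A minimal sketch of the tuple-return pattern (hypothetical helper, kept commented out):
// fn takes_and_gives_back(s: String) -> (String, usize) {
//     let len = s.len();
//     (s, len) // hand ownership of s back along with the computed length
// }
// let (s1, len) = takes_and_gives_back(s1);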
//Now it's time to go over references and borrowing!
let s1 = String::from("hello");
//The & creates a reference to a variable. It can be thought of as a pointer to the original data.
//By doing this we do not pass ownership of the variable to the function
//Therefore when we go out of scope of the function we still have ownership of the variable
//where the function call was made.
//Having references as function parameters is called borrowing.
let len = calculate_length(&s1);
println!("The length of '{}' is {}.", s1, len);
//We cannot modify a borrowed variable.
//change(&s1);
let mut s1 = String::from("hello");
//We can fix this by making a mutable reference
//We also need to make sure that our variable we're passing in is also mutable.
change(&mut s1);
println!("{}", s1);
//You are only allowed one mutable reference to a particular piece of data in a particular scope.
//This ensures that we don't have any aliasing with our references referring to the same data.
//The benefit of having this restriction is that Rust can prevent data races at compile time.
//From the rust book
//Whew! We also cannot have a mutable reference while we have an immutable one.
//Users of an immutable reference don’t expect the values to suddenly change out from under them!
//However, multiple immutable references are okay because no one who is just reading the data has
//the ability to affect anyone else’s reading of the data.
//let mut s = String::from("Hello");
//let r1 = &s; //Immutable reference
//let r2 = &s; //Immutable reference
//let r3 = &s; //Mutable reference -- big no no
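//A small sketch of satisfying the one-mutable-reference rule with an inner scope:
// let mut s = String::from("Hello");
// {
//     let r1 = &mut s;
// } // r1 goes out of scope here, so we may make a new mutable reference
// let r2 = &mut s;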
//The compiler does not allow dangling pointers/references. It will therefore error out on us.
// let reference_to_nothing = dangle();
//We are now going to go over slices.
//From the rust book: Another data type that does not have ownership is the slice.
//Slices let you reference a contiguous sequence of elements in a collection rather than the whole collection.
// let mut s = String::from("hello world");
// let word = first_word(&s); // word will get the value 5.
// s.clear(); // This empties the String, making it equal to "".
// word still has the value 5 here, but there's no more string that
// we could meaningfully use the value 5 with. word is now totally invalid!
//The index we got is now completely out of sync with our original string.
//If we end up having more indices we could get even more out of sync with our data.
//For strings we can take advantage of a built in feature called string slices.
//They create a reference to portions of a string.
let s = String::from("hello world");
//Slicing is similar to slicing in Python: you give a starting index, and
//the ending index is one past the last element you actually care about (exclusive).
let hello = &s[0..5];
// let hello = &s[..5]; //Equivalent to the above
let world = &s[6..11];
// let world = &s[6..]; //Equivalent to the above
let len = s.len();
let slice = &s[0..len];
// let slice = &s[..]; //Equivalent to the above
// We now have a straightforward API that’s much harder to mess up, since the compiler will
//ensure the references into the String remain valid. Remember the bug in the program in Listing 4-11,
//when we got the index to the end of the first word but then cleared the string so our index was invalid?
//That code was logically incorrect but didn’t show any immediate errors. The problems would show up later
//if we kept trying to use the first word index with an emptied string. Slices make this bug impossible
//and let us know we have a problem with our code much sooner. Using the slice version of first_word
//will throw a compile time error:
// let mut s = String::from("hello world");
// let word = first_word(&s);
// s.clear(); // Error!
// Recall from the borrowing rules that if we have an immutable reference to something, we cannot also
// take a mutable reference. Because clear needs to truncate the String, it tries to take a mutable reference,
// which fails. Not only has Rust made our API easier to use, but it has also eliminated an entire class of errors
// at compile time!
let s = "Hello, world!";
// The type of s here is &str: it’s a slice pointing to that specific point of the binary. This is also why string
// literals are immutable; &str is an immutable reference.
let my_string = String::from("hello world");
// first_word works on slices of `String`s
let word = first_word(&my_string[..]);
let my_string_literal = "hello world";
// first_word works on slices of string literals
let word = first_word(&my_string_literal[..]);
// since string literals *are* string slices already,
// this works too, without the slice syntax!
let word = first_word(my_string_literal);
let a = [1, 2, 3, 4, 5];
let slice = &a[1..3];
// This slice has the type &[i32]. It works the same way as string slices do, by storing a reference to the
// first element and a length. You’ll use this kind of slice for all sorts of other collections. We’ll discuss
// these collections in detail when we talk about vectors in Chapter 8
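//For example, the slice above borrows the elements at indices 1 and 2:
assert_eq!(slice, &[2, 3]);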
} // Here, x goes out of scope, then s. But since s's value was moved, nothing
// special happens.
fn takes_ownership(some_string: String) { // some_string comes into scope.
println!("{}", some_string);
} // Here, some_string goes out of scope and `drop` is called. The backing
// memory is freed.
fn makes_copy(some_integer: i32) { // some_integer comes into scope.
println!("{}", some_integer);
} // Here, some_integer goes out of scope. Nothing special happens.
//Tell what type the function will return
fn gives_ownership() -> String { // gives_ownership will move its
// return value into the function
// that calls it.
let some_string = String::from("hello"); // some_string comes into scope.
some_string // some_string is returned and
// moves out to the calling
// function.
}
fn calculate_length(s: &String) -> usize {
s.len()
}
//This function will error on us since we are trying to
//modify a borrowed variable. We will always get an
//error for this function even if we never call it.
// fn change(some_string: &String) {
// some_string.push_str(", world");
// }
//This fixes the above code by making a mutable reference that we can now modify.
fn change(some_string: &mut String) {
some_string.push_str(", world");
}
//The below code creates a dangling pointer/reference.
//So when the data goes out of scope at the end of the function
//our reference now points to memory that has been freed.
//The compiler catches this and errors out on us.
// fn dangle() -> &String {
// let s = String::from("hello");
// &s
// }
//This version doesn't create slices of the data, so the returned index can get out of sync with the string
//We are going to rewrite it with a new version
// fn first_word(s: &String) -> usize {
// //We are converting our string into an array of bytes
// let bytes = s.as_bytes();
// //We now iterate through the string using iter.
// //the enumerate function packages up each part of the
// //iterator as a tuple with an index and a reference to the value
// for (i, &item) in bytes.iter().enumerate() {
// //We check to see if the byte literal of the space is
// //equal to our item.
// //If it is then we return that index.
// if item == b' ' {
// return i;
// }
// }
// //If we don't run across a space at all then we return the length of the string.
// s.len()
// }
//We can change the following to the current function signature
// fn first_word(s: &String) -> &str {
//The new signature now allows us to operate on both Strings and str types
fn first_word(s: &str) -> &str {
let bytes = s.as_bytes();
for (i, &item) in bytes.iter().enumerate() {
if item == b' ' {
return &s[0..i];
}
}
&s[..]
}

mod.rs
// Copyright 2015-2019 Parity Technologies (UK) Ltd.
// This file is part of Parity Ethereum.
// Parity Ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity Ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity Ethereum. If not, see <http://www.gnu.org/licenses/>.
//! Utilities and helpers for transaction dispatch.
pub(crate) mod light;
mod full;
mod prospective_signer;
#[cfg(any(test, feature = "accounts"))]
mod signing;
#[cfg(not(any(test, feature = "accounts")))]
mod signing {
use super::*;
use v1::helpers::errors;
/// Dummy signer implementation
#[derive(Debug, Clone)]
pub struct Signer;
impl Signer {
/// Create new instance of dummy signer (accept any AccountProvider)
pub fn new<T>(_ap: T) -> Self {
Signer
}
}
impl super::Accounts for Signer {
fn sign_transaction(&self, _filled: FilledTransactionRequest, _chain_id: Option<u64>, _nonce: U256, _password: SignWith) -> Result<WithToken<SignedTransaction>> {
Err(errors::account("Signing unsupported", "See #9997"))
}
fn sign_message(&self, _address: Address, _password: SignWith, _hash: SignMessage) -> Result<WithToken<Signature>> {
Err(errors::account("Signing unsupported", "See #9997"))
}
fn decrypt(&self, _address: Address, _password: SignWith, _data: Bytes) -> Result<WithToken<Bytes>> {
Err(errors::account("Signing unsupported", "See #9997"))
}
fn supports_prospective_signing(&self, _address: &Address, _password: &SignWith) -> bool {
false
}
fn default_account(&self) -> Address {
Default::default()
}
fn is_unlocked(&self, _address: &Address) -> bool {
false
}
}
}
pub use self::light::LightDispatcher;
pub use self::full::FullDispatcher;
pub use self::signing::Signer;
pub use v1::helpers::nonce::Reservations;
use std::fmt::Debug;
use std::ops::Deref;
use std::sync::Arc;
use bytes::Bytes;
use client_traits::BlockChainClient;
use ethcore::miner::MinerService;
use ethereum_types::{H520, H256, U256, Address};
use ethkey::{Password, Signature};
use hash::keccak;
use types::transaction::{SignedTransaction, PendingTransaction};
use jsonrpc_core::{BoxFuture, Result, Error};
use jsonrpc_core::futures::{future, Future, IntoFuture};
use v1::helpers::{TransactionRequest, FilledTransactionRequest, ConfirmationPayload};
use v1::types::{
Bytes as RpcBytes,
RichRawTransaction as RpcRichRawTransaction,
ConfirmationPayload as RpcConfirmationPayload,
ConfirmationResponse,
EthSignRequest as RpcEthSignRequest,
EIP191SignRequest as RpcSignRequest,
DecryptRequest as RpcDecryptRequest,
};
/// Has the capability to dispatch, sign, and decrypt.
///
/// Requires a clone implementation, with the implication that it be cheap;
/// usually just bumping a reference count or two.
pub trait Dispatcher: Send + Sync + Clone {
// TODO: when ATC exist, use zero-cost
// type Out<T>: IntoFuture<T, Error>
/// Fill optional fields of a transaction request, fetching gas price but not nonce.
fn fill_optional_fields(&self, request: TransactionRequest, default_sender: Address, force_nonce: bool)
-> BoxFuture<FilledTransactionRequest>;
/// Sign the given transaction request without dispatching, fetching appropriate nonce.
fn sign<P>(
&self,
filled: FilledTransactionRequest,
signer: &Arc<dyn Accounts>,
password: SignWith,
post_sign: P,
) -> BoxFuture<P::Item> where
P: PostSign + 'static,
<P::Out as futures::future::IntoFuture>::Future: Send;
/// Converts a `SignedTransaction` into `RichRawTransaction`
fn enrich(&self, SignedTransaction) -> RpcRichRawTransaction;
/// "Dispatch" a local transaction.
fn dispatch_transaction(&self, signed_transaction: PendingTransaction) -> Result<H256>;
}
/// Payload to sign
pub enum SignMessage {
/// Eth-sign kind data (requires prefixing)
Data(Bytes),
/// Prefixed data hash
Hash(H256),
}
/// Abstract transaction signer.
///
/// NOTE This signer is semi-correct, it's a temporary measure to avoid moving too much code.
/// If accounts are ultimately removed all password-dealing endpoints will be wiped out.
pub trait Accounts: Send + Sync {
/// Sign given filled transaction request for the specified chain_id.
fn sign_transaction(&self, filled: FilledTransactionRequest, chain_id: Option<u64>, nonce: U256, password: SignWith) -> Result<WithToken<SignedTransaction>>;
/// Sign given message.
fn sign_message(&self, address: Address, password: SignWith, hash: SignMessage) -> Result<WithToken<Signature>>;
/// Decrypt given message.
fn decrypt(&self, address: Address, password: SignWith, data: Bytes) -> Result<WithToken<Bytes>>;
/// Returns `true` if the accounts can sign multiple times.
fn supports_prospective_signing(&self, address: &Address, password: &SignWith) -> bool;
/// Returns default account.
fn default_account(&self) -> Address;
/// Returns true if account is unlocked (i.e. can sign without a password)
fn is_unlocked(&self, address: &Address) -> bool;
}
/// Action to execute after signing,
/// e.g. importing a transaction into the chain
pub trait PostSign: Send {
/// item that this PostSign returns
type Item: Send;
/// in case you need to perform async PostSign actions
type Out: IntoFuture<Item = Self::Item, Error = Error> + Send;
/// perform an action with the signed transaction
fn execute(self, signer: WithToken<SignedTransaction>) -> Self::Out;
}
impl PostSign for () {
type Item = WithToken<SignedTransaction>;
type Out = Result<Self::Item>;
fn execute(self, signed: WithToken<SignedTransaction>) -> Self::Out {
Ok(signed)
}
}
impl<F: Send, T: Send> PostSign for F
where F: FnOnce(WithToken<SignedTransaction>) -> Result<T>
{
type Item = T;
type Out = Result<Self::Item>;
fn execute(self, signed: WithToken<SignedTransaction>) -> Self::Out {
(self)(signed)
}
}
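// A minimal sketch of a closure acting as a `PostSign` action (hypothetical usage,
// kept commented out; the real call sites live in the dispatcher implementations):
//
// let post_sign = |signed: WithToken<SignedTransaction>| {
//     let (tx, token) = signed.into_tuple();
//     Ok((tx.hash(), token))
// };
// dispatcher.sign(filled, &signer, SignWith::Nothing, post_sign);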
/// Single-use account token.
pub type AccountToken = Password;
/// Values used to unlock accounts for signing.
#[derive(Clone, PartialEq)]
pub enum SignWith {
/// Nothing -- implies the account is already unlocked.
Nothing,
/// Unlock with password.
Password(Password),
/// Unlock with single-use token.
Token(AccountToken),
}
impl SignWith {
#[cfg(any(test, feature = "accounts"))]
fn is_password(&self) -> bool {
if let SignWith::Password(_) = *self {
true
} else {
false
}
}
}
/// A value, potentially accompanied by a signing token.
pub enum WithToken<T> {
/// No token.
No(T),
/// With token.
Yes(T, AccountToken),
}
impl<T: Debug> Deref for WithToken<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
match *self {
WithToken::No(ref v) => v,
WithToken::Yes(ref v, _) => v,
}
}
}
impl<T: Debug> WithToken<T> {
/// Map the value with the given closure, preserving the token.
pub fn map<S, F>(self, f: F) -> WithToken<S> where
S: Debug,
F: FnOnce(T) -> S,
{
match self {
WithToken::No(v) => WithToken::No(f(v)),
WithToken::Yes(v, token) => WithToken::Yes(f(v), token),
}
}
/// Convert into inner value, ignoring possible token.
pub fn into_value(self) -> T {
match self {
WithToken::No(v) => v,
WithToken::Yes(v, _) => v,
}
}
/// Convert the `WithToken` into a tuple.
pub fn into_tuple(self) -> (T, Option<AccountToken>) {
match self {
WithToken::No(v) => (v, None),
WithToken::Yes(v, token) => (v, Some(token))
}
}
}
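// A small sketch of `WithToken` in use (hypothetical values; assumes `token: AccountToken`
// is in scope). Kept commented out:
//
// let with = WithToken::Yes(1u32, token);
// let doubled = with.map(|n| n * 2); // the token is preserved across the map
// let (value, maybe_token) = doubled.into_tuple(); // (2, Some(token))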
impl<T: Debug> From<(T, AccountToken)> for WithToken<T> {
fn from(tuple: (T, AccountToken)) -> Self {
WithToken::Yes(tuple.0, tuple.1)
}
}
impl<T: Debug> From<(T, Option<AccountToken>)> for WithToken<T> {
fn from(tuple: (T, Option<AccountToken>)) -> Self {
match tuple.1 {
Some(token) => WithToken::Yes(tuple.0, token),
None => WithToken::No(tuple.0),
}
}
}
/// Execute a confirmation payload.
pub fn execute<D: Dispatcher + 'static>(
dispatcher: D,
signer: &Arc<dyn Accounts>,
payload: ConfirmationPayload,
pass: SignWith
) -> BoxFuture<WithToken<ConfirmationResponse>> {
match payload {
ConfirmationPayload::SendTransaction(request) => {
let condition = request.condition.clone().map(Into::into);
let cloned_dispatcher = dispatcher.clone();
let post_sign = move |with_token_signed: WithToken<SignedTransaction>| {
let (signed, token) = with_token_signed.into_tuple();
let signed_transaction = PendingTransaction::new(signed, condition);
cloned_dispatcher.dispatch_transaction(signed_transaction)
.map(|hash| (hash, token))
};
Box::new(
dispatcher.sign(request, &signer, pass, post_sign).map(|(hash, token)| {
WithToken::from((ConfirmationResponse::SendTransaction(hash), token))
})
)
},
ConfirmationPayload::SignTransaction(request) => {
Box::new(dispatcher.sign(request, &signer, pass, ())
.map(move |result| result
.map(move |tx| dispatcher.enrich(tx))
.map(ConfirmationResponse::SignTransaction)
))
},
ConfirmationPayload::EthSignMessage(address, data) => {
let res = signer.sign_message(address, pass, SignMessage::Data(data))
.map(|result| result
.map(|s| H520(s.into_electrum()))
.map(ConfirmationResponse::Signature)
);
Box::new(future::done(res))
},
ConfirmationPayload::SignMessage(address, data) => {
let res = signer.sign_message(address, pass, SignMessage::Hash(data))
.map(|result| result
.map(|rsv| H520(rsv.into_electrum()))
.map(ConfirmationResponse::Signature)
);
Box::new(future::done(res))
},
ConfirmationPayload::Decrypt(address, data) => {
let res = signer.decrypt(address, pass, data)
.map(|result| result
.map(RpcBytes)
.map(ConfirmationResponse::Decrypt)
);
Box::new(future::done(res))
},
}
}
/// Returns an eth_sign-compatible hash of data to sign.
/// The data is prepended with a special message to prevent
/// malicious DApps from using the function to sign forged transactions.
pub fn eth_data_hash(mut data: Bytes) -> H256 {
let mut message_data =
format!("\x19Ethereum Signed Message:\n{}", data.len())
.into_bytes();
message_data.append(&mut data);
keccak(message_data)
}
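// For example, signing the five-byte message "hello" actually hashes the bytes of
// "\x19Ethereum Signed Message:\n5hello":
//
// let hash = eth_data_hash(b"hello".to_vec());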
/// Extract the default gas price from a client and miner.
pub fn default_gas_price<C, M>(client: &C, miner: &M, percentile: usize) -> U256 where
C: BlockChainClient,
M: MinerService,
{
client.gas_price_corpus(100).percentile(percentile).cloned().unwrap_or_else(|| miner.sensible_gas_price())
}
/// Convert RPC confirmation payload to signer confirmation payload.
/// May need to resolve in the future to fetch things like gas price.
pub fn from_rpc<D>(payload: RpcConfirmationPayload, default_account: Address, dispatcher: &D) -> BoxFuture<ConfirmationPayload>
where D: Dispatcher
{
match payload {
RpcConfirmationPayload::SendTransaction(request) => {
Box::new(dispatcher.fill_optional_fields(request.into(), default_account, false)
.map(ConfirmationPayload::SendTransaction))
},
RpcConfirmationPayload::SignTransaction(request) => {
Box::new(dispatcher.fill_optional_fields(request.into(), default_account, false)
.map(ConfirmationPayload::SignTransaction))
},
RpcConfirmationPayload::Decrypt(RpcDecryptRequest { address, msg }) => {
Box::new(future::ok(ConfirmationPayload::Decrypt(address, msg.into())))
},
RpcConfirmationPayload::EthSignMessage(RpcEthSignRequest { address, data }) => {
Box::new(future::ok(ConfirmationPayload::EthSignMessage(address, data.into())))
},
RpcConfirmationPayload::EIP191SignMessage(RpcSignRequest { address, data }) => {
Box::new(future::ok(ConfirmationPayload::SignMessage(address, data)))
},
}
} | // (at your option) any later version. | random_line_split |

mod.rs
use std::collections::{HashMap, HashSet};
use std::process::exit;
use ansi_term::Colour::{Red, Green, Cyan};
use petgraph::{
graph::{EdgeIndex, NodeIndex},
Directed, Graph, Incoming, Outgoing,
};
/// The edge of a DelfGraph is a DelfEdge
pub mod edge;
/// The node of a DelfGraph is a DelfObject
pub mod object;
use crate::storage::{get_connection, DelfStorageConnection};
use crate::DelfYamls;
/// The DelfGraph is the core structure for delf's functionality. It contains the algorithm to traverse the graph, as well as metadata to perform the deletions.
#[derive(Debug)]
pub struct DelfGraph {
pub(crate) nodes: HashMap<String, NodeIndex>,
pub(crate) edges: HashMap<String, EdgeIndex>,
graph: Graph<object::DelfObject, edge::DelfEdge, Directed>,
storages: HashMap<String, Box<dyn DelfStorageConnection>>,
}
impl DelfGraph {
/// Create a new DelfGraph from a schema and a config. See [yaml_rust](../../yaml_rust/index.html) for information on creating the Yaml structs, or alternatively use the helper functions: [read_files](../fn.read_files.html), [read_yamls](../fn.read_yamls.html) for constructing a DelfGraph from either paths or `&str` of yaml.
pub fn new(yamls: &DelfYamls) -> DelfGraph {
let schema = &yamls.schema;
let config = &yamls.config;
let mut edges_to_insert = Vec::new();
let mut nodes = HashMap::<String, NodeIndex>::new();
let mut edges = HashMap::<String, EdgeIndex>::new();
let mut graph = Graph::<object::DelfObject, edge::DelfEdge>::new();
// each yaml is an object
for yaml in schema.iter() {
let obj_name = String::from(yaml["object_type"]["name"].as_str().unwrap());
let obj_node = object::DelfObject::from(&yaml["object_type"]);
let node_id = graph.add_node(obj_node);
nodes.insert(obj_name.clone(), node_id);
// need to make sure all the nodes exist before edges can be added to the graph
for e in yaml["object_type"]["edge_types"].as_vec().unwrap().iter() {
let delf_edge = edge::DelfEdge::from(e);
edges_to_insert.push((obj_name.clone(), delf_edge));
}
}
// add all the edges to the graph
for (from, e) in edges_to_insert.iter_mut() {
if !nodes.contains_key(&e.to.object_type) {
eprintln!("Error creating edge {:#?}: No object with name {:#?}", e.name, e.to.object_type);
exit(1);
}
let edge_id = graph.add_edge(nodes[from], nodes[&e.to.object_type], e.clone());
edges.insert(String::from(&e.name), edge_id);
}
// create the storage map
let mut storages = HashMap::<String, Box<dyn DelfStorageConnection>>::new();
for yaml in config.iter() {
for storage in yaml["storages"].as_vec().unwrap().iter() {
let storage_name = String::from(storage["name"].as_str().unwrap());
storages.insert(
storage_name,
get_connection(
storage["plugin"].as_str().unwrap(),
storage["url"].as_str().unwrap(),
),
);
}
}
return DelfGraph {
nodes,
edges,
graph,
storages,
};
}
/// Pretty print the graph's contents.
pub fn print(&self) {
println!("{:#?}", self.graph);
}
/// Given an edge name, get the corresponding DelfEdge
pub fn get_edge(&self, edge_name: &String) -> &edge::DelfEdge {
let edge_id = self.edges.get(edge_name).unwrap();
return self.graph.edge_weight(*edge_id).unwrap();
}
/// Given an edge name and the ids of the to/from object instances, delete the edge
pub fn delete_edge(&self, edge_name: &String, from_id: &String, to_id: &String) {
let e = self.get_edge(edge_name);
e.delete_one(from_id, to_id, self);
}
/// Given an object name, get the corresponding DelfObject
pub fn get_object(&self, object_name: &String) -> &object::DelfObject {
let object_id = self.nodes.get(object_name).unwrap();
return self.graph.node_weight(*object_id).unwrap();
}
/// Given the object name and the id of the instance, delete the object
pub fn delete_object(&self, object_name: &String, id: &String) {
self._delete_object(object_name, id, None);
}
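// A minimal usage sketch (hypothetical object name and id; assumes `yamls` was built
// with the read_files/read_yamls helpers). Kept commented out:
//
// let graph = DelfGraph::new(&yamls);
// graph.validate();
// graph.delete_object(&"user".to_string(), &"1234".to_string());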
fn _delete_object(
&self,
object_name: &String,
id: &String,
from_edge: Option<&edge::DelfEdge>,
) {
let obj = self.get_object(object_name);
let deleted = obj.delete(id, from_edge, &self.storages);
if deleted {
let edges = self.graph.edges_directed(self.nodes[&obj.name], Outgoing);
for e in edges {
e.weight().delete_all(id, &obj.id_type, self);
}
}
}
/// Validate that the objects and edges described in the schema exist in the corresponding storage as expected. Additionally, ensure that all objects in the graph are reachable by traversal via `deep` or `refcount` edges starting at an object with deletion type of `directly`, `directly_only`, `short_ttl`, or `not_deleted`. This ensures that all objects are deletable and accounted for.
pub fn validate(&self) {
println!("\u{1f50d} {}", Cyan.bold().paint("Validating DelF graph..."));
let mut errs = Vec::new();
let mut passed = true;
for (_, node_id) in self.nodes.iter() {
match self.graph
.node_weight(*node_id)
.unwrap()
.validate(&self.storages) {
Err(e) => errs.push(e),
_ => ()
}
}
if errs.len() > 0 {
passed = false;
println!("\u{274c} {}", Red.paint("Not all objects found in storage"));
for err in errs.drain(..) {
println!(" {}", err);
}
} else {
println!("\u{2705} {}", Green.paint("Objects exist in storage"));
}
for (_, edge_id) in self.edges.iter() {
match self.graph.edge_weight(*edge_id).unwrap().validate(self) {
Err(e) => errs.push(e),
_ => ()
}
}
if errs.len() > 0 {
passed = false;
println!("\u{274c} {}", Red.paint("Not all edges found in storage"));
for err in errs.drain(..) {
println!(" {}", err);
}
} else {
println!("\u{2705} {}", Green.paint("Edges exist in storage"));
}
match self.reachability_analysis() {
Err(e) => errs.push(e),
_ => ()
}
if errs.len() > 0 {
passed = false;
println!("\u{274c} {}", Red.paint("Not all objects deletable"));
for err in errs.drain(..) {
println!(" {}", err);
}
} else {
println!("\u{2705} {}", Green.paint("All objects deletable"));
}
if passed {
println!("\u{1F680} {} \u{1F680}", Green.bold().paint("Validation successful!"));
} else {
println!("\u{26a0} {} \u{26a0}", Red.bold().paint("Validation errors found"));
}
}
// Starting from a directly deletable (or excepted) node, ensure all nodes are reached.
fn reachability_analysis(&self) -> Result<(), String> {
let mut visited_nodes = HashSet::new();
for (_, node_id) in self.nodes.iter() {
let obj = self.graph.node_weight(*node_id).unwrap();
match obj.deletion {
object::DeleteType::ShortTTL
| object::DeleteType::Directly
| object::DeleteType::DirectlyOnly
| object::DeleteType::NotDeleted => {
// this object is a starting point in traversal, start traversal
self.visit_node(&obj.name, &mut visited_nodes);
}
_ => (),
}
}
if visited_nodes.len()!= self.nodes.len() {
let node_set: HashSet<String> = self.nodes.keys().cloned().collect();
return Err(format!(
"Not all objects are deletable: {:?}",
node_set.difference(&visited_nodes)
));
} else {
return Ok(());
}
}
// Recursively visit all unvisited nodes that are connected via deep or refcount edges from the starting node with the passed-in name
fn visit_node(&self, name: &String, visited_nodes: &mut HashSet<String>) {
visited_nodes.insert(name.clone());
let edges = self.graph.edges_directed(self.nodes[name], Outgoing);
for e in edges {
let ew = e.weight();
match ew.deletion {
edge::DeleteType::Deep | edge::DeleteType::RefCount => {
if !visited_nodes.contains(&ew.to.object_type) {
self.visit_node(&ew.to.object_type, visited_nodes);
}
}
_ => (),
}
}
}
// find all the inbound edges for a given object
fn get_inbound_edges(&self, obj: &object::DelfObject) -> Vec<&edge::DelfEdge> {
let object_id = self.nodes.get(&obj.name).unwrap();
let edges = self.graph.edges_directed(*object_id, Incoming);
let mut res = Vec::new();
for edge in edges {
res.push(edge.weight());
}
return res;
}
/// Check all objects in the DelfGraph with the deletion type of `short_ttl` if there are instances of the object which are past their expiration time. If so, delete the objects.
pub fn check_short_ttl(&self) {
for (_, node_id) in self.nodes.iter() {
let obj = self.graph.node_weight(*node_id).unwrap();
for obj_id in obj.check_short_ttl(&self.storages).iter() {
self.delete_object(&obj.name, obj_id);
}
}
}
}
mod.rs | use std::collections::{HashMap, HashSet};
use std::process::exit;
use ansi_term::Colour::{Red, Green, Cyan};
use petgraph::{
graph::{EdgeIndex, NodeIndex},
Directed, Graph, Incoming, Outgoing,
};
/// The edge of a DelfGraph is a DelfEdge
pub mod edge;
/// The node of a DelfGraph is a DelfObject
pub mod object;
use crate::storage::{get_connection, DelfStorageConnection};
use crate::DelfYamls;
/// The DelfGraph is the core structure for delf's functionality. It contains the algorithm to traverse the graph, as well as metadata to perform the deletions.
#[derive(Debug)]
pub struct DelfGraph {
pub(crate) nodes: HashMap<String, NodeIndex>,
pub(crate) edges: HashMap<String, EdgeIndex>,
graph: Graph<object::DelfObject, edge::DelfEdge, Directed>,
storages: HashMap<String, Box<dyn DelfStorageConnection>>,
}
impl DelfGraph {
/// Create a new DelfGraph from a schema and a config. See [yaml_rust](../../yaml_rust/index.html) for information on creating the Yaml structs, or alternately use the helper functions: [read_files](../fn.read_files.html), [read_yamls](../fn.read_yamls.html) for constructing a DelfGraph from either paths or `&str` of yaml.
pub fn new(yamls: &DelfYamls) -> DelfGraph {
let schema = &yamls.schema;
let config = &yamls.config;
let mut edges_to_insert = Vec::new();
let mut nodes = HashMap::<String, NodeIndex>::new();
let mut edges = HashMap::<String, EdgeIndex>::new();
let mut graph = Graph::<object::DelfObject, edge::DelfEdge>::new();
// each yaml is an object
for yaml in schema.iter() {
let obj_name = String::from(yaml["object_type"]["name"].as_str().unwrap());
let obj_node = object::DelfObject::from(&yaml["object_type"]);
let node_id = graph.add_node(obj_node);
nodes.insert(obj_name.clone(), node_id);
// need to make sure all the nodes exist before edges can be added to the graph
for e in yaml["object_type"]["edge_types"].as_vec().unwrap().iter() {
let delf_edge = edge::DelfEdge::from(e);
edges_to_insert.push((obj_name.clone(), delf_edge));
}
}
// add all the edges to the graph
for (from, e) in edges_to_insert.iter_mut() {
            if !nodes.contains_key(&e.to.object_type) {
eprintln!("Error creating edge {:#?}: No object with name {:#?}", e.name, e.to.object_type);
exit(1);
}
let edge_id = graph.add_edge(nodes[from], nodes[&e.to.object_type], e.clone());
edges.insert(String::from(&e.name), edge_id);
}
// create the storage map
let mut storages = HashMap::<String, Box<dyn DelfStorageConnection>>::new();
for yaml in config.iter() {
for storage in yaml["storages"].as_vec().unwrap().iter() {
let storage_name = String::from(storage["name"].as_str().unwrap());
storages.insert(
storage_name,
get_connection(
storage["plugin"].as_str().unwrap(),
storage["url"].as_str().unwrap(),
),
);
}
}
return DelfGraph {
nodes,
edges,
graph,
storages,
};
}
/// Pretty print the graph's contents.
pub fn print(&self) {
println!("{:#?}", self.graph);
}
/// Given an edge name, get the corresponding DelfEdge
pub fn get_edge(&self, edge_name: &String) -> &edge::DelfEdge {
let edge_id = self.edges.get(edge_name).unwrap();
return self.graph.edge_weight(*edge_id).unwrap();
}
/// Given an edge name and the ids of the to/from object instances, delete the edge
pub fn delete_edge(&self, edge_name: &String, from_id: &String, to_id: &String) {
let e = self.get_edge(edge_name);
e.delete_one(from_id, to_id, self);
}
/// Given an object name, get the corresponding DelfObject
pub fn get_object(&self, object_name: &String) -> &object::DelfObject {
let object_id = self.nodes.get(object_name).unwrap();
return self.graph.node_weight(*object_id).unwrap();
}
/// Given the object name and the id of the instance, delete the object
pub fn delete_object(&self, object_name: &String, id: &String) {
self._delete_object(object_name, id, None);
}
fn _delete_object(
&self,
object_name: &String,
id: &String,
from_edge: Option<&edge::DelfEdge>,
) {
let obj = self.get_object(object_name);
let deleted = obj.delete(id, from_edge, &self.storages);
if deleted {
let edges = self.graph.edges_directed(self.nodes[&obj.name], Outgoing);
for e in edges {
e.weight().delete_all(id, &obj.id_type, self);
}
}
}
/// Validate that the objects and edges described in the schema exist in the corresponding storage as expected. Additionally, ensure that all objects in the graph are reachable by traversal via `deep` or `refcount` edges starting at an object with deletion type of `directly`, `directly_only`, `short_ttl`, or `not_deleted`. This ensures that all objects are deletable and accounted for.
pub fn validate(&self) {
println!("\u{1f50d} {}", Cyan.bold().paint("Validating DelF graph..."));
let mut errs = Vec::new();
let mut passed = true;
for (_, node_id) in self.nodes.iter() {
match self.graph
.node_weight(*node_id)
.unwrap()
.validate(&self.storages) {
Err(e) => errs.push(e),
_ => ()
}
}
if errs.len() > 0 {
passed = false;
println!("\u{274c} {}", Red.paint("Not all objects found in storage"));
for err in errs.drain(..) {
println!(" {}", err);
}
} else {
println!("\u{2705} {}", Green.paint("Objects exist in storage"));
}
for (_, edge_id) in self.edges.iter() {
match self.graph.edge_weight(*edge_id).unwrap().validate(self) {
Err(e) => errs.push(e),
_ => ()
}
}
if errs.len() > 0 {
passed = false;
println!("\u{274c} {}", Red.paint("Not all edges found in storage"));
for err in errs.drain(..) {
println!(" {}", err);
}
} else {
println!("\u{2705} {}", Green.paint("Edges exist in storage"));
}
match self.reachability_analysis() {
Err(e) => errs.push(e),
_ => ()
}
if errs.len() > 0 {
passed = false;
println!("\u{274c} {}", Red.paint("Not all objects deletable"));
for err in errs.drain(..) {
println!(" {}", err);
}
} else {
println!("\u{2705} {}", Green.paint("All objects deletable"));
}
if passed {
println!("\u{1F680} {} \u{1F680}", Green.bold().paint("Validation successful!"));
} else {
println!("\u{26a0} {} \u{26a0}", Red.bold().paint("Validation errors found"));
}
}
| for (_, node_id) in self.nodes.iter() {
let obj = self.graph.node_weight(*node_id).unwrap();
match obj.deletion {
object::DeleteType::ShortTTL
| object::DeleteType::Directly
| object::DeleteType::DirectlyOnly
| object::DeleteType::NotDeleted => {
// this object is a starting point in traversal, start traversal
self.visit_node(&obj.name, &mut visited_nodes);
}
_ => (),
}
}
        if visited_nodes.len() != self.nodes.len() {
let node_set: HashSet<String> = self.nodes.keys().cloned().collect();
return Err(format!(
"Not all objects are deletable: {:?}",
node_set.difference(&visited_nodes)
));
} else {
return Ok(());
}
}
    // Recursively visit all unvisited nodes connected via deep or refcount edges from the starting node with the passed-in name
fn visit_node(&self, name: &String, visited_nodes: &mut HashSet<String>) {
visited_nodes.insert(name.clone());
let edges = self.graph.edges_directed(self.nodes[name], Outgoing);
for e in edges {
let ew = e.weight();
match ew.deletion {
edge::DeleteType::Deep | edge::DeleteType::RefCount => {
                    if !visited_nodes.contains(&ew.to.object_type) {
self.visit_node(&ew.to.object_type, visited_nodes);
}
}
_ => (),
}
}
}
// find all the inbound edges for a given object
fn get_inbound_edges(&self, obj: &object::DelfObject) -> Vec<&edge::DelfEdge> {
let object_id = self.nodes.get(&obj.name).unwrap();
let edges = self.graph.edges_directed(*object_id, Incoming);
let mut res = Vec::new();
for edge in edges {
res.push(edge.weight());
}
return res;
}
    /// Check every object in the DelfGraph whose deletion type is `short_ttl` for instances past their expiration time, and delete any that are found.
pub fn check_short_ttl(&self) {
for (_, node_id) in self.nodes.iter() {
let obj = self.graph.node_weight(*node_id).unwrap();
for obj_id in obj.check_short_ttl(&self.storages).iter() {
self.delete_object(&obj.name, obj_id);
}
}
}
} | // Starting from a directly deletable (or excepted) node, ensure all nodes are reached.
fn reachability_analysis(&self) -> Result<(), String> {
let mut visited_nodes = HashSet::new(); | random_line_split |
mod.rs | use std::collections::{HashMap, HashSet};
use std::process::exit;
use ansi_term::Colour::{Red, Green, Cyan};
use petgraph::{
graph::{EdgeIndex, NodeIndex},
Directed, Graph, Incoming, Outgoing,
};
/// The edge of a DelfGraph is a DelfEdge
pub mod edge;
/// The node of a DelfGraph is a DelfObject
pub mod object;
use crate::storage::{get_connection, DelfStorageConnection};
use crate::DelfYamls;
/// The DelfGraph is the core structure for delf's functionality. It contains the algorithm to traverse the graph, as well as metadata to perform the deletions.
#[derive(Debug)]
pub struct DelfGraph {
pub(crate) nodes: HashMap<String, NodeIndex>,
pub(crate) edges: HashMap<String, EdgeIndex>,
graph: Graph<object::DelfObject, edge::DelfEdge, Directed>,
storages: HashMap<String, Box<dyn DelfStorageConnection>>,
}
impl DelfGraph {
    /// Create a new DelfGraph from a schema and a config. See [yaml_rust](../../yaml_rust/index.html) for information on creating the Yaml structs, or alternatively use the helper functions: [read_files](../fn.read_files.html), [read_yamls](../fn.read_yamls.html) for constructing a DelfGraph from either paths or `&str` of yaml.
pub fn new(yamls: &DelfYamls) -> DelfGraph | edges_to_insert.push((obj_name.clone(), delf_edge));
}
}
// add all the edges to the graph
for (from, e) in edges_to_insert.iter_mut() {
            if !nodes.contains_key(&e.to.object_type) {
eprintln!("Error creating edge {:#?}: No object with name {:#?}", e.name, e.to.object_type);
exit(1);
}
let edge_id = graph.add_edge(nodes[from], nodes[&e.to.object_type], e.clone());
edges.insert(String::from(&e.name), edge_id);
}
// create the storage map
let mut storages = HashMap::<String, Box<dyn DelfStorageConnection>>::new();
for yaml in config.iter() {
for storage in yaml["storages"].as_vec().unwrap().iter() {
let storage_name = String::from(storage["name"].as_str().unwrap());
storages.insert(
storage_name,
get_connection(
storage["plugin"].as_str().unwrap(),
storage["url"].as_str().unwrap(),
),
);
}
}
return DelfGraph {
nodes,
edges,
graph,
storages,
};
}
/// Pretty print the graph's contents.
pub fn print(&self) {
println!("{:#?}", self.graph);
}
/// Given an edge name, get the corresponding DelfEdge
pub fn get_edge(&self, edge_name: &String) -> &edge::DelfEdge {
let edge_id = self.edges.get(edge_name).unwrap();
return self.graph.edge_weight(*edge_id).unwrap();
}
/// Given an edge name and the ids of the to/from object instances, delete the edge
pub fn delete_edge(&self, edge_name: &String, from_id: &String, to_id: &String) {
let e = self.get_edge(edge_name);
e.delete_one(from_id, to_id, self);
}
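    // Illustrative sketch (not part of the original crate): deleting one
    // concrete edge instance. The edge name and ids are assumed examples
    // from a hypothetical schema, not names defined above.
    #[allow(dead_code)]
    fn example_delete_edge(graph: &DelfGraph) {
        graph.delete_edge(
            &String::from("authored"), // an edge name from the schema
            &String::from("1"),        // id of the `from` instance
            &String::from("42"),       // id of the `to` instance
        );
    }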
/// Given an object name, get the corresponding DelfObject
pub fn get_object(&self, object_name: &String) -> &object::DelfObject {
let object_id = self.nodes.get(object_name).unwrap();
return self.graph.node_weight(*object_id).unwrap();
}
/// Given the object name and the id of the instance, delete the object
pub fn delete_object(&self, object_name: &String, id: &String) {
self._delete_object(object_name, id, None);
}
fn _delete_object(
&self,
object_name: &String,
id: &String,
from_edge: Option<&edge::DelfEdge>,
) {
let obj = self.get_object(object_name);
let deleted = obj.delete(id, from_edge, &self.storages);
if deleted {
let edges = self.graph.edges_directed(self.nodes[&obj.name], Outgoing);
for e in edges {
e.weight().delete_all(id, &obj.id_type, self);
}
}
}
/// Validate that the objects and edges described in the schema exist in the corresponding storage as expected. Additionally, ensure that all objects in the graph are reachable by traversal via `deep` or `refcount` edges starting at an object with deletion type of `directly`, `directly_only`, `short_ttl`, or `not_deleted`. This ensures that all objects are deletable and accounted for.
pub fn validate(&self) {
println!("\u{1f50d} {}", Cyan.bold().paint("Validating DelF graph..."));
let mut errs = Vec::new();
let mut passed = true;
for (_, node_id) in self.nodes.iter() {
match self.graph
.node_weight(*node_id)
.unwrap()
.validate(&self.storages) {
Err(e) => errs.push(e),
_ => ()
}
}
if errs.len() > 0 {
passed = false;
println!("\u{274c} {}", Red.paint("Not all objects found in storage"));
for err in errs.drain(..) {
println!(" {}", err);
}
} else {
println!("\u{2705} {}", Green.paint("Objects exist in storage"));
}
for (_, edge_id) in self.edges.iter() {
match self.graph.edge_weight(*edge_id).unwrap().validate(self) {
Err(e) => errs.push(e),
_ => ()
}
}
if errs.len() > 0 {
passed = false;
println!("\u{274c} {}", Red.paint("Not all edges found in storage"));
for err in errs.drain(..) {
println!(" {}", err);
}
} else {
println!("\u{2705} {}", Green.paint("Edges exist in storage"));
}
match self.reachability_analysis() {
Err(e) => errs.push(e),
_ => ()
}
if errs.len() > 0 {
passed = false;
println!("\u{274c} {}", Red.paint("Not all objects deletable"));
for err in errs.drain(..) {
println!(" {}", err);
}
} else {
println!("\u{2705} {}", Green.paint("All objects deletable"));
}
if passed {
println!("\u{1F680} {} \u{1F680}", Green.bold().paint("Validation successful!"));
} else {
println!("\u{26a0} {} \u{26a0}", Red.bold().paint("Validation errors found"));
}
}
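    // Illustrative sketch (not part of the original crate): the typical
    // construct-then-validate flow, assuming the yamls were already loaded,
    // e.g. via the `read_files` helper mentioned in the docs for `new`.
    #[allow(dead_code)]
    fn example_validate(yamls: &DelfYamls) {
        let graph = DelfGraph::new(yamls);
        graph.validate(); // prints per-check results and an overall verdict
    }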
    // Starting from a directly deletable (or excepted) node, ensure all nodes are reached.
fn reachability_analysis(&self) -> Result<(), String> {
let mut visited_nodes = HashSet::new();
for (_, node_id) in self.nodes.iter() {
let obj = self.graph.node_weight(*node_id).unwrap();
match obj.deletion {
object::DeleteType::ShortTTL
| object::DeleteType::Directly
| object::DeleteType::DirectlyOnly
| object::DeleteType::NotDeleted => {
// this object is a starting point in traversal, start traversal
self.visit_node(&obj.name, &mut visited_nodes);
}
_ => (),
}
}
        if visited_nodes.len() != self.nodes.len() {
let node_set: HashSet<String> = self.nodes.keys().cloned().collect();
return Err(format!(
"Not all objects are deletable: {:?}",
node_set.difference(&visited_nodes)
));
} else {
return Ok(());
}
}
    // Recursively visit all unvisited nodes connected via deep or refcount edges from the starting node with the passed-in name
fn visit_node(&self, name: &String, visited_nodes: &mut HashSet<String>) {
visited_nodes.insert(name.clone());
let edges = self.graph.edges_directed(self.nodes[name], Outgoing);
for e in edges {
let ew = e.weight();
match ew.deletion {
edge::DeleteType::Deep | edge::DeleteType::RefCount => {
                    if !visited_nodes.contains(&ew.to.object_type) {
self.visit_node(&ew.to.object_type, visited_nodes);
}
}
_ => (),
}
}
}
// find all the inbound edges for a given object
fn get_inbound_edges(&self, obj: &object::DelfObject) -> Vec<&edge::DelfEdge> {
let object_id = self.nodes.get(&obj.name).unwrap();
let edges = self.graph.edges_directed(*object_id, Incoming);
let mut res = Vec::new();
for edge in edges {
res.push(edge.weight());
}
return res;
}
    /// Check every object in the DelfGraph whose deletion type is `short_ttl` for instances past their expiration time, and delete any that are found.
pub fn check_short_ttl(&self) {
for (_, node_id) in self.nodes.iter() {
let obj = self.graph.node_weight(*node_id).unwrap();
for obj_id in obj.check_short_ttl(&self.storages).iter() {
self.delete_object(&obj.name, obj_id);
}
}
}
}
| {
let schema = &yamls.schema;
let config = &yamls.config;
let mut edges_to_insert = Vec::new();
let mut nodes = HashMap::<String, NodeIndex>::new();
let mut edges = HashMap::<String, EdgeIndex>::new();
let mut graph = Graph::<object::DelfObject, edge::DelfEdge>::new();
// each yaml is an object
for yaml in schema.iter() {
let obj_name = String::from(yaml["object_type"]["name"].as_str().unwrap());
let obj_node = object::DelfObject::from(&yaml["object_type"]);
let node_id = graph.add_node(obj_node);
nodes.insert(obj_name.clone(), node_id);
// need to make sure all the nodes exist before edges can be added to the graph
for e in yaml["object_type"]["edge_types"].as_vec().unwrap().iter() {
let delf_edge = edge::DelfEdge::from(e); | identifier_body |
context.rs | // Copyright (C) 2018-2020 Sebastian Dröge <[email protected]>
// Copyright (C) 2019-2022 François Laignel <[email protected]>
//
// Take a look at the license at the top of the repository in the LICENSE file.
use futures::prelude::*;
use gst::glib::once_cell::sync::Lazy;
use std::collections::HashMap;
use std::io;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{self, Poll};
use std::time::Duration;
use super::{Handle, HandleWeak, JoinHandle, Scheduler, SubTaskOutput, TaskId};
use crate::runtime::RUNTIME_CAT;
// We are bound to using `sync` for the `runtime` `Mutex`es. Attempts to use `async` `Mutex`es
// lead to the following issues:
//
// * `CONTEXTS`: can't `spawn` a `Future` when called from a `Context` thread via `ffi`.
// * `timers`: can't automatically `remove` the timer from `BinaryHeap` because `async drop`
// is not available.
// * `task_queues`: can't `add` a pending task when called from a `Context` thread via `ffi`.
//
// Also, we want to be able to `acquire` a `Context` outside of an `async` context.
// These `Mutex`es must be `lock`ed for a short period.
static CONTEXTS: Lazy<Mutex<HashMap<Arc<str>, ContextWeak>>> =
Lazy::new(|| Mutex::new(HashMap::new()));
/// Blocks on `future` in one way or another if possible.
///
/// IO & time related `Future`s must be handled within their own [`Context`].
/// Wait for the result using a [`JoinHandle`] or a `channel`.
///
/// If there's currently an active `Context` with a task, then the future is only queued up as a
/// pending sub task for that task.
///
/// Otherwise the current thread is blocking and the passed in future is executed.
///
/// Note that you must not pass any futures here that wait for the currently active task in one way
/// or another as this would deadlock!
#[track_caller]
pub fn block_on_or_add_sub_task<Fut>(future: Fut) -> Option<Fut::Output>
where
    Fut: Future + Send + 'static,
    Fut::Output: Send + 'static,
{
if let Some((cur_context, cur_task_id)) = Context::current_task() {
gst::debug!(
RUNTIME_CAT,
"Adding subtask to task {:?} on context {}",
cur_task_id,
cur_context.name()
);
let _ = cur_context.add_sub_task(cur_task_id, async move {
future.await;
Ok(())
});
return None;
}
// Not running in a Context thread so we can block
Some(block_on(future))
}
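// Illustrative sketch (not part of the original crate): fire-and-forget
// cleanup from code that may or may not run on a Context thread. On a
// Context the future is queued as a sub task; otherwise it runs inline.
#[allow(dead_code)]
fn example_fire_and_forget() {
    let _ = block_on_or_add_sub_task(async {
        // e.g. flush buffers, close a socket, ...
    });
}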
/// Blocks on `future`.
///
/// IO & time related `Future`s must be handled within their own [`Context`].
/// Wait for the result using a [`JoinHandle`] or a `channel`.
///
/// The current thread is blocking and the passed in future is executed.
///
/// # Panics
///
/// This function panics if called within a [`Context`] thread.
#[track_caller]
pub fn block_on<Fut>(future: Fut) -> Fut::Output
where
    Fut: Future + Send + 'static,
    Fut::Output: Send + 'static,
{
if let Some(context) = Context::current() {
let msg = format!("Attempt to block within Context {}", context.name());
gst::error!(RUNTIME_CAT, "{}", msg);
panic!("{}", msg);
}
// Not running in a Context thread so we can block
gst::debug!(RUNTIME_CAT, "Blocking on new dummy context");
Scheduler::block_on(future)
}
/// Yields execution back to the runtime.
#[inline]
pub fn yield_now() -> YieldNow {
YieldNow::default()
}
#[derive(Debug, Default)]
#[must_use = "futures do nothing unless you `.await` or poll them"]
pub struct YieldNow(bool);
impl Future for YieldNow {
type Output = ();
fn poll(mut self: Pin<&mut Self>, cx: &mut task::Context<'_>) -> Poll<Self::Output> {
        if !self.0 {
self.0 = true;
cx.waker().wake_by_ref();
Poll::Pending
} else {
Poll::Ready(())
}
}
}
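// Illustrative sketch (not part of the original crate): cooperative
// yielding inside a long-running loop so other tasks scheduled on the same
// Context thread get a chance to make progress between items.
#[allow(dead_code)]
async fn example_cooperative_loop(items: Vec<u32>) {
    for _item in items {
        // ...process `_item`...
        yield_now().await; // reschedule instead of hogging the executor
    }
}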
#[derive(Clone, Debug)]
pub struct ContextWeak(HandleWeak);
impl ContextWeak {
pub fn upgrade(&self) -> Option<Context> {
self.0.upgrade().map(Context)
}
}
/// A `threadshare` `runtime` `Context`.
///
/// The `Context` provides low-level asynchronous processing features to
/// multiplex task execution on a single thread.
///
/// `Element` implementations should use [`PadSrc`] and [`PadSink`] which
/// provide high-level features.
///
/// [`PadSrc`]: ../pad/struct.PadSrc.html
/// [`PadSink`]: ../pad/struct.PadSink.html
#[derive(Clone, Debug)]
pub struct Context(Handle);
impl PartialEq for Context {
fn eq(&self, other: &Self) -> bool {
self.0.eq(&other.0)
}
}
impl Eq for Context {}
impl Context {
pub fn acquire(context_name: &str, wait: Duration) -> Result<Self, io::Error> {
assert_ne!(context_name, Scheduler::DUMMY_NAME);
let mut contexts = CONTEXTS.lock().unwrap();
if let Some(context_weak) = contexts.get(context_name) {
if let Some(context) = context_weak.upgrade() {
gst::debug!(RUNTIME_CAT, "Joining Context '{}'", context.name());
return Ok(context);
}
}
let context = Context(Scheduler::start(context_name, wait));
contexts.insert(context_name.into(), context.downgrade());
gst::debug!(
RUNTIME_CAT,
"New Context '{}' throttling {:?}",
context.name(),
wait,
);
Ok(context)
}
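    // Illustrative sketch (not part of the original crate): two `acquire`
    // calls with the same name join one shared Context (and one thread).
    // The context name and throttling duration are assumed examples.
    #[allow(dead_code)]
    fn example_shared_context() -> Result<(), io::Error> {
        let a = Context::acquire("example", Duration::from_millis(20))?;
        let b = Context::acquire("example", Duration::from_millis(20))?;
        assert_eq!(a, b); // same underlying scheduler handle
        Ok(())
    }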
pub fn downgrade(&self) -> ContextWeak {
ContextWeak(self.0.downgrade())
}
pub fn name(&self) -> &str {
self.0.context_name()
}
// FIXME this could be renamed as max_throttling
// but then, all elements should also change their
// wait variables and properties to max_throttling.
pub fn wait_duration(&self) -> Duration {
self.0.max_throttling()
}
/// Total duration the scheduler spent parked.
///
/// This is only useful for performance evaluation.
#[cfg(feature = "tuning")]
pub fn parked_duration(&self) -> Duration {
self.0.parked_duration()
}
/// Returns `true` if a `Context` is running on current thread.
pub fn is_context_thread() -> bool {
Scheduler::is_scheduler_thread()
}
/// Returns the `Context` running on current thread, if any.
pub fn current() -> Option<Context> {
Scheduler::current().map(Context)
}
/// Returns the `TaskId` running on current thread, if any.
pub fn current_task() -> Option<(Context, TaskId)> {
Scheduler::current().map(|scheduler| {
// Context users always operate on a Task
(Context(scheduler), TaskId::current().unwrap())
})
} | /// of a [`Context`].
///
/// # Panic
///
/// This will block current thread and would panic if run
/// from the [`Context`].
#[track_caller]
pub fn enter<'a, F, O>(&'a self, f: F) -> O
where
F: FnOnce() -> O + Send + 'a,
O: Send + 'a,
{
if let Some(cur) = Context::current().as_ref() {
if cur == self {
panic!(
"Attempt to enter Context {} within itself, this would deadlock",
self.name()
);
} else {
gst::warning!(
RUNTIME_CAT,
"Entering Context {} within {}",
self.name(),
cur.name()
);
}
} else {
gst::debug!(RUNTIME_CAT, "Entering Context {}", self.name());
}
self.0.enter(f)
}
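    // Illustrative sketch (not part of the original crate): initializing a
    // timer relative to this Context from a plain thread, mirroring the
    // tests further down; the delay value is an assumed example.
    #[allow(dead_code)]
    fn example_enter(&self) {
        let timer = self.enter(|| crate::runtime::timer::delay_for(Duration::from_millis(5)));
        // The timer can then be awaited from an executor, e.g. `block_on(timer)`.
        let _ = timer;
    }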
pub fn spawn<Fut>(&self, future: Fut) -> JoinHandle<Fut::Output>
where
        Fut: Future + Send + 'static,
        Fut::Output: Send + 'static,
{
self.0.spawn(future)
}
pub fn spawn_and_unpark<Fut>(&self, future: Fut) -> JoinHandle<Fut::Output>
where
        Fut: Future + Send + 'static,
        Fut::Output: Send + 'static,
{
self.0.spawn_and_unpark(future)
}
/// Forces the scheduler to unpark.
///
    /// This is not needed by element implementors as they are
/// supposed to call [`Self::spawn_and_unpark`] when needed.
/// However, it's useful for lower level implementations such as
/// `runtime::Task` so as to make sure the iteration loop yields
/// as soon as possible when a transition is requested.
pub(in crate::runtime) fn unpark(&self) {
self.0.unpark();
}
pub fn add_sub_task<T>(&self, task_id: TaskId, sub_task: T) -> Result<(), T>
where
        T: Future<Output = SubTaskOutput> + Send + 'static,
{
self.0.add_sub_task(task_id, sub_task)
}
pub async fn drain_sub_tasks() -> SubTaskOutput {
let (ctx, task_id) = match Context::current_task() {
Some(task) => task,
None => return Ok(()),
};
ctx.0.drain_sub_tasks(task_id).await
}
}
impl From<Handle> for Context {
fn from(handle: Handle) -> Self {
Context(handle)
}
}
#[cfg(test)]
mod tests {
use futures::channel::mpsc;
use futures::lock::Mutex;
use futures::prelude::*;
use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket};
use std::sync::Arc;
use std::time::{Duration, Instant};
use super::super::Scheduler;
use super::Context;
use crate::runtime::Async;
type Item = i32;
const SLEEP_DURATION_MS: u64 = 2;
const SLEEP_DURATION: Duration = Duration::from_millis(SLEEP_DURATION_MS);
const DELAY: Duration = Duration::from_millis(SLEEP_DURATION_MS * 10);
#[test]
fn block_on_task_id() {
gst::init().unwrap();
assert!(!Context::is_context_thread());
crate::runtime::executor::block_on(async {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(ctx.name(), Scheduler::DUMMY_NAME);
assert_eq!(task_id, super::TaskId(0));
let res = ctx.add_sub_task(task_id, async move {
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, super::TaskId(0));
Ok(())
});
assert!(res.is_ok());
assert!(Context::is_context_thread());
});
assert!(!Context::is_context_thread());
}
#[test]
fn block_on_timer() {
gst::init().unwrap();
let elapsed = crate::runtime::executor::block_on(async {
let now = Instant::now();
crate::runtime::timer::delay_for(DELAY).await;
now.elapsed()
});
assert!(elapsed >= DELAY);
}
#[test]
fn context_task_id() {
use super::TaskId;
gst::init().unwrap();
let context = Context::acquire("context_task_id", SLEEP_DURATION).unwrap();
let join_handle = context.spawn(async {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(ctx.name(), "context_task_id");
assert_eq!(task_id, TaskId(0));
});
futures::executor::block_on(join_handle).unwrap();
// TaskId(0) is vacant again
let ctx_weak = context.downgrade();
let join_handle = context.spawn(async move {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(0));
let res = ctx.add_sub_task(task_id, async move {
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(0));
Ok(())
});
assert!(res.is_ok());
ctx_weak
.upgrade()
.unwrap()
.spawn(async {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(1));
let res = ctx.add_sub_task(task_id, async move {
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(1));
Ok(())
});
assert!(res.is_ok());
assert!(Context::drain_sub_tasks().await.is_ok());
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(1));
})
.await
.unwrap();
assert!(Context::drain_sub_tasks().await.is_ok());
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(0));
});
futures::executor::block_on(join_handle).unwrap();
}
#[test]
fn drain_sub_tasks() {
// Setup
gst::init().unwrap();
let context = Context::acquire("drain_sub_tasks", SLEEP_DURATION).unwrap();
let join_handle = context.spawn(async {
let (sender, mut receiver) = mpsc::channel(1);
let sender: Arc<Mutex<mpsc::Sender<Item>>> = Arc::new(Mutex::new(sender));
let add_sub_task = move |item| {
let sender = sender.clone();
Context::current_task()
.ok_or(())
.and_then(|(ctx, task_id)| {
ctx.add_sub_task(task_id, async move {
sender
.lock()
.await
.send(item)
.await
.map_err(|_| gst::FlowError::Error)
})
.map_err(drop)
})
};
// Tests
// Drain empty queue
let drain_fut = Context::drain_sub_tasks();
drain_fut.await.unwrap();
// Add a subtask
add_sub_task(0).unwrap();
// Check that it was not executed yet
receiver.try_next().unwrap_err();
// Drain it now and check that it was executed
let drain_fut = Context::drain_sub_tasks();
drain_fut.await.unwrap();
assert_eq!(receiver.try_next().unwrap(), Some(0));
// Add another task and check that it's not executed yet
add_sub_task(1).unwrap();
receiver.try_next().unwrap_err();
// Return the receiver
receiver
});
let mut receiver = futures::executor::block_on(join_handle).unwrap();
// The last sub task should be simply dropped at this point
match receiver.try_next() {
Ok(None) | Err(_) => (),
other => panic!("Unexpected {other:?}"),
}
}
#[test]
fn block_on_from_sync() {
gst::init().unwrap();
let context = Context::acquire("block_on_from_sync", SLEEP_DURATION).unwrap();
let bytes_sent = crate::runtime::executor::block_on(context.spawn(async {
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5001);
let socket = Async::<UdpSocket>::bind(saddr).unwrap();
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4001);
socket.send_to(&[0; 10], saddr).await.unwrap()
}))
.unwrap();
assert_eq!(bytes_sent, 10);
let elapsed = crate::runtime::executor::block_on(context.spawn(async {
let start = Instant::now();
crate::runtime::timer::delay_for(DELAY).await;
start.elapsed()
}))
.unwrap();
// Due to throttling, `Delay` may be fired earlier
assert!(elapsed + SLEEP_DURATION / 2 >= DELAY);
}
#[test]
#[should_panic]
fn block_on_from_context() {
gst::init().unwrap();
let context = Context::acquire("block_on_from_context", SLEEP_DURATION).unwrap();
// Panic: attempt to `runtime::executor::block_on` within a `Context` thread
let join_handle = context.spawn(async {
crate::runtime::executor::block_on(crate::runtime::timer::delay_for(DELAY));
});
// Panic: task has failed
// (enforced by `async-task`, see comment in `Future` impl for `JoinHandle`).
futures::executor::block_on(join_handle).unwrap_err();
}
#[test]
fn enter_context_from_scheduler() {
gst::init().unwrap();
let elapsed = crate::runtime::executor::block_on(async {
let context = Context::acquire("enter_context_from_executor", SLEEP_DURATION).unwrap();
let socket = context
.enter(|| {
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5002);
Async::<UdpSocket>::bind(saddr)
})
.unwrap();
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4002);
let bytes_sent = socket.send_to(&[0; 10], saddr).await.unwrap();
assert_eq!(bytes_sent, 10);
let (start, timer) =
context.enter(|| (Instant::now(), crate::runtime::timer::delay_for(DELAY)));
timer.await;
start.elapsed()
});
// Due to throttling, `Delay` may be fired earlier
assert!(elapsed + SLEEP_DURATION / 2 >= DELAY);
}
#[test]
fn enter_context_from_sync() {
gst::init().unwrap();
let context = Context::acquire("enter_context_from_sync", SLEEP_DURATION).unwrap();
let socket = context
.enter(|| {
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5003);
Async::<UdpSocket>::bind(saddr)
})
.unwrap();
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4003);
let bytes_sent = futures::executor::block_on(socket.send_to(&[0; 10], saddr)).unwrap();
assert_eq!(bytes_sent, 10);
let (start, timer) =
context.enter(|| (Instant::now(), crate::runtime::timer::delay_for(DELAY)));
let elapsed = crate::runtime::executor::block_on(async move {
timer.await;
start.elapsed()
});
// Due to throttling, `Delay` may be fired earlier
assert!(elapsed + SLEEP_DURATION / 2 >= DELAY);
}
} |
/// Executes the provided function relative to this [`Context`].
///
/// Useful to initialize i/o sources and timers from outside | random_line_split
context.rs | // Copyright (C) 2018-2020 Sebastian Dröge <[email protected]>
// Copyright (C) 2019-2022 François Laignel <[email protected]>
//
// Take a look at the license at the top of the repository in the LICENSE file.
use futures::prelude::*;
use gst::glib::once_cell::sync::Lazy;
use std::collections::HashMap;
use std::io;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{self, Poll};
use std::time::Duration;
use super::{Handle, HandleWeak, JoinHandle, Scheduler, SubTaskOutput, TaskId};
use crate::runtime::RUNTIME_CAT;
// We are bound to using `sync` for the `runtime` `Mutex`es. Attempts to use `async` `Mutex`es
// lead to the following issues:
//
// * `CONTEXTS`: can't `spawn` a `Future` when called from a `Context` thread via `ffi`.
// * `timers`: can't automatically `remove` the timer from `BinaryHeap` because `async drop`
// is not available.
// * `task_queues`: can't `add` a pending task when called from a `Context` thread via `ffi`.
//
// Also, we want to be able to `acquire` a `Context` outside of an `async` context.
// These `Mutex`es must be `lock`ed for a short period.
static CONTEXTS: Lazy<Mutex<HashMap<Arc<str>, ContextWeak>>> =
Lazy::new(|| Mutex::new(HashMap::new()));
/// Blocks on `future` in one way or another if possible.
///
/// IO & time related `Future`s must be handled within their own [`Context`].
/// Wait for the result using a [`JoinHandle`] or a `channel`.
///
/// If there's currently an active `Context` with a task, then the future is only queued up as a
/// pending sub task for that task.
///
/// Otherwise the current thread is blocking and the passed in future is executed.
///
/// Note that you must not pass any futures here that wait for the currently active task in one way
/// or another as this would deadlock!
#[track_caller]
pub fn block_on_or_add_sub_task<Fut>(future: Fut) -> Option<Fut::Output>
where
    Fut: Future + Send + 'static,
    Fut::Output: Send + 'static,
{
if let Some((cur_context, cur_task_id)) = Context::current_task() {
gst::debug!(
RUNTIME_CAT,
"Adding subtask to task {:?} on context {}",
cur_task_id,
cur_context.name()
);
let _ = cur_context.add_sub_task(cur_task_id, async move {
future.await;
Ok(())
});
return None;
}
// Not running in a Context thread so we can block
Some(block_on(future))
}
/// Blocks on `future`.
///
/// IO & time related `Future`s must be handled within their own [`Context`].
/// Wait for the result using a [`JoinHandle`] or a `channel`.
///
/// The current thread is blocking and the passed in future is executed.
///
/// # Panics
///
/// This function panics if called within a [`Context`] thread.
#[track_caller]
pub fn block_on<Fut>(future: Fut) -> Fut::Output
where
    Fut: Future + Send + 'static,
    Fut::Output: Send + 'static,
{
if let Some(context) = Context::current() {
let msg = format!("Attempt to block within Context {}", context.name());
gst::error!(RUNTIME_CAT, "{}", msg);
panic!("{}", msg);
}
// Not running in a Context thread so we can block
gst::debug!(RUNTIME_CAT, "Blocking on new dummy context");
Scheduler::block_on(future)
}
/// Yields execution back to the runtime.
#[inline]
pub fn yield_now() -> YieldNow {
YieldNow::default()
}
#[derive(Debug, Default)]
#[must_use = "futures do nothing unless you `.await` or poll them"]
pub struct YieldNow(bool);
impl Future for YieldNow {
type Output = ();
fn poll(mut self: Pin<&mut Self>, cx: &mut task::Context<'_>) -> Poll<Self::Output> {
        if !self.0 {
self.0 = true;
cx.waker().wake_by_ref();
Poll::Pending
} else {
Poll::Ready(())
}
}
}
#[derive(Clone, Debug)]
pub struct ContextWeak(HandleWeak);
impl ContextWeak {
pub fn upgrade(&self) -> Option<Context> {
self.0.upgrade().map(Context)
}
}
/// A `threadshare` `runtime` `Context`.
///
/// The `Context` provides low-level asynchronous processing features to
/// multiplex task execution on a single thread.
///
/// `Element` implementations should use [`PadSrc`] and [`PadSink`] which
/// provide high-level features.
///
/// [`PadSrc`]: ../pad/struct.PadSrc.html
/// [`PadSink`]: ../pad/struct.PadSink.html
#[derive(Clone, Debug)]
pub struct Context(Handle);
impl PartialEq for Context {
fn eq(&self, other: &Self) -> bool {
self.0.eq(&other.0)
}
}
impl Eq for Context {}
impl Context {
pub fn acquire(context_name: &str, wait: Duration) -> Result<Self, io::Error> {
assert_ne!(context_name, Scheduler::DUMMY_NAME);
let mut contexts = CONTEXTS.lock().unwrap();
if let Some(context_weak) = contexts.get(context_name) {
if let Some(context) = context_weak.upgrade() {
gst::debug!(RUNTIME_CAT, "Joining Context '{}'", context.name());
return Ok(context);
}
}
let context = Context(Scheduler::start(context_name, wait));
contexts.insert(context_name.into(), context.downgrade());
gst::debug!(
RUNTIME_CAT,
"New Context '{}' throttling {:?}",
context.name(),
wait,
);
Ok(context)
}
pub fn downgrade(&self) -> ContextWeak {
ContextWeak(self.0.downgrade())
}
pub fn name(&self) -> &str {
self.0.context_name()
}
// FIXME this could be renamed as max_throttling
// but then, all elements should also change their
// wait variables and properties to max_throttling.
pub fn wait_duration(&self) -> Duration {
self.0.max_throttling()
}
/// Total duration the scheduler spent parked.
///
/// This is only useful for performance evaluation.
#[cfg(feature = "tuning")]
pub fn parked_duration(&self) -> Duration {
self.0.parked_duration()
}
/// Returns `true` if a `Context` is running on current thread.
pub fn is_context_thread() -> bool {
Scheduler::is_scheduler_thread()
}
/// Returns the `Context` running on current thread, if any.
pub fn current() -> Option<Context> {
Scheduler::current().map(Context)
}
/// Returns the `TaskId` running on current thread, if any.
pub fn current_task() -> Option<(Context, TaskId)> {
Scheduler::current().map(|scheduler| {
// Context users always operate on a Task
(Context(scheduler), TaskId::current().unwrap())
})
}
    /// Executes the provided function relative to this [`Context`].
    ///
    /// Useful to initialize i/o sources and timers from outside
/// of a [`Context`].
///
/// # Panic
///
/// This will block current thread and would panic if run
/// from the [`Context`].
#[track_caller]
pub fn enter<'a, F, O>(&'a self, f: F) -> O
where
F: FnOnce() -> O + Send + 'a,
O: Send + 'a,
{
if let Some(cur) = Context::current().as_ref() {
if cur == self {
panic!(
"Attempt to enter Context {} within itself, this would deadlock",
self.name()
);
} else {
| } else {
gst::debug!(RUNTIME_CAT, "Entering Context {}", self.name());
}
self.0.enter(f)
}
pub fn spawn<Fut>(&self, future: Fut) -> JoinHandle<Fut::Output>
where
        Fut: Future + Send + 'static,
        Fut::Output: Send + 'static,
{
self.0.spawn(future)
}
pub fn spawn_and_unpark<Fut>(&self, future: Fut) -> JoinHandle<Fut::Output>
where
        Fut: Future + Send + 'static,
        Fut::Output: Send + 'static,
{
self.0.spawn_and_unpark(future)
}
/// Forces the scheduler to unpark.
///
    /// This is not needed by element implementors as they are
/// supposed to call [`Self::spawn_and_unpark`] when needed.
/// However, it's useful for lower level implementations such as
/// `runtime::Task` so as to make sure the iteration loop yields
/// as soon as possible when a transition is requested.
pub(in crate::runtime) fn unpark(&self) {
self.0.unpark();
}
pub fn add_sub_task<T>(&self, task_id: TaskId, sub_task: T) -> Result<(), T>
where
        T: Future<Output = SubTaskOutput> + Send + 'static,
{
self.0.add_sub_task(task_id, sub_task)
}
pub async fn drain_sub_tasks() -> SubTaskOutput {
let (ctx, task_id) = match Context::current_task() {
Some(task) => task,
None => return Ok(()),
};
ctx.0.drain_sub_tasks(task_id).await
}
}
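// Illustrative sketch (not part of the original crate): the add-then-drain
// pattern from within a spawned task. Sub tasks queued via `add_sub_task`
// only run once the owning task drains them.
#[allow(dead_code)]
async fn example_sub_tasks() {
    if let Some((ctx, task_id)) = Context::current_task() {
        let _ = ctx.add_sub_task(task_id, async {
            // deferred work goes here
            Ok(())
        });
        // Execute everything queued for the current task.
        let _ = Context::drain_sub_tasks().await;
    }
}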
impl From<Handle> for Context {
fn from(handle: Handle) -> Self {
Context(handle)
}
}
#[cfg(test)]
mod tests {
use futures::channel::mpsc;
use futures::lock::Mutex;
use futures::prelude::*;
use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket};
use std::sync::Arc;
use std::time::{Duration, Instant};
use super::super::Scheduler;
use super::Context;
use crate::runtime::Async;
type Item = i32;
const SLEEP_DURATION_MS: u64 = 2;
const SLEEP_DURATION: Duration = Duration::from_millis(SLEEP_DURATION_MS);
const DELAY: Duration = Duration::from_millis(SLEEP_DURATION_MS * 10);
#[test]
fn block_on_task_id() {
gst::init().unwrap();
assert!(!Context::is_context_thread());
crate::runtime::executor::block_on(async {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(ctx.name(), Scheduler::DUMMY_NAME);
assert_eq!(task_id, super::TaskId(0));
let res = ctx.add_sub_task(task_id, async move {
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, super::TaskId(0));
Ok(())
});
assert!(res.is_ok());
assert!(Context::is_context_thread());
});
assert!(!Context::is_context_thread());
}
#[test]
fn block_on_timer() {
gst::init().unwrap();
let elapsed = crate::runtime::executor::block_on(async {
let now = Instant::now();
crate::runtime::timer::delay_for(DELAY).await;
now.elapsed()
});
assert!(elapsed >= DELAY);
}
#[test]
fn context_task_id() {
use super::TaskId;
gst::init().unwrap();
let context = Context::acquire("context_task_id", SLEEP_DURATION).unwrap();
let join_handle = context.spawn(async {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(ctx.name(), "context_task_id");
assert_eq!(task_id, TaskId(0));
});
futures::executor::block_on(join_handle).unwrap();
// TaskId(0) is vacant again
let ctx_weak = context.downgrade();
let join_handle = context.spawn(async move {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(0));
let res = ctx.add_sub_task(task_id, async move {
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(0));
Ok(())
});
assert!(res.is_ok());
ctx_weak
.upgrade()
.unwrap()
.spawn(async {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(1));
let res = ctx.add_sub_task(task_id, async move {
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(1));
Ok(())
});
assert!(res.is_ok());
assert!(Context::drain_sub_tasks().await.is_ok());
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(1));
})
.await
.unwrap();
assert!(Context::drain_sub_tasks().await.is_ok());
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(0));
});
futures::executor::block_on(join_handle).unwrap();
}
#[test]
fn drain_sub_tasks() {
// Setup
gst::init().unwrap();
let context = Context::acquire("drain_sub_tasks", SLEEP_DURATION).unwrap();
let join_handle = context.spawn(async {
let (sender, mut receiver) = mpsc::channel(1);
let sender: Arc<Mutex<mpsc::Sender<Item>>> = Arc::new(Mutex::new(sender));
let add_sub_task = move |item| {
let sender = sender.clone();
Context::current_task()
.ok_or(())
.and_then(|(ctx, task_id)| {
ctx.add_sub_task(task_id, async move {
sender
.lock()
.await
.send(item)
.await
.map_err(|_| gst::FlowError::Error)
})
.map_err(drop)
})
};
// Tests
// Drain empty queue
let drain_fut = Context::drain_sub_tasks();
drain_fut.await.unwrap();
// Add a subtask
add_sub_task(0).unwrap();
// Check that it was not executed yet
receiver.try_next().unwrap_err();
// Drain it now and check that it was executed
let drain_fut = Context::drain_sub_tasks();
drain_fut.await.unwrap();
assert_eq!(receiver.try_next().unwrap(), Some(0));
// Add another task and check that it's not executed yet
add_sub_task(1).unwrap();
receiver.try_next().unwrap_err();
// Return the receiver
receiver
});
let mut receiver = futures::executor::block_on(join_handle).unwrap();
// The last sub task should be simply dropped at this point
match receiver.try_next() {
Ok(None) | Err(_) => (),
other => panic!("Unexpected {other:?}"),
}
}
#[test]
fn block_on_from_sync() {
gst::init().unwrap();
let context = Context::acquire("block_on_from_sync", SLEEP_DURATION).unwrap();
let bytes_sent = crate::runtime::executor::block_on(context.spawn(async {
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5001);
let socket = Async::<UdpSocket>::bind(saddr).unwrap();
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4001);
socket.send_to(&[0; 10], saddr).await.unwrap()
}))
.unwrap();
assert_eq!(bytes_sent, 10);
let elapsed = crate::runtime::executor::block_on(context.spawn(async {
let start = Instant::now();
crate::runtime::timer::delay_for(DELAY).await;
start.elapsed()
}))
.unwrap();
// Due to throttling, `Delay` may be fired earlier
assert!(elapsed + SLEEP_DURATION / 2 >= DELAY);
}
#[test]
#[should_panic]
fn block_on_from_context() {
gst::init().unwrap();
let context = Context::acquire("block_on_from_context", SLEEP_DURATION).unwrap();
// Panic: attempt to `runtime::executor::block_on` within a `Context` thread
let join_handle = context.spawn(async {
crate::runtime::executor::block_on(crate::runtime::timer::delay_for(DELAY));
});
// Panic: task has failed
// (enforced by `async-task`, see comment in `Future` impl for `JoinHandle`).
futures::executor::block_on(join_handle).unwrap_err();
}
#[test]
fn enter_context_from_scheduler() {
gst::init().unwrap();
let elapsed = crate::runtime::executor::block_on(async {
let context = Context::acquire("enter_context_from_executor", SLEEP_DURATION).unwrap();
let socket = context
.enter(|| {
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5002);
Async::<UdpSocket>::bind(saddr)
})
.unwrap();
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4002);
let bytes_sent = socket.send_to(&[0; 10], saddr).await.unwrap();
assert_eq!(bytes_sent, 10);
let (start, timer) =
context.enter(|| (Instant::now(), crate::runtime::timer::delay_for(DELAY)));
timer.await;
start.elapsed()
});
// Due to throttling, `Delay` may be fired earlier
assert!(elapsed + SLEEP_DURATION / 2 >= DELAY);
}
#[test]
fn enter_context_from_sync() {
gst::init().unwrap();
let context = Context::acquire("enter_context_from_sync", SLEEP_DURATION).unwrap();
let socket = context
.enter(|| {
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5003);
Async::<UdpSocket>::bind(saddr)
})
.unwrap();
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4003);
let bytes_sent = futures::executor::block_on(socket.send_to(&[0; 10], saddr)).unwrap();
assert_eq!(bytes_sent, 10);
let (start, timer) =
context.enter(|| (Instant::now(), crate::runtime::timer::delay_for(DELAY)));
let elapsed = crate::runtime::executor::block_on(async move {
timer.await;
start.elapsed()
});
// Due to throttling, `Delay` may be fired earlier
assert!(elapsed + SLEEP_DURATION / 2 >= DELAY);
}
}
| gst::warning!(
RUNTIME_CAT,
"Entering Context {} within {}",
self.name(),
cur.name()
);
}
| conditional_block |
context.rs | // Copyright (C) 2018-2020 Sebastian Dröge <[email protected]>
// Copyright (C) 2019-2022 François Laignel <[email protected]>
//
// Take a look at the license at the top of the repository in the LICENSE file.
use futures::prelude::*;
use gst::glib::once_cell::sync::Lazy;
use std::collections::HashMap;
use std::io;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{self, Poll};
use std::time::Duration;
use super::{Handle, HandleWeak, JoinHandle, Scheduler, SubTaskOutput, TaskId};
use crate::runtime::RUNTIME_CAT;
// We are bound to using `sync` for the `runtime` `Mutex`es. Attempts to use `async` `Mutex`es
// lead to the following issues:
//
// * `CONTEXTS`: can't `spawn` a `Future` when called from a `Context` thread via `ffi`.
// * `timers`: can't automatically `remove` the timer from `BinaryHeap` because `async drop`
// is not available.
// * `task_queues`: can't `add` a pending task when called from a `Context` thread via `ffi`.
//
// Also, we want to be able to `acquire` a `Context` outside of an `async` context.
// These `Mutex`es must be `lock`ed for a short period.
static CONTEXTS: Lazy<Mutex<HashMap<Arc<str>, ContextWeak>>> =
Lazy::new(|| Mutex::new(HashMap::new()));
/// Blocks on `future` in one way or another if possible.
///
/// IO & time related `Future`s must be handled within their own [`Context`].
/// Wait for the result using a [`JoinHandle`] or a `channel`.
///
/// If there's currently an active `Context` with a task, then the future is only queued up as a
/// pending sub task for that task.
///
/// Otherwise the current thread is blocking and the passed in future is executed.
///
/// Note that you must not pass any futures here that wait for the currently active task in one way
/// or another as this would deadlock!
#[track_caller]
pub fn block_on_or_add_sub_task<Fut>(future: Fut) -> Option<Fut::Output>
where
    Fut: Future + Send + 'static,
    Fut::Output: Send + 'static,
{
if let Some((cur_context, cur_task_id)) = Context::current_task() {
gst::debug!(
RUNTIME_CAT,
"Adding subtask to task {:?} on context {}",
cur_task_id,
cur_context.name()
);
let _ = cur_context.add_sub_task(cur_task_id, async move {
future.await;
Ok(())
});
return None;
}
// Not running in a Context thread so we can block
Some(block_on(future))
}
/// Blocks on `future`.
///
/// IO & time related `Future`s must be handled within their own [`Context`].
/// Wait for the result using a [`JoinHandle`] or a `channel`.
///
/// The current thread is blocking and the passed in future is executed.
///
/// # Panics
///
/// This function panics if called within a [`Context`] thread.
#[track_caller]
pub fn block_on<Fut>(future: Fut) -> Fut::Output
where
    Fut: Future + Send + 'static,
    Fut::Output: Send + 'static,
{
if let Some(context) = Context::current() {
let msg = format!("Attempt to block within Context {}", context.name());
gst::error!(RUNTIME_CAT, "{}", msg);
panic!("{}", msg);
}
// Not running in a Context thread so we can block
gst::debug!(RUNTIME_CAT, "Blocking on new dummy context");
Scheduler::block_on(future)
}
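// Illustrative sketch (not part of the original crate): `block_on` is for
// non-Context threads only, e.g. a main() driving a future to completion;
// calling it from a Context thread panics, as the tests below demonstrate.
#[allow(dead_code)]
fn example_block_on() {
    let answer = block_on(async { 6 * 7 });
    assert_eq!(answer, 42);
}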
/// Yields execution back to the runtime.
#[inline]
pub fn yield_now() -> YieldNow {
YieldNow::default()
}
#[derive(Debug, Default)]
#[must_use = "futures do nothing unless you `.await` or poll them"]
pub struct YieldNow(bool);
impl Future for YieldNow {
type Output = ();
fn poll(mut self: Pin<&mut Self>, cx: &mut task::Context<'_>) -> Poll<Self::Output> {
        if !self.0 {
self.0 = true;
cx.waker().wake_by_ref();
Poll::Pending
} else {
Poll::Ready(())
}
}
}
#[derive(Clone, Debug)]
pub struct ContextWeak(HandleWeak);
impl ContextWeak {
pub fn upgrade(&self) -> Option<Context> {
self.0.upgrade().map(Context)
}
}
/// A `threadshare` `runtime` `Context`.
///
/// The `Context` provides low-level asynchronous processing features to
/// multiplex task execution on a single thread.
///
/// `Element` implementations should use [`PadSrc`] and [`PadSink`] which
/// provide high-level features.
///
/// [`PadSrc`]: ../pad/struct.PadSrc.html
/// [`PadSink`]: ../pad/struct.PadSink.html
#[derive(Clone, Debug)]
pub struct Context(Handle);
impl PartialEq for Context {
fn eq(&self, other: &Self) -> bool {
self.0.eq(&other.0)
}
}
impl Eq for Context {}
impl Context {
pub fn acquire(context_name: &str, wait: Duration) -> Result<Self, io::Error> {
assert_ne!(context_name, Scheduler::DUMMY_NAME);
let mut contexts = CONTEXTS.lock().unwrap();
if let Some(context_weak) = contexts.get(context_name) {
if let Some(context) = context_weak.upgrade() {
gst::debug!(RUNTIME_CAT, "Joining Context '{}'", context.name());
return Ok(context);
}
}
let context = Context(Scheduler::start(context_name, wait));
contexts.insert(context_name.into(), context.downgrade());
gst::debug!(
RUNTIME_CAT,
"New Context '{}' throttling {:?}",
context.name(),
wait,
);
Ok(context)
}
pub fn downgrade(&self) -> ContextWeak {
ContextWeak(self.0.downgrade())
}
pub fn name(&self) -> &str {
self.0.context_name()
}
// FIXME this could be renamed as max_throttling
// but then, all elements should also change their
// wait variables and properties to max_throttling.
pub fn wait_duration(&self) -> Duration {
self.0.max_throttling()
}
/// Total duration the scheduler spent parked.
///
/// This is only useful for performance evaluation.
#[cfg(feature = "tuning")]
pub fn parked_duration(&self) -> Duration {
self.0.parked_duration()
}
/// Returns `true` if a `Context` is running on current thread.
pub fn is_context_thread() -> bool {
Scheduler::is_scheduler_thread()
}
/// Returns the `Context` running on current thread, if any.
pub fn current() -> Option<Context> {
Scheduler::current().map(Context)
}
/// Returns the `TaskId` running on current thread, if any.
pub fn current_task() -> Option<(Context, TaskId)> {
Scheduler::current().map(|scheduler| {
// Context users always operate on a Task
(Context(scheduler), TaskId::current().unwrap())
})
}
    /// Executes the provided function relative to this [`Context`].
    ///
    /// Useful to initialize i/o sources and timers from outside
/// of a [`Context`].
///
/// # Panic
///
/// This will block current thread and would panic if run
/// from the [`Context`].
#[track_caller]
pub fn enter<'a, F, O>(&'a self, f: F) -> O
where
F: FnOnce() -> O + Send + 'a,
O: Send + 'a,
{
|
pub fn spawn<Fut>(&self, future: Fut) -> JoinHandle<Fut::Output>
where
        Fut: Future + Send + 'static,
        Fut::Output: Send + 'static,
{
self.0.spawn(future)
}
pub fn spawn_and_unpark<Fut>(&self, future: Fut) -> JoinHandle<Fut::Output>
where
        Fut: Future + Send + 'static,
        Fut::Output: Send + 'static,
{
self.0.spawn_and_unpark(future)
}
/// Forces the scheduler to unpark.
///
    /// This is not needed by element implementors as they are
/// supposed to call [`Self::spawn_and_unpark`] when needed.
/// However, it's useful for lower level implementations such as
/// `runtime::Task` so as to make sure the iteration loop yields
/// as soon as possible when a transition is requested.
pub(in crate::runtime) fn unpark(&self) {
self.0.unpark();
}
pub fn add_sub_task<T>(&self, task_id: TaskId, sub_task: T) -> Result<(), T>
where
        T: Future<Output = SubTaskOutput> + Send + 'static,
{
self.0.add_sub_task(task_id, sub_task)
}
pub async fn drain_sub_tasks() -> SubTaskOutput {
let (ctx, task_id) = match Context::current_task() {
Some(task) => task,
None => return Ok(()),
};
ctx.0.drain_sub_tasks(task_id).await
}
}
impl From<Handle> for Context {
fn from(handle: Handle) -> Self {
Context(handle)
}
}
#[cfg(test)]
mod tests {
use futures::channel::mpsc;
use futures::lock::Mutex;
use futures::prelude::*;
use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket};
use std::sync::Arc;
use std::time::{Duration, Instant};
use super::super::Scheduler;
use super::Context;
use crate::runtime::Async;
type Item = i32;
const SLEEP_DURATION_MS: u64 = 2;
const SLEEP_DURATION: Duration = Duration::from_millis(SLEEP_DURATION_MS);
const DELAY: Duration = Duration::from_millis(SLEEP_DURATION_MS * 10);
#[test]
fn block_on_task_id() {
gst::init().unwrap();
assert!(!Context::is_context_thread());
crate::runtime::executor::block_on(async {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(ctx.name(), Scheduler::DUMMY_NAME);
assert_eq!(task_id, super::TaskId(0));
let res = ctx.add_sub_task(task_id, async move {
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, super::TaskId(0));
Ok(())
});
assert!(res.is_ok());
assert!(Context::is_context_thread());
});
assert!(!Context::is_context_thread());
}
#[test]
fn block_on_timer() {
gst::init().unwrap();
let elapsed = crate::runtime::executor::block_on(async {
let now = Instant::now();
crate::runtime::timer::delay_for(DELAY).await;
now.elapsed()
});
assert!(elapsed >= DELAY);
}
#[test]
fn context_task_id() {
use super::TaskId;
gst::init().unwrap();
let context = Context::acquire("context_task_id", SLEEP_DURATION).unwrap();
let join_handle = context.spawn(async {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(ctx.name(), "context_task_id");
assert_eq!(task_id, TaskId(0));
});
futures::executor::block_on(join_handle).unwrap();
// TaskId(0) is vacant again
let ctx_weak = context.downgrade();
let join_handle = context.spawn(async move {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(0));
let res = ctx.add_sub_task(task_id, async move {
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(0));
Ok(())
});
assert!(res.is_ok());
ctx_weak
.upgrade()
.unwrap()
.spawn(async {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(1));
let res = ctx.add_sub_task(task_id, async move {
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(1));
Ok(())
});
assert!(res.is_ok());
assert!(Context::drain_sub_tasks().await.is_ok());
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(1));
})
.await
.unwrap();
assert!(Context::drain_sub_tasks().await.is_ok());
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(0));
});
futures::executor::block_on(join_handle).unwrap();
}
#[test]
fn drain_sub_tasks() {
// Setup
gst::init().unwrap();
let context = Context::acquire("drain_sub_tasks", SLEEP_DURATION).unwrap();
let join_handle = context.spawn(async {
let (sender, mut receiver) = mpsc::channel(1);
let sender: Arc<Mutex<mpsc::Sender<Item>>> = Arc::new(Mutex::new(sender));
let add_sub_task = move |item| {
let sender = sender.clone();
Context::current_task()
.ok_or(())
.and_then(|(ctx, task_id)| {
ctx.add_sub_task(task_id, async move {
sender
.lock()
.await
.send(item)
.await
.map_err(|_| gst::FlowError::Error)
})
.map_err(drop)
})
};
// Tests
// Drain empty queue
let drain_fut = Context::drain_sub_tasks();
drain_fut.await.unwrap();
// Add a subtask
add_sub_task(0).unwrap();
// Check that it was not executed yet
receiver.try_next().unwrap_err();
// Drain it now and check that it was executed
let drain_fut = Context::drain_sub_tasks();
drain_fut.await.unwrap();
assert_eq!(receiver.try_next().unwrap(), Some(0));
// Add another task and check that it's not executed yet
add_sub_task(1).unwrap();
receiver.try_next().unwrap_err();
// Return the receiver
receiver
});
let mut receiver = futures::executor::block_on(join_handle).unwrap();
// The last sub task should be simply dropped at this point
match receiver.try_next() {
Ok(None) | Err(_) => (),
other => panic!("Unexpected {other:?}"),
}
}
#[test]
fn block_on_from_sync() {
gst::init().unwrap();
let context = Context::acquire("block_on_from_sync", SLEEP_DURATION).unwrap();
let bytes_sent = crate::runtime::executor::block_on(context.spawn(async {
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5001);
let socket = Async::<UdpSocket>::bind(saddr).unwrap();
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4001);
socket.send_to(&[0; 10], saddr).await.unwrap()
}))
.unwrap();
assert_eq!(bytes_sent, 10);
let elapsed = crate::runtime::executor::block_on(context.spawn(async {
let start = Instant::now();
crate::runtime::timer::delay_for(DELAY).await;
start.elapsed()
}))
.unwrap();
// Due to throttling, `Delay` may be fired earlier
assert!(elapsed + SLEEP_DURATION / 2 >= DELAY);
}
#[test]
#[should_panic]
fn block_on_from_context() {
gst::init().unwrap();
let context = Context::acquire("block_on_from_context", SLEEP_DURATION).unwrap();
// Panic: attempt to `runtime::executor::block_on` within a `Context` thread
let join_handle = context.spawn(async {
crate::runtime::executor::block_on(crate::runtime::timer::delay_for(DELAY));
});
// Panic: task has failed
// (enforced by `async-task`, see comment in `Future` impl for `JoinHandle`).
futures::executor::block_on(join_handle).unwrap_err();
}
#[test]
fn enter_context_from_scheduler() {
gst::init().unwrap();
let elapsed = crate::runtime::executor::block_on(async {
let context = Context::acquire("enter_context_from_executor", SLEEP_DURATION).unwrap();
let socket = context
.enter(|| {
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5002);
Async::<UdpSocket>::bind(saddr)
})
.unwrap();
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4002);
let bytes_sent = socket.send_to(&[0; 10], saddr).await.unwrap();
assert_eq!(bytes_sent, 10);
let (start, timer) =
context.enter(|| (Instant::now(), crate::runtime::timer::delay_for(DELAY)));
timer.await;
start.elapsed()
});
// Due to throttling, `Delay` may be fired earlier
assert!(elapsed + SLEEP_DURATION / 2 >= DELAY);
}
#[test]
fn enter_context_from_sync() {
gst::init().unwrap();
let context = Context::acquire("enter_context_from_sync", SLEEP_DURATION).unwrap();
let socket = context
.enter(|| {
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5003);
Async::<UdpSocket>::bind(saddr)
})
.unwrap();
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4003);
let bytes_sent = futures::executor::block_on(socket.send_to(&[0; 10], saddr)).unwrap();
assert_eq!(bytes_sent, 10);
let (start, timer) =
context.enter(|| (Instant::now(), crate::runtime::timer::delay_for(DELAY)));
let elapsed = crate::runtime::executor::block_on(async move {
timer.await;
start.elapsed()
});
// Due to throttling, `Delay` may be fired earlier
assert!(elapsed + SLEEP_DURATION / 2 >= DELAY);
}
}
| if let Some(cur) = Context::current().as_ref() {
if cur == self {
panic!(
"Attempt to enter Context {} within itself, this would deadlock",
self.name()
);
} else {
gst::warning!(
RUNTIME_CAT,
"Entering Context {} within {}",
self.name(),
cur.name()
);
}
} else {
gst::debug!(RUNTIME_CAT, "Entering Context {}", self.name());
}
self.0.enter(f)
} | identifier_body |
context.rs | // Copyright (C) 2018-2020 Sebastian Dröge <[email protected]>
// Copyright (C) 2019-2022 François Laignel <[email protected]>
//
// Take a look at the license at the top of the repository in the LICENSE file.
use futures::prelude::*;
use gst::glib::once_cell::sync::Lazy;
use std::collections::HashMap;
use std::io;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{self, Poll};
use std::time::Duration;
use super::{Handle, HandleWeak, JoinHandle, Scheduler, SubTaskOutput, TaskId};
use crate::runtime::RUNTIME_CAT;
// We are bound to using `sync` for the `runtime` `Mutex`es. Attempts to use `async` `Mutex`es
// lead to the following issues:
//
// * `CONTEXTS`: can't `spawn` a `Future` when called from a `Context` thread via `ffi`.
// * `timers`: can't automatically `remove` the timer from `BinaryHeap` because `async drop`
// is not available.
// * `task_queues`: can't `add` a pending task when called from a `Context` thread via `ffi`.
//
// Also, we want to be able to `acquire` a `Context` outside of an `async` context.
// These `Mutex`es must be `lock`ed for a short period.
static CONTEXTS: Lazy<Mutex<HashMap<Arc<str>, ContextWeak>>> =
Lazy::new(|| Mutex::new(HashMap::new()));
/// Blocks on `future` in one way or another if possible.
///
/// IO & time related `Future`s must be handled within their own [`Context`].
/// Wait for the result using a [`JoinHandle`] or a `channel`.
///
/// If there's currently an active `Context` with a task, then the future is only queued up as a
/// pending sub task for that task.
///
/// Otherwise the current thread blocks and the passed-in future is executed.
///
/// Note that you must not pass any futures here that wait for the currently active task in one way
/// or another as this would deadlock!
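///
/// # Example
///
/// A minimal sketch (`my_fut` stands in for any `Send + 'static` future):
///
/// ```ignore
/// // On a `Context` thread this queues `my_fut` as a sub task of the current
/// // task and returns `None`; otherwise it blocks and returns `Some(output)`.
/// match block_on_or_add_sub_task(my_fut) {
/// Some(output) => println!("ran inline: {output:?}"),
/// None => println!("queued as a sub task of the current task"),
/// }
/// ```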
#[track_caller]
pub fn block_on_or_add_sub_task<Fut>(future: Fut) -> Option<Fut::Output>
where
Fut: Future + Send + 'static,
Fut::Output: Send + 'static,
{
if let Some((cur_context, cur_task_id)) = Context::current_task() {
gst::debug!(
RUNTIME_CAT,
"Adding subtask to task {:?} on context {}",
cur_task_id,
cur_context.name()
);
let _ = cur_context.add_sub_task(cur_task_id, async move {
future.await;
Ok(())
});
return None;
}
// Not running in a Context thread so we can block
Some(block_on(future))
}
/// Blocks on `future`.
///
/// IO & time related `Future`s must be handled within their own [`Context`].
/// Wait for the result using a [`JoinHandle`] or a `channel`.
///
/// The current thread blocks and the passed-in future is executed.
///
/// # Panics
///
/// This function panics if called within a [`Context`] thread.
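///
/// # Example
///
/// A minimal sketch; must be called from a regular (non-`Context`) thread:
///
/// ```ignore
/// assert_eq!(block_on(async { 1 + 1 }), 2);
/// ```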
#[track_caller]
pub fn block_on<Fut>(future: Fut) -> Fut::Output
where
Fut: Future + Send + 'static,
Fut::Output: Send + 'static,
{
if let Some(context) = Context::current() {
let msg = format!("Attempt to block within Context {}", context.name());
gst::error!(RUNTIME_CAT, "{}", msg);
panic!("{}", msg);
}
// Not running in a Context thread so we can block
gst::debug!(RUNTIME_CAT, "Blocking on new dummy context");
Scheduler::block_on(future)
}
/// Yields execution back to the runtime.
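///
/// The returned [`YieldNow`] completes on its second poll, giving other tasks
/// scheduled on the same thread a chance to run in between.
///
/// ```ignore
/// // Inside an async task running on a Context:
/// yield_now().await;
/// ```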
#[inline]
pub fn yield_now() -> YieldNow {
YieldNow::default()
}
#[derive(Debug, Default)]
#[must_use = "futures do nothing unless you `.await` or poll them"]
pub struct YieldNow(bool);
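// The inner flag records whether this future has already yielded once: the
// first poll wakes the waker and returns `Pending`, the second returns
// `Ready(())`.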
impl Future for YieldNow {
type Output = ();
fn poll(mut self: Pin<&mut Self>, cx: &mut task::Context<'_>) -> Poll<Self::Output> {
if !self.0 {
self.0 = true;
cx.waker().wake_by_ref();
Poll::Pending
} else {
Poll::Ready(())
}
}
}
#[derive(Clone, Debug)]
pub struct ContextWeak(HandleWeak);
impl ContextWeak {
pub fn upgrade(&self) -> Option<Context> {
self.0.upgrade().map(Context)
}
}
/// A `threadshare` `runtime` `Context`.
///
/// The `Context` provides low-level asynchronous processing features to
/// multiplex task execution on a single thread.
///
/// `Element` implementations should use [`PadSrc`] and [`PadSink`] which
/// provide high-level features.
///
/// [`PadSrc`]:../pad/struct.PadSrc.html
/// [`PadSink`]:../pad/struct.PadSink.html
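///
/// # Example
///
/// A minimal sketch, mirroring the tests below:
///
/// ```ignore
/// let ctx = Context::acquire("demo", std::time::Duration::from_millis(2))?;
/// let handle = ctx.spawn(async { 40 + 2 });
/// assert_eq!(futures::executor::block_on(handle).unwrap(), 42);
/// ```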
#[derive(Clone, Debug)]
pub struct Context(Handle);
impl PartialEq for Context {
fn eq(&self, other: &Self) -> bool {
self.0.eq(&other.0)
}
}
impl Eq for Context {}
impl Context {
pub fn acquire(context_name: &str, wait: Duration) -> Result<Self, io::Error> {
assert_ne!(context_name, Scheduler::DUMMY_NAME);
let mut contexts = CONTEXTS.lock().unwrap();
if let Some(context_weak) = contexts.get(context_name) {
if let Some(context) = context_weak.upgrade() {
gst::debug!(RUNTIME_CAT, "Joining Context '{}'", context.name());
return Ok(context);
}
}
let context = Context(Scheduler::start(context_name, wait));
contexts.insert(context_name.into(), context.downgrade());
gst::debug!(
RUNTIME_CAT,
"New Context '{}' throttling {:?}",
context.name(),
wait,
);
Ok(context)
}
pub fn downgrade(&self) -> ContextWeak {
ContextWeak(self.0.downgrade())
}
pub fn name(&self) -> &str {
self.0.context_name()
}
// FIXME this could be renamed as max_throttling
// but then, all elements should also change their
// wait variables and properties to max_throttling.
pub fn wait_duration(&self) -> Duration {
self.0.max_throttling()
}
/// Total duration the scheduler spent parked.
///
/// This is only useful for performance evaluation.
#[cfg(feature = "tuning")]
pub fn parked_duration(&self) -> Duration {
self.0.parked_duration()
}
/// Returns `true` if a `Context` is running on current thread.
pub fn is_context_thread() -> bool {
Scheduler::is_scheduler_thread()
}
/// Returns the `Context` running on current thread, if any.
pub fn current() -> Option<Context> {
Scheduler::current().map(Context)
}
/// Returns the `TaskId` running on current thread, if any.
pub fn current_task() -> Option<(Context, TaskId)> {
Scheduler::current().map(|scheduler| {
// Context users always operate on a Task
(Context(scheduler), TaskId::current().unwrap())
})
}
/// Executes the provided function relative to this [`Context`].
///
/// Useful for initializing I/O sources and timers from outside
/// of a [`Context`].
///
/// # Panics
///
/// This blocks the current thread and panics if run from within
/// the [`Context`] itself.
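///
/// # Example
///
/// A minimal sketch, mirroring the tests below: create an I/O source from
/// outside the [`Context`] so that it is registered with this context's
/// reactor (`saddr` is assumed to be a valid `SocketAddr`):
///
/// ```ignore
/// let socket = context.enter(|| Async::<UdpSocket>::bind(saddr))?;
/// ```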
#[track_caller]
pub fn enter<'a, F, O>(&'a self, f: F) -> O
where
F: FnOnce() -> O + Send + 'a,
O: Send + 'a,
{
if let Some(cur) = Context::current().as_ref() {
if cur == self {
panic!(
"Attempt to enter Context {} within itself, this would deadlock",
self.name()
);
} else {
gst::warning!(
RUNTIME_CAT,
"Entering Context {} within {}",
self.name(),
cur.name()
);
}
} else {
gst::debug!(RUNTIME_CAT, "Entering Context {}", self.name());
}
self.0.enter(f)
}
pub fn spawn<Fut>(&self, future: Fut) -> JoinHandle<Fut::Output>
where
Fut: Future + Send + 'static,
Fut::Output: Send + 'static,
{
self.0.spawn(future)
}
pub fn spawn_and_unpark<Fut>(&self, future: Fut) -> JoinHandle<Fut::Output>
where
Fut: Future + Send + 'static,
Fut::Output: Send + 'static,
{
self.0.spawn_and_unpark(future)
}
/// Forces the scheduler to unpark.
///
/// This is not needed by element implementors as they are
/// supposed to call [`Self::spawn_and_unpark`] when needed.
/// However, it's useful for lower level implementations such as
/// `runtime::Task` so as to make sure the iteration loop yields
/// as soon as possible when a transition is requested.
pub(in crate::runtime) fn unpark(&self) {
self.0.unpark();
}
pub fn add_sub_task<T>(&self, task_id: TaskId, sub_task: T) -> Result<(), T>
where
T: Future<Output = SubTaskOutput> + Send + 'static,
{
self.0.add_sub_task(task_id, sub_task)
}
pub async fn drain_sub_tasks() -> SubTaskOutput {
let (ctx, task_id) = match Context::current_task() {
Some(task) => task,
None => return Ok(()),
};
ctx.0.drain_sub_tasks(task_id).await
}
}
impl From<Handle> for Context {
fn fr | andle: Handle) -> Self {
Context(handle)
}
}
#[cfg(test)]
mod tests {
use futures::channel::mpsc;
use futures::lock::Mutex;
use futures::prelude::*;
use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket};
use std::sync::Arc;
use std::time::{Duration, Instant};
use super::super::Scheduler;
use super::Context;
use crate::runtime::Async;
type Item = i32;
const SLEEP_DURATION_MS: u64 = 2;
const SLEEP_DURATION: Duration = Duration::from_millis(SLEEP_DURATION_MS);
const DELAY: Duration = Duration::from_millis(SLEEP_DURATION_MS * 10);
#[test]
fn block_on_task_id() {
gst::init().unwrap();
assert!(!Context::is_context_thread());
crate::runtime::executor::block_on(async {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(ctx.name(), Scheduler::DUMMY_NAME);
assert_eq!(task_id, super::TaskId(0));
let res = ctx.add_sub_task(task_id, async move {
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, super::TaskId(0));
Ok(())
});
assert!(res.is_ok());
assert!(Context::is_context_thread());
});
assert!(!Context::is_context_thread());
}
#[test]
fn block_on_timer() {
gst::init().unwrap();
let elapsed = crate::runtime::executor::block_on(async {
let now = Instant::now();
crate::runtime::timer::delay_for(DELAY).await;
now.elapsed()
});
assert!(elapsed >= DELAY);
}
#[test]
fn context_task_id() {
use super::TaskId;
gst::init().unwrap();
let context = Context::acquire("context_task_id", SLEEP_DURATION).unwrap();
let join_handle = context.spawn(async {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(ctx.name(), "context_task_id");
assert_eq!(task_id, TaskId(0));
});
futures::executor::block_on(join_handle).unwrap();
// TaskId(0) is vacant again
let ctx_weak = context.downgrade();
let join_handle = context.spawn(async move {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(0));
let res = ctx.add_sub_task(task_id, async move {
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(0));
Ok(())
});
assert!(res.is_ok());
ctx_weak
.upgrade()
.unwrap()
.spawn(async {
let (ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(1));
let res = ctx.add_sub_task(task_id, async move {
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(1));
Ok(())
});
assert!(res.is_ok());
assert!(Context::drain_sub_tasks().await.is_ok());
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(1));
})
.await
.unwrap();
assert!(Context::drain_sub_tasks().await.is_ok());
let (_ctx, task_id) = Context::current_task().unwrap();
assert_eq!(task_id, TaskId(0));
});
futures::executor::block_on(join_handle).unwrap();
}
#[test]
fn drain_sub_tasks() {
// Setup
gst::init().unwrap();
let context = Context::acquire("drain_sub_tasks", SLEEP_DURATION).unwrap();
let join_handle = context.spawn(async {
let (sender, mut receiver) = mpsc::channel(1);
let sender: Arc<Mutex<mpsc::Sender<Item>>> = Arc::new(Mutex::new(sender));
let add_sub_task = move |item| {
let sender = sender.clone();
Context::current_task()
.ok_or(())
.and_then(|(ctx, task_id)| {
ctx.add_sub_task(task_id, async move {
sender
.lock()
.await
.send(item)
.await
.map_err(|_| gst::FlowError::Error)
})
.map_err(drop)
})
};
// Tests
// Drain empty queue
let drain_fut = Context::drain_sub_tasks();
drain_fut.await.unwrap();
// Add a subtask
add_sub_task(0).unwrap();
// Check that it was not executed yet
receiver.try_next().unwrap_err();
// Drain it now and check that it was executed
let drain_fut = Context::drain_sub_tasks();
drain_fut.await.unwrap();
assert_eq!(receiver.try_next().unwrap(), Some(0));
// Add another task and check that it's not executed yet
add_sub_task(1).unwrap();
receiver.try_next().unwrap_err();
// Return the receiver
receiver
});
let mut receiver = futures::executor::block_on(join_handle).unwrap();
// The last sub task should be simply dropped at this point
match receiver.try_next() {
Ok(None) | Err(_) => (),
other => panic!("Unexpected {other:?}"),
}
}
#[test]
fn block_on_from_sync() {
gst::init().unwrap();
let context = Context::acquire("block_on_from_sync", SLEEP_DURATION).unwrap();
let bytes_sent = crate::runtime::executor::block_on(context.spawn(async {
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5001);
let socket = Async::<UdpSocket>::bind(saddr).unwrap();
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4001);
socket.send_to(&[0; 10], saddr).await.unwrap()
}))
.unwrap();
assert_eq!(bytes_sent, 10);
let elapsed = crate::runtime::executor::block_on(context.spawn(async {
let start = Instant::now();
crate::runtime::timer::delay_for(DELAY).await;
start.elapsed()
}))
.unwrap();
// Due to throttling, `Delay` may be fired earlier
assert!(elapsed + SLEEP_DURATION / 2 >= DELAY);
}
#[test]
#[should_panic]
fn block_on_from_context() {
gst::init().unwrap();
let context = Context::acquire("block_on_from_context", SLEEP_DURATION).unwrap();
// Panic: attempt to `runtime::executor::block_on` within a `Context` thread
let join_handle = context.spawn(async {
crate::runtime::executor::block_on(crate::runtime::timer::delay_for(DELAY));
});
// Panic: task has failed
// (enforced by `async-task`, see comment in `Future` impl for `JoinHandle`).
futures::executor::block_on(join_handle).unwrap_err();
}
#[test]
fn enter_context_from_scheduler() {
gst::init().unwrap();
let elapsed = crate::runtime::executor::block_on(async {
let context = Context::acquire("enter_context_from_executor", SLEEP_DURATION).unwrap();
let socket = context
.enter(|| {
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5002);
Async::<UdpSocket>::bind(saddr)
})
.unwrap();
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4002);
let bytes_sent = socket.send_to(&[0; 10], saddr).await.unwrap();
assert_eq!(bytes_sent, 10);
let (start, timer) =
context.enter(|| (Instant::now(), crate::runtime::timer::delay_for(DELAY)));
timer.await;
start.elapsed()
});
// Due to throttling, `Delay` may be fired earlier
assert!(elapsed + SLEEP_DURATION / 2 >= DELAY);
}
#[test]
fn enter_context_from_sync() {
gst::init().unwrap();
let context = Context::acquire("enter_context_from_sync", SLEEP_DURATION).unwrap();
let socket = context
.enter(|| {
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5003);
Async::<UdpSocket>::bind(saddr)
})
.unwrap();
let saddr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4003);
let bytes_sent = futures::executor::block_on(socket.send_to(&[0; 10], saddr)).unwrap();
assert_eq!(bytes_sent, 10);
let (start, timer) =
context.enter(|| (Instant::now(), crate::runtime::timer::delay_for(DELAY)));
let elapsed = crate::runtime::executor::block_on(async move {
timer.await;
start.elapsed()
});
// Due to throttling, `Delay` may be fired earlier
assert!(elapsed + SLEEP_DURATION / 2 >= DELAY);
}
}
| om(h | identifier_name |
google_spreadsheets.rs | use std::collections::{HashMap, HashSet};
use std::convert::TryFrom;
use std::sync::Arc;
use datafusion::arrow::array::{ArrayRef, BooleanArray, PrimitiveArray, StringArray};
use datafusion::arrow::datatypes::{DataType, Field, Schema};
use datafusion::arrow::datatypes::{Float64Type, Int64Type};
use datafusion::arrow::record_batch::RecordBatch;
use regex::Regex;
use reqwest::Client;
use serde_derive::Deserialize;
use uriparse::URIReference;
use crate::error::ColumnQError;
use crate::table::{TableOptionGoogleSpreasheet, TableSource};
// steps
// * Activate the Google Sheets API in the Google API Console.
//
// * Create service account: https://console.developers.google.com/apis/api/sheets.googleapis.com/credentials?project=roapi-302505
// * create key and save the json format somewhere safe
// * Share spreadsheet with service account
#[derive(Deserialize, Debug)]
struct SheetProperties {
#[serde(rename = "sheetId")]
sheet_id: usize,
title: String,
index: usize,
// other unused attributes:
//
// "sheetType": "GRID",
// "gridProperties": {
// "rowCount": 1000,
// "columnCount": 28
// }
//
// see: https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/sheets#SheetProperties
}
#[derive(Deserialize, Debug)]
struct Sheet {
properties: SheetProperties,
// for all available fields, see:
// https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets#Spreadsheet
}
#[derive(Deserialize, Debug)]
struct Spreadsheets {
sheets: Vec<Sheet>,
// other unused attributes:
// * spreadsheetId
// * properties
// * spreadsheetUrl
//
// see: https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets
}
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct SpreadsheetValues {
range: String,
#[serde(rename = "majorDimension")]
major_dimension: String,
values: Vec<Vec<String>>,
}
// TODO: should we support optional column?
fn infer_value_type(v: &str) -> DataType {
// match order matters
match v {
// TODO: support Date64 and Time64
_ if v.parse::<i64>().is_ok() => DataType::Int64,
_ if v.parse::<f64>().is_ok() => DataType::Float64,
_ => match v.to_lowercase().as_str() {
"false" | "true" => DataType::Boolean,
_ => DataType::Utf8,
},
}
}
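// For example, under the rules above:
// infer_value_type("42") -> DataType::Int64
// infer_value_type("3.14") -> DataType::Float64
// infer_value_type("TRUE") -> DataType::Boolean
// infer_value_type("hello") -> DataType::Utf8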
// util wrapper for calling the Google Sheets API
async fn gs_api_get(token: &str, url: &str) -> Result<reqwest::Response, ColumnQError> {
Client::builder()
.build()
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to initialize HTTP client: {}",
e.to_string()
))
})?
.get(url)
.bearer_auth(token)
.send()
.await
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to send API request: {}",
e.to_string()
))
})
}
fn coerce_type(l: DataType, r: DataType) -> DataType {
match (l, r) {
(DataType::Boolean, DataType::Boolean) => DataType::Boolean,
(DataType::Date32, DataType::Date32) => DataType::Date32,
(DataType::Date64, DataType::Date64)
| (DataType::Date64, DataType::Date32)
| (DataType::Date32, DataType::Date64) => DataType::Date64,
(DataType::Int64, DataType::Int64) => DataType::Int64,
(DataType::Float64, DataType::Float64)
| (DataType::Float64, DataType::Int64)
| (DataType::Int64, DataType::Float64) => DataType::Float64,
_ => DataType::Utf8,
}
}
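// For example: (Int64, Float64) widens to Float64 and (Date32, Date64) widens
// to Date64, while any pair not covered above (e.g. Int64 vs Boolean) falls
// back to Utf8.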
fn infer_schema(rows: &[Vec<String>]) -> Schema {
let mut col_types: HashMap<&str, HashSet<DataType>> = HashMap::new();
let col_names = &rows[0];
rows.iter().skip(1).for_each(|row| {
row.iter().enumerate().for_each(|(i, col_val)| {
let col_name = &col_names[i];
let col_type = infer_value_type(col_val);
let entry = col_types.entry(col_name).or_insert_with(HashSet::new);
entry.insert(col_type);
});
});
let fields: Vec<Field> = col_names
.iter()
.map(|col_name| {
let set = col_types.entry(col_name).or_insert_with(|| {
// TODO: this should never happen, maybe we should use panic instead?
let mut set = HashSet::new();
set.insert(DataType::Utf8);
set
});
let mut dt_iter = set.iter().cloned();
let dt_init = dt_iter.next().unwrap_or(DataType::Utf8);
let dt = dt_iter.fold(dt_init, coerce_type);
// normalize the column name by replacing spaces with underscores
Field::new(&col_name.replace(" ", "_"), dt, false)
})
.collect();
Schema::new(fields)
}
fn parse_boolean(s: &str) -> bool {
s.eq_ignore_ascii_case("true")
}
fn sheet_values_to_record_batch(values: &[Vec<String>]) -> Result<RecordBatch, ColumnQError> {
let schema = infer_schema(values);
let arrays = schema
.fields()
.iter()
.enumerate()
.map(|(i, field)| {
// skip header row
let rows_iter = values.iter().skip(1);
Ok(match field.data_type() {
DataType::Boolean => Arc::new(
rows_iter
.map(|row| Some(parse_boolean(&row[i])))
.collect::<BooleanArray>(),
) as ArrayRef,
DataType::Int64 => Arc::new(
rows_iter
.map(|row| {
Ok(Some(row[i].parse::<i64>().map_err(|_| {
ColumnQError::GoogleSpreadsheets(format!(
"Expect int64 value, got: {}",
row[i]
))
})?))
})
.collect::<Result<PrimitiveArray<Int64Type>, ColumnQError>>()?,
) as ArrayRef,
DataType::Float64 => Arc::new(
rows_iter
.map(|row| {
Ok(Some(row[i].parse::<f64>().map_err(|_| {
ColumnQError::GoogleSpreadsheets(format!(
"Expect float64 value, got: {}",
row[i]
))
})?))
})
.collect::<Result<PrimitiveArray<Float64Type>, ColumnQError>>()?,
) as ArrayRef,
_ => Arc::new(rows_iter.map(|row| Some(&row[i])).collect::<StringArray>())
as ArrayRef,
})
})
.collect::<Result<Vec<ArrayRef>, ColumnQError>>()?;
Ok(RecordBatch::try_new(Arc::new(schema), arrays)?)
}
async fn fetch_auth_token(
opt: &TableOptionGoogleSpreasheet,
) -> Result<yup_oauth2::AccessToken, ColumnQError> {
// Read application creds from a file. The client secret file contains JSON like
// `{"installed":{"client_id":... }}`
let creds = yup_oauth2::read_service_account_key(&opt.application_secret_path)
.await
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Error reading application secret from disk: {}",
e.to_string()
))
})?;
let sa = yup_oauth2::ServiceAccountAuthenticator::builder(creds)
.build()
.await
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Error building service account authenticator: {}",
e.to_string()
))
})?;
let scopes = &["https://www.googleapis.com/auth/spreadsheets.readonly"];
sa.token(scopes).await.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!("Failed to obtain OAuth2 token: {}", e.to_string()))
})
}
async fn resolve_sheet_title<'a, 'b, 'c, 'd>(
token: &'a str,
spreadsheet_id: &'b str,
uri: &'c URIReference<'d>,
) -> Result<String, ColumnQError> {
// look up sheet title by sheet id through API
let resp = gs_api_get(
token,
&format!(
"https://sheets.googleapis.com/v4/spreadsheets/{}",
spreadsheet_id
),
)
.await?
.error_for_status()
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to resolve sheet title from API: {}",
e.to_string()
))
})?;
let spreadsheets = resp.json::<Spreadsheets>().await.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!("Failed to parse API response: {}", e.to_string()))
})?;
// when sheet id is not specified from config, try to parse it from URI
let sheet_id: Option<usize> = match uri.fragment() {
// if the sheet id is specified within the URI in the format of #gid=x
Some(fragment) => {
let s = fragment.as_str();
let parts: Vec<&str> = s.split('=').collect();
match parts.len() {
2 => match parts[0] {
"gid" => parts[1].parse().ok(),
_ => None,
},
_ => None,
}
}
None => None,
};
let sheet = match sheet_id {
Some(id) => spreadsheets
.sheets
.iter()
.find(|s| s.properties.sheet_id == id)
.ok_or_else(|| ColumnQError::GoogleSpreadsheets(format!("Invalid sheet id {}", id)))?,
// no sheet id specified, default to the first sheet
None => spreadsheets
.sheets
.iter()
.find(|s| s.properties.index == 0)
.ok_or_else(|| ColumnQError::GoogleSpreadsheets("spreadsheets is empty".to_string()))?,
};
Ok(sheet.properties.title.clone())
}
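// For example, a spreadsheet URL such as
// https://docs.google.com/spreadsheets/d/<spreadsheet_id>/edit#gid=123456
// carries the sheet id in its `#gid=` fragment, which is parsed above.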
pub async fn to_mem_table(
t: &TableSource,
) -> Result<datafusion::datasource::MemTable, ColumnQError> {
lazy_static! {
static ref RE_GOOGLE_SHEET: Regex =
Regex::new(r"https://docs.google.com/spreadsheets/d/(.+)").unwrap();
}
let uri_str = t.get_uri_str();
if RE_GOOGLE_SHEET.captures(uri_str).is_none() |
let uri = URIReference::try_from(uri_str)?;
let spreadsheet_id = uri.path().segments()[2].as_str();
let opt = t
.option
.as_ref()
.ok_or(ColumnQError::MissingOption)?
.as_google_spreadsheet()?;
let token = fetch_auth_token(opt).await?;
let token_str = token.as_str();
let sheet_title = match &opt.sheet_title {
Some(t) => t.clone(),
None => resolve_sheet_title(token_str, spreadsheet_id, &uri).await?,
};
let resp = gs_api_get(
token_str,
&format!(
"https://sheets.googleapis.com/v4/spreadsheets/{}/values/{}",
spreadsheet_id, sheet_title,
),
)
.await?
.error_for_status()
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to load sheet value from API: {}",
e.to_string()
))
})?;
let sheet = resp.json::<SpreadsheetValues>().await.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!("Failed to parse API response: {}", e.to_string()))
})?;
let batch = sheet_values_to_record_batch(&sheet.values)?;
let schema_ref = batch.schema();
let partitions = vec![vec![batch]];
Ok(datafusion::datasource::MemTable::try_new(
schema_ref, partitions,
)?)
}
#[cfg(test)]
mod tests {
use super::*;
use datafusion::arrow::array::{BooleanArray, Int64Array};
fn row(raw: &[&str]) -> Vec<String> {
raw.iter().map(|s| s.to_string()).collect()
}
fn property_sheet() -> SpreadsheetValues {
SpreadsheetValues {
range: "Properties!A1:AB1000".to_string(),
major_dimension: "ROWS".to_string(),
values: vec![
row(&[
"Address",
"Image",
"Landlord",
"Bed",
"Bath",
"Occupied",
"Monthly Rent",
"Lease Expiration Date",
"Days Until Expiration",
]),
row(&[
"Bothell, WA",
"https://a.com/1.jpeg",
"Roger",
"3",
"2",
"FALSE",
"$2,000",
"10/23/2020",
"Expired",
]),
row(&[
"Mill Creek, WA",
"https://a.com/2.jpeg",
"Sam",
"3",
"3",
"TRUE",
"$3,500",
"8/4/2021",
"193",
]),
row(&[
"Fremont, WA",
"",
"Daniel",
"5",
"3",
"FALSE",
"$4,500",
"7/13/2019",
"Expired",
]),
row(&[
"Shoreline, WA",
"https://a.com/3.jpeg",
"Roger",
"1",
"1",
"TRUE",
"$1,200",
"12/9/2021",
"320",
]),
],
}
}
#[test]
fn schema_inference() {
let sheet = property_sheet();
let schema = infer_schema(&sheet.values);
assert_eq!(
schema,
Schema::new(vec![
Field::new("Address", DataType::Utf8, false),
Field::new("Image", DataType::Utf8, false),
Field::new("Landlord", DataType::Utf8, false),
Field::new("Bed", DataType::Int64, false),
Field::new("Bath", DataType::Int64, false),
Field::new("Occupied", DataType::Boolean, false),
Field::new("Monthly_Rent", DataType::Utf8, false),
Field::new("Lease_Expiration_Date", DataType::Utf8, false),
Field::new("Days_Until_Expiration", DataType::Utf8, false),
])
);
}
#[test]
fn sheetvalue_to_record_batch() -> anyhow::Result<()> {
let sheet = property_sheet();
let batch = sheet_values_to_record_batch(&sheet.values)?;
assert_eq!(batch.num_columns(), 9);
assert_eq!(
batch.column(3).as_ref(),
Arc::new(Int64Array::from(vec![3, 3, 5, 1])).as_ref(),
);
assert_eq!(
batch.column(5).as_ref(),
Arc::new(BooleanArray::from(vec![false, true, false, true])).as_ref(),
);
assert_eq!(
batch.column(2).as_ref(),
Arc::new(StringArray::from(vec!["Roger", "Sam", "Daniel", "Roger"])).as_ref(),
);
Ok(())
}
}
| {
return Err(ColumnQError::InvalidUri(uri_str.to_string()));
} | conditional_block |
google_spreadsheets.rs | use std::collections::{HashMap, HashSet};
use std::convert::TryFrom;
use std::sync::Arc;
use datafusion::arrow::array::{ArrayRef, BooleanArray, PrimitiveArray, StringArray};
use datafusion::arrow::datatypes::{DataType, Field, Schema};
use datafusion::arrow::datatypes::{Float64Type, Int64Type};
use datafusion::arrow::record_batch::RecordBatch;
use regex::Regex;
use reqwest::Client;
use serde_derive::Deserialize;
use uriparse::URIReference;
use crate::error::ColumnQError;
use crate::table::{TableOptionGoogleSpreasheet, TableSource};
// steps
// * Activate the Google Sheets API in the Google API Console.
//
// * Create service account: https://console.developers.google.com/apis/api/sheets.googleapis.com/credentials?project=roapi-302505
// * create key and save the json format somewhere safe
// * Share spreadsheet with service account
#[derive(Deserialize, Debug)]
struct SheetProperties {
#[serde(rename = "sheetId")]
sheet_id: usize,
title: String,
index: usize,
// other unused attributes:
//
// "sheetType": "GRID",
// "gridProperties": {
// "rowCount": 1000,
// "columnCount": 28
// }
//
// see: https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/sheets#SheetProperties
}
#[derive(Deserialize, Debug)]
struct Sheet {
properties: SheetProperties,
// for all available fields, see:
// https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets#Spreadsheet
}
#[derive(Deserialize, Debug)]
struct Spreadsheets {
sheets: Vec<Sheet>,
// other unused attributes:
// * spreadsheetId
// * properties
// * spreadsheetUrl
//
// see: https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets
}
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct SpreadsheetValues {
range: String,
#[serde(rename = "majorDimension")]
major_dimension: String,
values: Vec<Vec<String>>,
}
// TODO: should we support optional column?
fn infer_value_type(v: &str) -> DataType {
// match order matters
match v {
// TODO: support Date64 and Time64
_ if v.parse::<i64>().is_ok() => DataType::Int64,
_ if v.parse::<f64>().is_ok() => DataType::Float64,
_ => match v.to_lowercase().as_str() {
"false" | "true" => DataType::Boolean,
_ => DataType::Utf8,
},
}
}
// util wrapper for calling the Google Sheets API
async fn gs_api_get(token: &str, url: &str) -> Result<reqwest::Response, ColumnQError> {
Client::builder()
.build()
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to initialize HTTP client: {}",
e.to_string()
))
})?
.get(url)
.bearer_auth(token)
.send()
.await
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to send API request: {}",
e.to_string()
))
})
}
fn coerce_type(l: DataType, r: DataType) -> DataType {
match (l, r) {
(DataType::Boolean, DataType::Boolean) => DataType::Boolean,
(DataType::Date32, DataType::Date32) => DataType::Date32,
(DataType::Date64, DataType::Date64)
| (DataType::Date64, DataType::Date32)
| (DataType::Date32, DataType::Date64) => DataType::Date64,
(DataType::Int64, DataType::Int64) => DataType::Int64,
(DataType::Float64, DataType::Float64)
| (DataType::Float64, DataType::Int64)
| (DataType::Int64, DataType::Float64) => DataType::Float64,
_ => DataType::Utf8,
}
}
fn infer_schema(rows: &[Vec<String>]) -> Schema {
let mut col_types: HashMap<&str, HashSet<DataType>> = HashMap::new();
let col_names = &rows[0];
rows.iter().skip(1).for_each(|row| {
row.iter().enumerate().for_each(|(i, col_val)| {
let col_name = &col_names[i];
let col_type = infer_value_type(col_val);
let entry = col_types.entry(col_name).or_insert_with(HashSet::new);
entry.insert(col_type);
});
}); | let fields: Vec<Field> = col_names
.iter()
.map(|col_name| {
let set = col_types.entry(col_name).or_insert_with(|| {
// TODO: this should never happen, maybe we should use panic instead?
let mut set = HashSet::new();
set.insert(DataType::Utf8);
set
});
let mut dt_iter = set.iter().cloned();
let dt_init = dt_iter.next().unwrap_or(DataType::Utf8);
let dt = dt_iter.fold(dt_init, coerce_type);
// normalize the column name by replacing spaces with underscores
Field::new(&col_name.replace(" ", "_"), dt, false)
})
.collect();
Schema::new(fields)
}
fn parse_boolean(s: &str) -> bool {
s.eq_ignore_ascii_case("true")
}
fn sheet_values_to_record_batch(values: &[Vec<String>]) -> Result<RecordBatch, ColumnQError> {
let schema = infer_schema(values);
let arrays = schema
.fields()
.iter()
.enumerate()
.map(|(i, field)| {
// skip header row
let rows_iter = values.iter().skip(1);
Ok(match field.data_type() {
DataType::Boolean => Arc::new(
rows_iter
.map(|row| Some(parse_boolean(&row[i])))
.collect::<BooleanArray>(),
) as ArrayRef,
DataType::Int64 => Arc::new(
rows_iter
.map(|row| {
Ok(Some(row[i].parse::<i64>().map_err(|_| {
ColumnQError::GoogleSpreadsheets(format!(
"Expect int64 value, got: {}",
row[i]
))
})?))
})
.collect::<Result<PrimitiveArray<Int64Type>, ColumnQError>>()?,
) as ArrayRef,
DataType::Float64 => Arc::new(
rows_iter
.map(|row| {
Ok(Some(row[i].parse::<f64>().map_err(|_| {
ColumnQError::GoogleSpreadsheets(format!(
"Expect float64 value, got: {}",
row[i]
))
})?))
})
.collect::<Result<PrimitiveArray<Float64Type>, ColumnQError>>()?,
) as ArrayRef,
_ => Arc::new(rows_iter.map(|row| Some(&row[i])).collect::<StringArray>())
as ArrayRef,
})
})
.collect::<Result<Vec<ArrayRef>, ColumnQError>>()?;
Ok(RecordBatch::try_new(Arc::new(schema), arrays)?)
}
async fn fetch_auth_token(
opt: &TableOptionGoogleSpreasheet,
) -> Result<yup_oauth2::AccessToken, ColumnQError> {
// Read application creds from a file. The client secret file contains JSON like
// `{"installed":{"client_id":... }}`
let creds = yup_oauth2::read_service_account_key(&opt.application_secret_path)
.await
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Error reading application secret from disk: {}",
e.to_string()
))
})?;
let sa = yup_oauth2::ServiceAccountAuthenticator::builder(creds)
.build()
.await
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Error building service account authenticator: {}",
e.to_string()
))
})?;
let scopes = &["https://www.googleapis.com/auth/spreadsheets.readonly"];
sa.token(scopes).await.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!("Failed to obtain OAuth2 token: {}", e.to_string()))
})
}
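// A sketch of the service-account key file read above (fields abridged; the
// JSON downloaded from the Google console contains more entries):
// {
// "type": "service_account",
// "client_email": "[email protected]",
// "private_key": "-----BEGIN PRIVATE KEY-----\n..."
// }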
async fn resolve_sheet_title<'a, 'b, 'c, 'd>(
token: &'a str,
spreadsheet_id: &'b str,
uri: &'c URIReference<'d>,
) -> Result<String, ColumnQError> {
// look up sheet title by sheet id through API
let resp = gs_api_get(
token,
&format!(
"https://sheets.googleapis.com/v4/spreadsheets/{}",
spreadsheet_id
),
)
.await?
.error_for_status()
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to resolve sheet title from API: {}",
e.to_string()
))
})?;
let spreadsheets = resp.json::<Spreadsheets>().await.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!("Failed to parse API response: {}", e.to_string()))
})?;
// when sheet id is not specified from config, try to parse it from URI
let sheet_id: Option<usize> = match uri.fragment() {
// if the sheet id is specified within the URI in the format of #gid=x
Some(fragment) => {
let s = fragment.as_str();
let parts: Vec<&str> = s.split('=').collect();
match parts.len() {
2 => match parts[0] {
"gid" => parts[1].parse().ok(),
_ => None,
},
_ => None,
}
}
None => None,
};
let sheet = match sheet_id {
Some(id) => spreadsheets
.sheets
.iter()
.find(|s| s.properties.sheet_id == id)
.ok_or_else(|| ColumnQError::GoogleSpreadsheets(format!("Invalid sheet id {}", id)))?,
// no sheet id specified, default to the first sheet
None => spreadsheets
.sheets
.iter()
.find(|s| s.properties.index == 0)
.ok_or_else(|| ColumnQError::GoogleSpreadsheets("spreadsheets is empty".to_string()))?,
};
Ok(sheet.properties.title.clone())
}
pub async fn to_mem_table(
t: &TableSource,
) -> Result<datafusion::datasource::MemTable, ColumnQError> {
lazy_static! {
static ref RE_GOOGLE_SHEET: Regex =
Regex::new(r"https://docs.google.com/spreadsheets/d/(.+)").unwrap();
}
let uri_str = t.get_uri_str();
if RE_GOOGLE_SHEET.captures(uri_str).is_none() {
return Err(ColumnQError::InvalidUri(uri_str.to_string()));
}
let uri = URIReference::try_from(uri_str)?;
let spreadsheet_id = uri.path().segments()[2].as_str();
let opt = t
.option
.as_ref()
.ok_or(ColumnQError::MissingOption)?
.as_google_spreadsheet()?;
let token = fetch_auth_token(opt).await?;
let token_str = token.as_str();
let sheet_title = match &opt.sheet_title {
Some(t) => t.clone(),
None => resolve_sheet_title(token_str, spreadsheet_id, &uri).await?,
};
let resp = gs_api_get(
token_str,
&format!(
"https://sheets.googleapis.com/v4/spreadsheets/{}/values/{}",
spreadsheet_id, sheet_title,
),
)
.await?
.error_for_status()
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to load sheet value from API: {}",
e.to_string()
))
})?;
let sheet = resp.json::<SpreadsheetValues>().await.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!("Failed to parse API response: {}", e.to_string()))
})?;
let batch = sheet_values_to_record_batch(&sheet.values)?;
let schema_ref = batch.schema();
let partitions = vec![vec![batch]];
Ok(datafusion::datasource::MemTable::try_new(
schema_ref, partitions,
)?)
}
#[cfg(test)]
mod tests {
use super::*;
use datafusion::arrow::array::{BooleanArray, Int64Array};
fn row(raw: &[&str]) -> Vec<String> {
raw.iter().map(|s| s.to_string()).collect()
}
fn property_sheet() -> SpreadsheetValues {
SpreadsheetValues {
range: "Properties!A1:AB1000".to_string(),
major_dimension: "ROWS".to_string(),
values: vec![
row(&[
"Address",
"Image",
"Landlord",
"Bed",
"Bath",
"Occupied",
"Monthly Rent",
"Lease Expiration Date",
"Days Until Expiration",
]),
row(&[
"Bothell, WA",
"https://a.com/1.jpeg",
"Roger",
"3",
"2",
"FALSE",
"$2,000",
"10/23/2020",
"Expired",
]),
row(&[
"Mill Creek, WA",
"https://a.com/2.jpeg",
"Sam",
"3",
"3",
"TRUE",
"$3,500",
"8/4/2021",
"193",
]),
row(&[
"Fremont, WA",
"",
"Daniel",
"5",
"3",
"FALSE",
"$4,500",
"7/13/2019",
"Expired",
]),
row(&[
"Shoreline, WA",
"https://a.com/3.jpeg",
"Roger",
"1",
"1",
"TRUE",
"$1,200",
"12/9/2021",
"320",
]),
],
}
}
#[test]
fn schema_inference() {
let sheet = property_sheet();
let schema = infer_schema(&sheet.values);
assert_eq!(
schema,
Schema::new(vec![
Field::new("Address", DataType::Utf8, false),
Field::new("Image", DataType::Utf8, false),
Field::new("Landlord", DataType::Utf8, false),
Field::new("Bed", DataType::Int64, false),
Field::new("Bath", DataType::Int64, false),
Field::new("Occupied", DataType::Boolean, false),
Field::new("Monthly_Rent", DataType::Utf8, false),
Field::new("Lease_Expiration_Date", DataType::Utf8, false),
Field::new("Days_Until_Expiration", DataType::Utf8, false),
])
);
}
#[test]
fn sheetvalue_to_record_batch() -> anyhow::Result<()> {
let sheet = property_sheet();
let batch = sheet_values_to_record_batch(&sheet.values)?;
assert_eq!(batch.num_columns(), 9);
assert_eq!(
batch.column(3).as_ref(),
Arc::new(Int64Array::from(vec![3, 3, 5, 1])).as_ref(),
);
assert_eq!(
batch.column(5).as_ref(),
Arc::new(BooleanArray::from(vec![false, true, false, true])).as_ref(),
);
assert_eq!(
batch.column(2).as_ref(),
Arc::new(StringArray::from(vec!["Roger", "Sam", "Daniel", "Roger"])).as_ref(),
);
Ok(())
}
} | random_line_split |
|
google_spreadsheets.rs | use std::collections::{HashMap, HashSet};
use std::convert::TryFrom;
use std::sync::Arc;
use datafusion::arrow::array::{ArrayRef, BooleanArray, PrimitiveArray, StringArray};
use datafusion::arrow::datatypes::{DataType, Field, Schema};
use datafusion::arrow::datatypes::{Float64Type, Int64Type};
use datafusion::arrow::record_batch::RecordBatch;
use regex::Regex;
use reqwest::Client;
use serde_derive::Deserialize;
use uriparse::URIReference;
use crate::error::ColumnQError;
use crate::table::{TableOptionGoogleSpreasheet, TableSource};
// steps
// * Activate the Google Sheets API in the Google API Console.
//
// * Create service account: https://console.developers.google.com/apis/api/sheets.googleapis.com/credentials?project=roapi-302505
// * create key and save the json format somewhere safe
// * Share spreadsheet with service account
#[derive(Deserialize, Debug)]
struct SheetProperties {
#[serde(rename = "sheetId")]
sheet_id: usize,
title: String,
index: usize,
// other unused attributes:
//
// "sheetType": "GRID",
// "gridProperties": {
// "rowCount": 1000,
// "columnCount": 28
// }
//
// see: https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/sheets#SheetProperties
}
#[derive(Deserialize, Debug)]
struct Sheet {
properties: SheetProperties,
// for all available fields, see:
// https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets#Spreadsheet
}
#[derive(Deserialize, Debug)]
struct Spreadsheets {
sheets: Vec<Sheet>,
// other unused attributes:
// * spreadsheetId
// * properties
// * spreadsheetUrl
//
// see: https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets
}
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct SpreadsheetValues {
range: String,
#[serde(rename = "majorDimension")]
major_dimension: String,
values: Vec<Vec<String>>,
}
// TODO: should we support optional column?
fn infer_value_type(v: &str) -> DataType {
// match order matters
match v {
// TODO: support Date64 and Time64
_ if v.parse::<i64>().is_ok() => DataType::Int64,
_ if v.parse::<f64>().is_ok() => DataType::Float64,
_ => match v.to_lowercase().as_str() {
"false" | "true" => DataType::Boolean,
_ => DataType::Utf8,
},
}
}
// util wrapper for calling the Google Sheets API
async fn gs_api_get(token: &str, url: &str) -> Result<reqwest::Response, ColumnQError> |
fn coerce_type(l: DataType, r: DataType) -> DataType {
match (l, r) {
(DataType::Boolean, DataType::Boolean) => DataType::Boolean,
(DataType::Date32, DataType::Date32) => DataType::Date32,
(DataType::Date64, DataType::Date64)
| (DataType::Date64, DataType::Date32)
| (DataType::Date32, DataType::Date64) => DataType::Date64,
(DataType::Int64, DataType::Int64) => DataType::Int64,
(DataType::Float64, DataType::Float64)
| (DataType::Float64, DataType::Int64)
| (DataType::Int64, DataType::Float64) => DataType::Float64,
_ => DataType::Utf8,
}
}
fn infer_schema(rows: &[Vec<String>]) -> Schema {
let mut col_types: HashMap<&str, HashSet<DataType>> = HashMap::new();
let col_names = &rows[0];
rows.iter().skip(1).for_each(|row| {
row.iter().enumerate().for_each(|(i, col_val)| {
let col_name = &col_names[i];
let col_type = infer_value_type(col_val);
let entry = col_types.entry(col_name).or_insert_with(HashSet::new);
entry.insert(col_type);
});
});
let fields: Vec<Field> = col_names
.iter()
.map(|col_name| {
let set = col_types.entry(col_name).or_insert_with(|| {
// TODO: this should never happen, maybe we should use panic instead?
let mut set = HashSet::new();
set.insert(DataType::Utf8);
set
});
let mut dt_iter = set.iter().cloned();
let dt_init = dt_iter.next().unwrap_or(DataType::Utf8);
let dt = dt_iter.fold(dt_init, coerce_type);
// normalize the column name by replacing spaces with underscores
Field::new(&col_name.replace(" ", "_"), dt, false)
})
.collect();
Schema::new(fields)
}
fn parse_boolean(s: &str) -> bool {
s.eq_ignore_ascii_case("true")
}
fn sheet_values_to_record_batch(values: &[Vec<String>]) -> Result<RecordBatch, ColumnQError> {
let schema = infer_schema(values);
let arrays = schema
.fields()
.iter()
.enumerate()
.map(|(i, field)| {
// skip header row
let rows_iter = values.iter().skip(1);
Ok(match field.data_type() {
DataType::Boolean => Arc::new(
rows_iter
.map(|row| Some(parse_boolean(&row[i])))
.collect::<BooleanArray>(),
) as ArrayRef,
DataType::Int64 => Arc::new(
rows_iter
.map(|row| {
Ok(Some(row[i].parse::<i64>().map_err(|_| {
ColumnQError::GoogleSpreadsheets(format!(
"Expect int64 value, got: {}",
row[i]
))
})?))
})
.collect::<Result<PrimitiveArray<Int64Type>, ColumnQError>>()?,
) as ArrayRef,
DataType::Float64 => Arc::new(
rows_iter
.map(|row| {
Ok(Some(row[i].parse::<f64>().map_err(|_| {
ColumnQError::GoogleSpreadsheets(format!(
"Expect float64 value, got: {}",
row[i]
))
})?))
})
.collect::<Result<PrimitiveArray<Float64Type>, ColumnQError>>()?,
) as ArrayRef,
_ => Arc::new(rows_iter.map(|row| Some(&row[i])).collect::<StringArray>())
as ArrayRef,
})
})
.collect::<Result<Vec<ArrayRef>, ColumnQError>>()?;
Ok(RecordBatch::try_new(Arc::new(schema), arrays)?)
}
async fn fetch_auth_token(
opt: &TableOptionGoogleSpreasheet,
) -> Result<yup_oauth2::AccessToken, ColumnQError> {
// Read application creds from a file. The client secret file contains JSON like
// `{"installed":{"client_id":... }}`
let creds = yup_oauth2::read_service_account_key(&opt.application_secret_path)
.await
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Error reading application secret from disk: {}",
e.to_string()
))
})?;
let sa = yup_oauth2::ServiceAccountAuthenticator::builder(creds)
.build()
.await
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Error building service account authenticator: {}",
e.to_string()
))
})?;
let scopes = &["https://www.googleapis.com/auth/spreadsheets.readonly"];
sa.token(scopes).await.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!("Failed to obtain OAuth2 token: {}", e.to_string()))
})
}
async fn resolve_sheet_title<'a, 'b, 'c, 'd>(
token: &'a str,
spreadsheet_id: &'b str,
uri: &'c URIReference<'d>,
) -> Result<String, ColumnQError> {
// look up sheet title by sheet id through API
let resp = gs_api_get(
token,
&format!(
"https://sheets.googleapis.com/v4/spreadsheets/{}",
spreadsheet_id
),
)
.await?
.error_for_status()
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to resolve sheet title from API: {}",
e.to_string()
))
})?;
let spreadsheets = resp.json::<Spreadsheets>().await.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!("Failed to parse API response: {}", e.to_string()))
})?;
// when sheet id is not specified from config, try to parse it from URI
let sheet_id: Option<usize> = match uri.fragment() {
// if the sheet id is specified within the URI in the format of #gid=x
Some(fragment) => {
let s = fragment.as_str();
let parts: Vec<&str> = s.split('=').collect();
match parts.len() {
2 => match parts[0] {
"gid" => parts[1].parse().ok(),
_ => None,
},
_ => None,
}
}
None => None,
};
let sheet = match sheet_id {
Some(id) => spreadsheets
.sheets
.iter()
.find(|s| s.properties.sheet_id == id)
.ok_or_else(|| ColumnQError::GoogleSpreadsheets(format!("Invalid sheet id {}", id)))?,
// no sheet id specified, default to the first sheet
None => spreadsheets
.sheets
.iter()
.find(|s| s.properties.index == 0)
.ok_or_else(|| ColumnQError::GoogleSpreadsheets("spreadsheets is empty".to_string()))?,
};
Ok(sheet.properties.title.clone())
}
pub async fn to_mem_table(
t: &TableSource,
) -> Result<datafusion::datasource::MemTable, ColumnQError> {
lazy_static! {
static ref RE_GOOGLE_SHEET: Regex =
Regex::new(r"https://docs.google.com/spreadsheets/d/(.+)").unwrap();
}
let uri_str = t.get_uri_str();
if RE_GOOGLE_SHEET.captures(uri_str).is_none() {
return Err(ColumnQError::InvalidUri(uri_str.to_string()));
}
let uri = URIReference::try_from(uri_str)?;
let spreadsheet_id = uri.path().segments()[2].as_str();
let opt = t
.option
.as_ref()
.ok_or(ColumnQError::MissingOption)?
.as_google_spreadsheet()?;
let token = fetch_auth_token(opt).await?;
let token_str = token.as_str();
let sheet_title = match &opt.sheet_title {
Some(t) => t.clone(),
None => resolve_sheet_title(token_str, spreadsheet_id, &uri).await?,
};
let resp = gs_api_get(
token_str,
&format!(
"https://sheets.googleapis.com/v4/spreadsheets/{}/values/{}",
spreadsheet_id, sheet_title,
),
)
.await?
.error_for_status()
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to load sheet value from API: {}",
e.to_string()
))
})?;
let sheet = resp.json::<SpreadsheetValues>().await.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!("Failed to parse API response: {}", e.to_string()))
})?;
let batch = sheet_values_to_record_batch(&sheet.values)?;
let schema_ref = batch.schema();
let partitions = vec![vec![batch]];
Ok(datafusion::datasource::MemTable::try_new(
schema_ref, partitions,
)?)
}
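// A minimal usage sketch (assuming a `TableSource` built elsewhere, e.g. from
// a config file); the returned `MemTable` can then be registered with a
// DataFusion context to serve SQL queries:
//
// let mem_table = to_mem_table(&table_source).await?;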
#[cfg(test)]
mod tests {
use super::*;
use datafusion::arrow::array::{BooleanArray, Int64Array};
fn row(raw: &[&str]) -> Vec<String> {
raw.iter().map(|s| s.to_string()).collect()
}
fn property_sheet() -> SpreadsheetValues {
SpreadsheetValues {
range: "Properties!A1:AB1000".to_string(),
major_dimension: "ROWS".to_string(),
values: vec![
row(&[
"Address",
"Image",
"Landlord",
"Bed",
"Bath",
"Occupied",
"Monthly Rent",
"Lease Expiration Date",
"Days Until Expiration",
]),
row(&[
"Bothell, WA",
"https://a.com/1.jpeg",
"Roger",
"3",
"2",
"FALSE",
"$2,000",
"10/23/2020",
"Expired",
]),
row(&[
"Mill Creek, WA",
"https://a.com/2.jpeg",
"Sam",
"3",
"3",
"TRUE",
"$3,500",
"8/4/2021",
"193",
]),
row(&[
"Fremont, WA",
"",
"Daniel",
"5",
"3",
"FALSE",
"$4,500",
"7/13/2019",
"Expired",
]),
row(&[
"Shoreline, WA",
"https://a.com/3.jpeg",
"Roger",
"1",
"1",
"TRUE",
"$1,200",
"12/9/2021",
"320",
]),
],
}
}
#[test]
fn schema_inference() {
let sheet = property_sheet();
let schema = infer_schema(&sheet.values);
assert_eq!(
schema,
Schema::new(vec![
Field::new("Address", DataType::Utf8, false),
Field::new("Image", DataType::Utf8, false),
Field::new("Landlord", DataType::Utf8, false),
Field::new("Bed", DataType::Int64, false),
Field::new("Bath", DataType::Int64, false),
Field::new("Occupied", DataType::Boolean, false),
Field::new("Monthly_Rent", DataType::Utf8, false),
Field::new("Lease_Expiration_Date", DataType::Utf8, false),
Field::new("Days_Until_Expiration", DataType::Utf8, false),
])
);
}
#[test]
fn sheetvalue_to_record_batch() -> anyhow::Result<()> {
let sheet = property_sheet();
let batch = sheet_values_to_record_batch(&sheet.values)?;
assert_eq!(batch.num_columns(), 9);
assert_eq!(
batch.column(3).as_ref(),
Arc::new(Int64Array::from(vec![3, 3, 5, 1])).as_ref(),
);
assert_eq!(
batch.column(5).as_ref(),
Arc::new(BooleanArray::from(vec![false, true, false, true])).as_ref(),
);
assert_eq!(
batch.column(2).as_ref(),
Arc::new(StringArray::from(vec!["Roger", "Sam", "Daniel", "Roger"])).as_ref(),
);
Ok(())
}
}
| {
Client::builder()
.build()
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to initialize HTTP client: {}",
e.to_string()
))
})?
.get(url)
.bearer_auth(token)
.send()
.await
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to send API request: {}",
e.to_string()
))
})
} | identifier_body |
google_spreadsheets.rs | use std::collections::{HashMap, HashSet};
use std::convert::TryFrom;
use std::sync::Arc;
use datafusion::arrow::array::{ArrayRef, BooleanArray, PrimitiveArray, StringArray};
use datafusion::arrow::datatypes::{DataType, Field, Schema};
use datafusion::arrow::datatypes::{Float64Type, Int64Type};
use datafusion::arrow::record_batch::RecordBatch;
use regex::Regex;
use reqwest::Client;
use serde_derive::Deserialize;
use uriparse::URIReference;
use crate::error::ColumnQError;
use crate::table::{TableOptionGoogleSpreasheet, TableSource};
// steps
// * Activate the Google Sheets API in the Google API Console.
//
// * Create service account: https://console.developers.google.com/apis/api/sheets.googleapis.com/credentials?project=roapi-302505
// * create key and save the json format somewhere safe
// * Share spreadsheet with service account
#[derive(Deserialize, Debug)]
struct SheetProperties {
#[serde(rename = "sheetId")]
sheet_id: usize,
title: String,
index: usize,
// other unused attributes:
//
// "sheetType": "GRID",
// "gridProperties": {
// "rowCount": 1000,
// "columnCount": 28
// }
//
// see: https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/sheets#SheetProperties
}
#[derive(Deserialize, Debug)]
struct Sheet {
properties: SheetProperties,
// for all available fields, see:
// https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets#Spreadsheet
}
#[derive(Deserialize, Debug)]
struct Spreadsheets {
sheets: Vec<Sheet>,
// other unused attributes:
// * spreadsheetId
// * properties
// * spreadsheetUrl
//
// see: https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets
}
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct SpreadsheetValues {
range: String,
#[serde(rename = "majorDimension")]
major_dimension: String,
values: Vec<Vec<String>>,
}
// TODO: should we support optional column?
fn infer_value_type(v: &str) -> DataType {
// match order matters
match v {
// TODO: support Date64 and Time64
_ if v.parse::<i64>().is_ok() => DataType::Int64,
_ if v.parse::<f64>().is_ok() => DataType::Float64,
_ => match v.to_lowercase().as_str() {
"false" | "true" => DataType::Boolean,
_ => DataType::Utf8,
},
}
}
// util wrapper for calling the Google Sheets API
async fn gs_api_get(token: &str, url: &str) -> Result<reqwest::Response, ColumnQError> {
Client::builder()
.build()
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to initialize HTTP client: {}",
e.to_string()
))
})?
.get(url)
.bearer_auth(token)
.send()
.await
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to send API request: {}",
e.to_string()
))
})
}
fn coerce_type(l: DataType, r: DataType) -> DataType {
match (l, r) {
(DataType::Boolean, DataType::Boolean) => DataType::Boolean,
(DataType::Date32, DataType::Date32) => DataType::Date32,
(DataType::Date64, DataType::Date64)
| (DataType::Date64, DataType::Date32)
| (DataType::Date32, DataType::Date64) => DataType::Date64,
(DataType::Int64, DataType::Int64) => DataType::Int64,
(DataType::Float64, DataType::Float64)
| (DataType::Float64, DataType::Int64)
| (DataType::Int64, DataType::Float64) => DataType::Float64,
_ => DataType::Utf8,
}
}
fn infer_schema(rows: &[Vec<String>]) -> Schema {
let mut col_types: HashMap<&str, HashSet<DataType>> = HashMap::new();
let col_names = &rows[0];
rows.iter().skip(1).for_each(|row| {
row.iter().enumerate().for_each(|(i, col_val)| {
let col_name = &col_names[i];
let col_type = infer_value_type(col_val);
let entry = col_types.entry(col_name).or_insert_with(HashSet::new);
entry.insert(col_type);
});
});
let fields: Vec<Field> = col_names
.iter()
.map(|col_name| {
let set = col_types.entry(col_name).or_insert_with(|| {
// TODO: this should never happen, maybe we should use panic instead?
let mut set = HashSet::new();
set.insert(DataType::Utf8);
set
});
let mut dt_iter = set.iter().cloned();
let dt_init = dt_iter.next().unwrap_or(DataType::Utf8);
let dt = dt_iter.fold(dt_init, coerce_type);
// normalize the column name by replacing spaces with underscores
Field::new(&col_name.replace(" ", "_"), dt, false)
})
.collect();
Schema::new(fields)
}
fn | (s: &str) -> bool {
s.eq_ignore_ascii_case("true")
}
fn sheet_values_to_record_batch(values: &[Vec<String>]) -> Result<RecordBatch, ColumnQError> {
let schema = infer_schema(values);
let arrays = schema
.fields()
.iter()
.enumerate()
.map(|(i, field)| {
// skip header row
let rows_iter = values.iter().skip(1);
Ok(match field.data_type() {
DataType::Boolean => Arc::new(
rows_iter
.map(|row| Some(parse_boolean(&row[i])))
.collect::<BooleanArray>(),
) as ArrayRef,
DataType::Int64 => Arc::new(
rows_iter
.map(|row| {
Ok(Some(row[i].parse::<i64>().map_err(|_| {
ColumnQError::GoogleSpreadsheets(format!(
"Expect int64 value, got: {}",
row[i]
))
})?))
})
.collect::<Result<PrimitiveArray<Int64Type>, ColumnQError>>()?,
) as ArrayRef,
DataType::Float64 => Arc::new(
rows_iter
.map(|row| {
Ok(Some(row[i].parse::<f64>().map_err(|_| {
ColumnQError::GoogleSpreadsheets(format!(
"Expect float64 value, got: {}",
row[i]
))
})?))
})
.collect::<Result<PrimitiveArray<Float64Type>, ColumnQError>>()?,
) as ArrayRef,
_ => Arc::new(rows_iter.map(|row| Some(&row[i])).collect::<StringArray>())
as ArrayRef,
})
})
.collect::<Result<Vec<ArrayRef>, ColumnQError>>()?;
Ok(RecordBatch::try_new(Arc::new(schema), arrays)?)
}
async fn fetch_auth_token(
opt: &TableOptionGoogleSpreasheet,
) -> Result<yup_oauth2::AccessToken, ColumnQError> {
// Read application creds from a file. The client secret file contains JSON like
// `{"installed":{"client_id":... }}`
let creds = yup_oauth2::read_service_account_key(&opt.application_secret_path)
.await
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Error reading application secret from disk: {}",
e.to_string()
))
})?;
let sa = yup_oauth2::ServiceAccountAuthenticator::builder(creds)
.build()
.await
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Error building service account authenticator: {}",
e.to_string()
))
})?;
let scopes = &["https://www.googleapis.com/auth/spreadsheets.readonly"];
sa.token(scopes).await.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!("Failed to obtain OAuth2 token: {}", e.to_string()))
})
}
async fn resolve_sheet_title<'a, 'b, 'c, 'd>(
token: &'a str,
spreadsheet_id: &'b str,
uri: &'c URIReference<'d>,
) -> Result<String, ColumnQError> {
// look up sheet title by sheet id through API
let resp = gs_api_get(
token,
&format!(
"https://sheets.googleapis.com/v4/spreadsheets/{}",
spreadsheet_id
),
)
.await?
.error_for_status()
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to resolve sheet title from API: {}",
e.to_string()
))
})?;
let spreadsheets = resp.json::<Spreadsheets>().await.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!("Failed to parse API response: {}", e.to_string()))
})?;
// when sheet id is not specified from config, try to parse it from URI
let sheet_id: Option<usize> = match uri.fragment() {
// if the sheet id is specified within the URI in the format of #gid=x
Some(fragment) => {
let s = fragment.as_str();
let parts: Vec<&str> = s.split('=').collect();
match parts.len() {
2 => match parts[0] {
"gid" => parts[1].parse().ok(),
_ => None,
},
_ => None,
}
}
None => None,
};
let sheet = match sheet_id {
Some(id) => spreadsheets
.sheets
.iter()
.find(|s| s.properties.sheet_id == id)
.ok_or_else(|| ColumnQError::GoogleSpreadsheets(format!("Invalid sheet id {}", id)))?,
// no sheet id specified, default to the first sheet
None => spreadsheets
.sheets
.iter()
.find(|s| s.properties.index == 0)
.ok_or_else(|| ColumnQError::GoogleSpreadsheets("spreadsheets is empty".to_string()))?,
};
Ok(sheet.properties.title.clone())
}
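// Illustrative, self-contained sketch of the `#gid=<id>` fragment parsing
// performed above (added for clarity, not part of the upstream source):
#[cfg(test)]
mod gid_fragment_sketch {
    #[test]
    fn parses_gid_fragment() {
        let fragment = "gid=1234";
        let parts: Vec<&str> = fragment.split('=').collect();
        let sheet_id: Option<usize> = match parts.as_slice() {
            ["gid", id] => id.parse().ok(),
            _ => None,
        };
        assert_eq!(sheet_id, Some(1234));
    }
}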
pub async fn to_mem_table(
t: &TableSource,
) -> Result<datafusion::datasource::MemTable, ColumnQError> {
lazy_static! {
static ref RE_GOOGLE_SHEET: Regex =
Regex::new(r"https://docs.google.com/spreadsheets/d/(.+)").unwrap();
}
let uri_str = t.get_uri_str();
if RE_GOOGLE_SHEET.captures(uri_str).is_none() {
return Err(ColumnQError::InvalidUri(uri_str.to_string()));
}
let uri = URIReference::try_from(uri_str)?;
let spreadsheet_id = uri.path().segments()[2].as_str();
let opt = t
.option
.as_ref()
.ok_or(ColumnQError::MissingOption)?
.as_google_spreadsheet()?;
let token = fetch_auth_token(opt).await?;
let token_str = token.as_str();
let sheet_title = match &opt.sheet_title {
Some(t) => t.clone(),
None => resolve_sheet_title(token_str, spreadsheet_id, &uri).await?,
};
let resp = gs_api_get(
token_str,
&format!(
"https://sheets.googleapis.com/v4/spreadsheets/{}/values/{}",
spreadsheet_id, sheet_title,
),
)
.await?
.error_for_status()
.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!(
"Failed to load sheet value from API: {}",
e.to_string()
))
})?;
let sheet = resp.json::<SpreadsheetValues>().await.map_err(|e| {
ColumnQError::GoogleSpreadsheets(format!("Failed to parse API response: {}", e.to_string()))
})?;
let batch = sheet_values_to_record_batch(&sheet.values)?;
let schema_ref = batch.schema();
let partitions = vec![vec![batch]];
Ok(datafusion::datasource::MemTable::try_new(
schema_ref, partitions,
)?)
}
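// For reference, the spreadsheet id extracted above is the third path segment
// of a typical sheets URL (hypothetical id shown for illustration only):
//
//     https://docs.google.com/spreadsheets/d/1A2B3C4D/edit#gid=0
//     path segments: ["spreadsheets", "d", "1A2B3C4D", "edit"] -> segments()[2]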
#[cfg(test)]
mod tests {
use super::*;
use datafusion::arrow::array::{BooleanArray, Int64Array};
fn row(raw: &[&str]) -> Vec<String> {
raw.iter().map(|s| s.to_string()).collect()
}
fn property_sheet() -> SpreadsheetValues {
SpreadsheetValues {
range: "Properties!A1:AB1000".to_string(),
major_dimension: "ROWS".to_string(),
values: vec![
row(&[
"Address",
"Image",
"Landlord",
"Bed",
"Bath",
"Occupied",
"Monthly Rent",
"Lease Expiration Date",
"Days Until Expiration",
]),
row(&[
"Bothell, WA",
"https://a.com/1.jpeg",
"Roger",
"3",
"2",
"FALSE",
"$2,000",
"10/23/2020",
"Expired",
]),
row(&[
"Mill Creek, WA",
"https://a.com/2.jpeg",
"Sam",
"3",
"3",
"TRUE",
"$3,500",
"8/4/2021",
"193",
]),
row(&[
"Fremont, WA",
"",
"Daniel",
"5",
"3",
"FALSE",
"$4,500",
"7/13/2019",
"Expired",
]),
row(&[
"Shoreline, WA",
"https://a.com/3.jpeg",
"Roger",
"1",
"1",
"TRUE",
"$1,200",
"12/9/2021",
"320",
]),
],
}
}
#[test]
fn schema_inference() {
let sheet = property_sheet();
let schema = infer_schema(&sheet.values);
assert_eq!(
schema,
Schema::new(vec![
Field::new("Address", DataType::Utf8, false),
Field::new("Image", DataType::Utf8, false),
Field::new("Landlord", DataType::Utf8, false),
Field::new("Bed", DataType::Int64, false),
Field::new("Bath", DataType::Int64, false),
Field::new("Occupied", DataType::Boolean, false),
Field::new("Monthly_Rent", DataType::Utf8, false),
Field::new("Lease_Expiration_Date", DataType::Utf8, false),
Field::new("Days_Until_Expiration", DataType::Utf8, false),
])
);
}
#[test]
fn sheetvalue_to_record_batch() -> anyhow::Result<()> {
let sheet = property_sheet();
let batch = sheet_values_to_record_batch(&sheet.values)?;
assert_eq!(batch.num_columns(), 9);
assert_eq!(
batch.column(3).as_ref(),
Arc::new(Int64Array::from(vec![3, 3, 5, 1])).as_ref(),
);
assert_eq!(
batch.column(5).as_ref(),
Arc::new(BooleanArray::from(vec![false, true, false, true])).as_ref(),
);
assert_eq!(
batch.column(2).as_ref(),
Arc::new(StringArray::from(vec!["Roger", "Sam", "Daniel", "Roger"])).as_ref(),
);
Ok(())
}
}
| parse_boolean | identifier_name |
pm.rs | //! Implementation of the power manager (PM) peripheral.
use bpm;
use bscif;
use core::cell::Cell;
use core::sync::atomic::Ordering;
use flashcalw;
use gpio;
use kernel::common::VolatileCell;
use scif;
#[repr(C)]
struct PmRegisters { | pbbsel: VolatileCell<u32>,
pbcsel: VolatileCell<u32>,
pbdsel: VolatileCell<u32>,
_reserved2: VolatileCell<u32>,
cpumask: VolatileCell<u32>, // 0x020
hsbmask: VolatileCell<u32>,
pbamask: VolatileCell<u32>,
pbbmask: VolatileCell<u32>,
pbcmask: VolatileCell<u32>,
pbdmask: VolatileCell<u32>,
_reserved3: [VolatileCell<u32>; 2],
pbadivmask: VolatileCell<u32>, // 0x040
_reserved4: [VolatileCell<u32>; 4],
cfdctrl: VolatileCell<u32>,
unlock: VolatileCell<u32>,
_reserved5: [VolatileCell<u32>; 25], // 0x60
ier: VolatileCell<u32>, // 0xC0
idr: VolatileCell<u32>,
imr: VolatileCell<u32>,
isr: VolatileCell<u32>,
icr: VolatileCell<u32>,
sr: VolatileCell<u32>,
_reserved6: [VolatileCell<u32>; 34], // 0x100
ppcr: VolatileCell<u32>, // 0x160
_reserved7: [VolatileCell<u32>; 7],
rcause: VolatileCell<u32>, // 0x180
wcause: VolatileCell<u32>,
awen: VolatileCell<u32>,
protctrl: VolatileCell<u32>,
_reserved8: VolatileCell<u32>,
fastsleep: VolatileCell<u32>,
_reserved9: [VolatileCell<u32>; 152],
config: VolatileCell<u32>, // 0x200
version: VolatileCell<u32>,
}
pub enum MainClock {
RCSYS,
OSC0,
PLL,
DFLL,
RC80M,
RCFAST,
RC1M,
}
#[derive(Copy, Clone, Debug)]
pub enum Clock {
HSB(HSBClock),
PBA(PBAClock),
PBB(PBBClock),
PBC(PBCClock),
PBD(PBDClock),
}
#[derive(Copy, Clone, Debug)]
pub enum HSBClock {
PDCA,
FLASHCALW,
FLASHCALWP,
USBC,
CRCCU,
APBA,
APBB,
APBC,
APBD,
AESA,
}
#[derive(Copy, Clone, Debug)]
pub enum PBAClock {
IISC,
SPI,
TC0,
TC1,
TWIM0,
TWIS0,
TWIM1,
TWIS1,
USART0,
USART1,
USART2,
USART3,
ADCIFE,
DACC,
ACIFC,
GLOC,
ABSACB,
TRNG,
PARC,
CATB,
NULL,
TWIM2,
TWIM3,
LCDCA,
}
#[derive(Copy, Clone, Debug)]
pub enum PBBClock {
FLASHCALW,
HRAMC1,
HMATRIX,
PDCA,
CRCCU,
USBC,
PEVC,
}
#[derive(Copy, Clone, Debug)]
pub enum PBCClock {
PM,
CHIPID,
SCIF,
FREQM,
GPIO,
}
#[derive(Copy, Clone, Debug)]
pub enum PBDClock {
BPM,
BSCIF,
AST,
WDT,
EIC,
PICOUART,
}
/// Frequency of the external oscillator. For the SAM4L, different
/// configurations are needed for different ranges of oscillator frequency, so
/// based on the input frequency, various configurations may need to change.
/// When additional oscillator frequencies are needed, they should be added
/// here and the `setup_system_clock` function should be modified to support
/// it.
#[derive(Copy, Clone, Debug)]
pub enum OscillatorFrequency {
/// 16 MHz external oscillator
Frequency16MHz,
}
/// Configuration for the startup time of the external oscillator. In practice
/// we have found that some boards work with a short startup time, while others
/// need a slow start in order to properly wake from sleep. In general, we find
/// that systems that do not work at fast speed will hang or panic
/// after several entries into WAIT mode.
#[derive(Copy, Clone, Debug)]
pub enum OscillatorStartup {
/// Use a fast startup. ~0.5 ms in practice.
FastStart,
/// Use a slow startup. ~8.9 ms in practice.
SlowStart,
}
/// Which source the system clock should be generated from. These are specified
/// as system clock source appended with the clock that it is sourced from
/// appended with the final frequency of the system. So for example, one option
/// is to use the DFLL sourced from the RC32K with a final frequency of 48 MHz.
///
/// When new options (either sources or final frequencies) are needed, they
/// should be added to this list, and then the `setup_system_clock` function
/// can be modified to support it. This is necessary because configurations
/// must be changed not just with the input source but also based on the
/// desired final frequency.
///
/// For options utilizing an external oscillator, the configurations for that
/// oscillator must also be provided.
#[derive(Copy, Clone, Debug)]
pub enum SystemClockSource {
/// Use the RCSYS clock (which the system starts up on anyways). Final
/// system frequency will be 115 kHz. Note that while this is the default,
/// Tock is NOT guaranteed to work on this setting and will likely fail.
RcsysAt115kHz,
/// Use the internal digital frequency locked loop (DFLL) sourced from
/// the internal RC32K clock. Note this typically requires calibration
/// of the RC32K to have a consistent clock. Final frequency of 48 MHz.
DfllRc32kAt48MHz,
/// Use an external crystal oscillator as the direct source for the
/// system clock. The final system frequency will match the frequency of
/// the external oscillator.
ExternalOscillator {
frequency: OscillatorFrequency,
startup_mode: OscillatorStartup,
},
/// Use an external crystal oscillator as the input to the internal phase
/// locked loop (PLL) for the system clock. This results in a final
/// frequency of 48 MHz.
PllExternalOscillatorAt48MHz {
frequency: OscillatorFrequency,
startup_mode: OscillatorStartup,
},
}
const PM_BASE: usize = 0x400E0000;
const HSB_MASK_OFFSET: u32 = 0x24;
const PBA_MASK_OFFSET: u32 = 0x28;
const PBB_MASK_OFFSET: u32 = 0x2C;
const PBC_MASK_OFFSET: u32 = 0x30;
const PBD_MASK_OFFSET: u32 = 0x34;
static mut PM_REGS: *mut PmRegisters = PM_BASE as *mut PmRegisters;
/// Contains state for the power management peripheral. This includes the
/// configurations for various system clocks and the final frequency that the
/// system is running at.
pub struct PowerManager {
/// Frequency at which the system clock is running.
system_frequency: Cell<u32>,
/// Clock source configuration
system_clock_source: Cell<SystemClockSource>,
}
pub static mut PM: PowerManager = PowerManager {
/// Set to the RCSYS frequency by default (115 kHz).
system_frequency: Cell::new(115000),
/// Set to the RCSYS by default.
system_clock_source: Cell::new(SystemClockSource::RcsysAt115kHz),
};
impl PowerManager {
/// Sets up the system clock. This should be called as one of the first
/// lines in the `reset_handler` within the platform's `main.rs`.
pub unsafe fn setup_system_clock(&self, clock_source: SystemClockSource) {
// save configuration
self.system_clock_source.set(clock_source);
// For now, always go to PS2 as it enables all core speeds
bpm::set_power_scaling(bpm::PowerScaling::PS2);
match clock_source {
SystemClockSource::RcsysAt115kHz => {
// no configurations necessary, already running off the RCSYS
self.system_frequency.set(115000);
}
SystemClockSource::DfllRc32kAt48MHz => {
configure_48mhz_dfll();
self.system_frequency.set(48000000);
}
SystemClockSource::ExternalOscillator {
frequency,
startup_mode,
} => {
configure_external_oscillator(frequency, startup_mode);
match frequency {
OscillatorFrequency::Frequency16MHz => self.system_frequency.set(16000000),
};
}
SystemClockSource::PllExternalOscillatorAt48MHz {
frequency,
startup_mode,
} => {
configure_external_oscillator_pll(frequency, startup_mode);
self.system_frequency.set(48000000);
}
}
}
}
unsafe fn unlock(register_offset: u32) {
(*PM_REGS).unlock.set(0xAA000000 | register_offset);
}
unsafe fn select_main_clock(clock: MainClock) {
unlock(0);
(*PM_REGS).mcctrl.set(clock as u32);
}
/// Configure the system clock to use the DFLL with the RC32K as the source.
/// Run at 48 MHz.
unsafe fn configure_48mhz_dfll() {
// Enable HCACHE
flashcalw::FLASH_CONTROLLER.enable_cache();
// start the dfll
scif::setup_dfll_rc32k_48mhz();
// Since we are running at a fast speed we have to set a clock delay
// for flash, as well as enable fast flash mode.
flashcalw::FLASH_CONTROLLER.enable_high_speed_flash();
// Choose the main clock
select_main_clock(MainClock::DFLL);
}
/// Configure the system clock to use the 16 MHz external crystal directly
unsafe fn configure_external_oscillator(
frequency: OscillatorFrequency,
startup_mode: OscillatorStartup,
) {
// Use the cache
flashcalw::FLASH_CONTROLLER.enable_cache();
// Need the 32k RC oscillator for things like BPM module and AST.
bscif::enable_rc32k();
// start the external oscillator
match frequency {
OscillatorFrequency::Frequency16MHz => {
match startup_mode {
OscillatorStartup::FastStart => scif::setup_osc_16mhz_fast_startup(),
OscillatorStartup::SlowStart => scif::setup_osc_16mhz_slow_startup(),
};
}
}
// Go to high speed flash mode
flashcalw::FLASH_CONTROLLER.enable_high_speed_flash();
// Set the main clock to be the external oscillator
select_main_clock(MainClock::OSC0);
}
/// Configure the system clock to use the PLL with the 16 MHz external crystal
unsafe fn configure_external_oscillator_pll(
frequency: OscillatorFrequency,
startup_mode: OscillatorStartup,
) {
// Use the cache
flashcalw::FLASH_CONTROLLER.enable_cache();
// Need the 32k RC oscillator for things like BPM module and AST.
bscif::enable_rc32k();
// start the external oscillator
match frequency {
OscillatorFrequency::Frequency16MHz => {
match startup_mode {
OscillatorStartup::FastStart => scif::setup_osc_16mhz_fast_startup(),
OscillatorStartup::SlowStart => scif::setup_osc_16mhz_slow_startup(),
};
}
}
// Setup the PLL
scif::setup_pll_osc_48mhz();
// Go to high speed flash mode
flashcalw::FLASH_CONTROLLER.enable_high_speed_flash();
// Set the main clock to be the PLL
select_main_clock(MainClock::PLL);
}
pub fn get_system_frequency() -> u32 {
unsafe { PM.system_frequency.get() }
}
/// Utility macro to modify clock mask registers
///
/// It takes one of two forms:
///
/// mask_clock!(CLOCK: pm_register | value)
///
/// which performs a logical-or on the existing register value, or
///
/// mask_clock!(CLOCK: pm_register & value)
///
/// which performs a logical-and.
///
/// CLOCK is one of HSB, PBA, PBB, PBC or PBD
///
/// pm_register is one of hsbmask, pbamask, pbbmask, pbcmask or pbdmask.
///
macro_rules! mask_clock {
($module:ident: $field:ident | $mask:expr) => ({
unlock(concat_idents!($module, _MASK_OFFSET));
let val = (*PM_REGS).$field.get() | ($mask);
(*PM_REGS).$field.set(val);
});
($module:ident: $field:ident & $mask:expr) => ({
unlock(concat_idents!($module, _MASK_OFFSET));
let val = (*PM_REGS).$field.get() & ($mask);
(*PM_REGS).$field.set(val);
});
}
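// For example (illustrative expansion, not part of the original source),
// `mask_clock!(PBA: pbamask | 0x2)` unlocks the PBA mask register and then
// ORs the bit into `pbamask`:
//
//     unlock(PBA_MASK_OFFSET);
//     let val = (*PM_REGS).pbamask.get() | 0x2;
//     (*PM_REGS).pbamask.set(val);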
/// Utility macro to get value of clock register. Used to check if a specific
/// clock is enabled or not. See above description of `mask_clock!`.
macro_rules! get_clock {
($module:ident: $field:ident & $mask:expr) => ({
unlock(concat_idents!($module, _MASK_OFFSET));
((*PM_REGS).$field.get() & ($mask)) != 0
});
}
// Clock masks that allow us to go into deep sleep without disabling any active
// peripherals.
// FLASHCALW clocks and APBx clocks are allowed
//
// This is identical to the reset value of the HSBMASK except it allows the
// PicoCache RAM clock to be on as well.
const DEEP_SLEEP_HSBMASK: u32 = 0x1e6;
// No clocks allowed on PBA
const DEEP_SLEEP_PBAMASK: u32 = 0x0;
// FLASHCALW and HRAMC1 clocks allowed
//
// This is identical to the reset value of the PBBMASK except it allows the
// flash's HRAMC1 clock as well.
const DEEP_SLEEP_PBBMASK: u32 = 0x3;
/// Determines if the chip can safely go into deep sleep without preventing
/// currently active peripherals from operating.
///
/// We look at the PM's clock mask registers and compare them against a set of
/// known masks that include no peripherals that can't operate in deep
/// sleep (or that have no function during sleep). Specifically:
///
/// * HSB may only have clocks for the flash (and PicoCache) and APBx bridges on.
///
/// * PBA may not have _any_ clocks on.
///
/// * PBB may only have clocks for the flash and HRAMC1 (also flash related) on.
///
/// * PBC and PBD may have any clocks on.
///
/// This means it is the responsibility of each peripheral to disable its clock
/// mask whenever it is idle.
///
/// We also special case GPIO (which is in PBCMASK), and just see if any interrupts are pending
/// through the INTERRUPT_COUNT variable.
pub fn deep_sleep_ready() -> bool {
unsafe {
(*PM_REGS).hsbmask.get() & !(DEEP_SLEEP_HSBMASK) == 0
&& (*PM_REGS).pbamask.get() & !(DEEP_SLEEP_PBAMASK) == 0
&& (*PM_REGS).pbbmask.get() & !(DEEP_SLEEP_PBBMASK) == 0
&& gpio::INTERRUPT_COUNT.load(Ordering::Relaxed) == 0
}
}
pub unsafe fn enable_clock(clock: Clock) {
match clock {
Clock::HSB(v) => mask_clock!(HSB: hsbmask | 1 << (v as u32)),
Clock::PBA(v) => mask_clock!(PBA: pbamask | 1 << (v as u32)),
Clock::PBB(v) => mask_clock!(PBB: pbbmask | 1 << (v as u32)),
Clock::PBC(v) => mask_clock!(PBC: pbcmask | 1 << (v as u32)),
Clock::PBD(v) => mask_clock!(PBD: pbdmask | 1 << (v as u32)),
}
}
pub unsafe fn disable_clock(clock: Clock) {
match clock {
Clock::HSB(v) => mask_clock!(HSB: hsbmask & !(1 << (v as u32))),
Clock::PBA(v) => mask_clock!(PBA: pbamask & !(1 << (v as u32))),
Clock::PBB(v) => mask_clock!(PBB: pbbmask & !(1 << (v as u32))),
Clock::PBC(v) => mask_clock!(PBC: pbcmask & !(1 << (v as u32))),
Clock::PBD(v) => mask_clock!(PBD: pbdmask & !(1 << (v as u32))),
}
}
pub unsafe fn is_clock_enabled(clock: Clock) -> bool {
match clock {
Clock::HSB(v) => get_clock!(HSB: hsbmask & (1 << (v as u32))),
Clock::PBA(v) => get_clock!(PBA: pbamask & (1 << (v as u32))),
Clock::PBB(v) => get_clock!(PBB: pbbmask & (1 << (v as u32))),
Clock::PBC(v) => get_clock!(PBC: pbcmask & (1 << (v as u32))),
Clock::PBD(v) => get_clock!(PBD: pbdmask & (1 << (v as u32))),
}
} | mcctrl: VolatileCell<u32>,
cpusel: VolatileCell<u32>,
_reserved1: VolatileCell<u32>,
pbasel: VolatileCell<u32>, | random_line_split |
pm.rs | //! Implementation of the power manager (PM) peripheral.
use bpm;
use bscif;
use core::cell::Cell;
use core::sync::atomic::Ordering;
use flashcalw;
use gpio;
use kernel::common::VolatileCell;
use scif;
#[repr(C)]
struct PmRegisters {
mcctrl: VolatileCell<u32>,
cpusel: VolatileCell<u32>,
_reserved1: VolatileCell<u32>,
pbasel: VolatileCell<u32>,
pbbsel: VolatileCell<u32>,
pbcsel: VolatileCell<u32>,
pbdsel: VolatileCell<u32>,
_reserved2: VolatileCell<u32>,
cpumask: VolatileCell<u32>, // 0x020
hsbmask: VolatileCell<u32>,
pbamask: VolatileCell<u32>,
pbbmask: VolatileCell<u32>,
pbcmask: VolatileCell<u32>,
pbdmask: VolatileCell<u32>,
_reserved3: [VolatileCell<u32>; 2],
pbadivmask: VolatileCell<u32>, // 0x040
_reserved4: [VolatileCell<u32>; 4],
cfdctrl: VolatileCell<u32>,
unlock: VolatileCell<u32>,
_reserved5: [VolatileCell<u32>; 25], // 0x60
ier: VolatileCell<u32>, // 0xC0
idr: VolatileCell<u32>,
imr: VolatileCell<u32>,
isr: VolatileCell<u32>,
icr: VolatileCell<u32>,
sr: VolatileCell<u32>,
_reserved6: [VolatileCell<u32>; 34], // 0x100
ppcr: VolatileCell<u32>, // 0x160
_reserved7: [VolatileCell<u32>; 7],
rcause: VolatileCell<u32>, // 0x180
wcause: VolatileCell<u32>,
awen: VolatileCell<u32>,
protctrl: VolatileCell<u32>,
_reserved8: VolatileCell<u32>,
fastsleep: VolatileCell<u32>,
_reserved9: [VolatileCell<u32>; 152],
config: VolatileCell<u32>, // 0x200
version: VolatileCell<u32>,
}
pub enum MainClock {
RCSYS,
OSC0,
PLL,
DFLL,
RC80M,
RCFAST,
RC1M,
}
#[derive(Copy, Clone, Debug)]
pub enum Clock {
HSB(HSBClock),
PBA(PBAClock),
PBB(PBBClock),
PBC(PBCClock),
PBD(PBDClock),
}
#[derive(Copy, Clone, Debug)]
pub enum HSBClock {
PDCA,
FLASHCALW,
FLASHCALWP,
USBC,
CRCCU,
APBA,
APBB,
APBC,
APBD,
AESA,
}
#[derive(Copy, Clone, Debug)]
pub enum PBAClock {
IISC,
SPI,
TC0,
TC1,
TWIM0,
TWIS0,
TWIM1,
TWIS1,
USART0,
USART1,
USART2,
USART3,
ADCIFE,
DACC,
ACIFC,
GLOC,
ABSACB,
TRNG,
PARC,
CATB,
NULL,
TWIM2,
TWIM3,
LCDCA,
}
#[derive(Copy, Clone, Debug)]
pub enum PBBClock {
FLASHCALW,
HRAMC1,
HMATRIX,
PDCA,
CRCCU,
USBC,
PEVC,
}
#[derive(Copy, Clone, Debug)]
pub enum PBCClock {
PM,
CHIPID,
SCIF,
FREQM,
GPIO,
}
#[derive(Copy, Clone, Debug)]
pub enum PBDClock {
BPM,
BSCIF,
AST,
WDT,
EIC,
PICOUART,
}
/// Frequency of the external oscillator. For the SAM4L, different
/// configurations are needed for different ranges of oscillator frequency, so
/// based on the input frequency, various configurations may need to change.
/// When additional oscillator frequencies are needed, they should be added
/// here and the `setup_system_clock` function should be modified to support
/// it.
#[derive(Copy, Clone, Debug)]
pub enum | {
/// 16 MHz external oscillator
Frequency16MHz,
}
/// Configuration for the startup time of the external oscillator. In practice
/// we have found that some boards work with a short startup time, while others
/// need a slow start in order to properly wake from sleep. In general, we find
/// that systems that do not work at fast speed will hang or panic
/// after several entries into WAIT mode.
#[derive(Copy, Clone, Debug)]
pub enum OscillatorStartup {
/// Use a fast startup. ~0.5 ms in practice.
FastStart,
/// Use a slow startup. ~8.9 ms in practice.
SlowStart,
}
/// Which source the system clock should be generated from. These are specified
/// as system clock source appended with the clock that it is sourced from
/// appended with the final frequency of the system. So for example, one option
/// is to use the DFLL sourced from the RC32K with a final frequency of 48 MHz.
///
/// When new options (either sources or final frequencies) are needed, they
/// should be added to this list, and then the `setup_system_clock` function
/// can be modified to support it. This is necessary because configurations
/// must be changed not just with the input source but also based on the
/// desired final frequency.
///
/// For options utilizing an external oscillator, the configurations for that
/// oscillator must also be provided.
#[derive(Copy, Clone, Debug)]
pub enum SystemClockSource {
/// Use the RCSYS clock (which the system starts up on anyways). Final
/// system frequency will be 115 kHz. Note that while this is the default,
/// Tock is NOT guaranteed to work on this setting and will likely fail.
RcsysAt115kHz,
/// Use the internal digital frequency locked loop (DFLL) sourced from
/// the internal RC32K clock. Note this typically requires calibration
/// of the RC32K to have a consistent clock. Final frequency of 48 MHz.
DfllRc32kAt48MHz,
/// Use an external crystal oscillator as the direct source for the
/// system clock. The final system frequency will match the frequency of
/// the external oscillator.
ExternalOscillator {
frequency: OscillatorFrequency,
startup_mode: OscillatorStartup,
},
/// Use an external crystal oscillator as the input to the internal phase
/// locked loop (PLL) for the system clock. This results in a final
/// frequency of 48 MHz.
PllExternalOscillatorAt48MHz {
frequency: OscillatorFrequency,
startup_mode: OscillatorStartup,
},
}
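// For example (illustrative only), a board with a 16 MHz crystal that needs
// the slow oscillator startup would request the PLL configuration as:
//
//     let source = SystemClockSource::PllExternalOscillatorAt48MHz {
//         frequency: OscillatorFrequency::Frequency16MHz,
//         startup_mode: OscillatorStartup::SlowStart,
//     };
//     unsafe { PM.setup_system_clock(source) };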
const PM_BASE: usize = 0x400E0000;
const HSB_MASK_OFFSET: u32 = 0x24;
const PBA_MASK_OFFSET: u32 = 0x28;
const PBB_MASK_OFFSET: u32 = 0x2C;
const PBC_MASK_OFFSET: u32 = 0x30;
const PBD_MASK_OFFSET: u32 = 0x34;
static mut PM_REGS: *mut PmRegisters = PM_BASE as *mut PmRegisters;
/// Contains state for the power management peripheral. This includes the
/// configurations for various system clocks and the final frequency that the
/// system is running at.
pub struct PowerManager {
/// Frequency at which the system clock is running.
system_frequency: Cell<u32>,
/// Clock source configuration
system_clock_source: Cell<SystemClockSource>,
}
pub static mut PM: PowerManager = PowerManager {
/// Set to the RCSYS frequency by default (115 kHz).
system_frequency: Cell::new(115000),
/// Set to the RCSYS by default.
system_clock_source: Cell::new(SystemClockSource::RcsysAt115kHz),
};
impl PowerManager {
/// Sets up the system clock. This should be called as one of the first
/// lines in the `reset_handler` within the platform's `main.rs`.
pub unsafe fn setup_system_clock(&self, clock_source: SystemClockSource) {
// save configuration
self.system_clock_source.set(clock_source);
// For now, always go to PS2 as it enables all core speeds
bpm::set_power_scaling(bpm::PowerScaling::PS2);
match clock_source {
SystemClockSource::RcsysAt115kHz => {
// no configurations necessary, already running off the RCSYS
self.system_frequency.set(115000);
}
SystemClockSource::DfllRc32kAt48MHz => {
configure_48mhz_dfll();
self.system_frequency.set(48000000);
}
SystemClockSource::ExternalOscillator {
frequency,
startup_mode,
} => {
configure_external_oscillator(frequency, startup_mode);
match frequency {
OscillatorFrequency::Frequency16MHz => self.system_frequency.set(16000000),
};
}
SystemClockSource::PllExternalOscillatorAt48MHz {
frequency,
startup_mode,
} => {
configure_external_oscillator_pll(frequency, startup_mode);
self.system_frequency.set(48000000);
}
}
}
}
unsafe fn unlock(register_offset: u32) {
(*PM_REGS).unlock.set(0xAA000000 | register_offset);
}
unsafe fn select_main_clock(clock: MainClock) {
unlock(0);
(*PM_REGS).mcctrl.set(clock as u32);
}
/// Configure the system clock to use the DFLL with the RC32K as the source.
/// Run at 48 MHz.
unsafe fn configure_48mhz_dfll() {
// Enable HCACHE
flashcalw::FLASH_CONTROLLER.enable_cache();
// start the dfll
scif::setup_dfll_rc32k_48mhz();
// Since we are running at a fast speed we have to set a clock delay
// for flash, as well as enable fast flash mode.
flashcalw::FLASH_CONTROLLER.enable_high_speed_flash();
// Choose the main clock
select_main_clock(MainClock::DFLL);
}
/// Configure the system clock to use the 16 MHz external crystal directly
unsafe fn configure_external_oscillator(
frequency: OscillatorFrequency,
startup_mode: OscillatorStartup,
) {
// Use the cache
flashcalw::FLASH_CONTROLLER.enable_cache();
// Need the 32k RC oscillator for things like BPM module and AST.
bscif::enable_rc32k();
// start the external oscillator
match frequency {
OscillatorFrequency::Frequency16MHz => {
match startup_mode {
OscillatorStartup::FastStart => scif::setup_osc_16mhz_fast_startup(),
OscillatorStartup::SlowStart => scif::setup_osc_16mhz_slow_startup(),
};
}
}
// Go to high speed flash mode
flashcalw::FLASH_CONTROLLER.enable_high_speed_flash();
// Set the main clock to be the external oscillator
select_main_clock(MainClock::OSC0);
}
/// Configure the system clock to use the PLL with the 16 MHz external crystal
unsafe fn configure_external_oscillator_pll(
frequency: OscillatorFrequency,
startup_mode: OscillatorStartup,
) {
// Use the cache
flashcalw::FLASH_CONTROLLER.enable_cache();
// Need the 32k RC oscillator for things like BPM module and AST.
bscif::enable_rc32k();
// start the external oscillator
match frequency {
OscillatorFrequency::Frequency16MHz => {
match startup_mode {
OscillatorStartup::FastStart => scif::setup_osc_16mhz_fast_startup(),
OscillatorStartup::SlowStart => scif::setup_osc_16mhz_slow_startup(),
};
}
}
// Setup the PLL
scif::setup_pll_osc_48mhz();
// Go to high speed flash mode
flashcalw::FLASH_CONTROLLER.enable_high_speed_flash();
// Set the main clock to be the PLL
select_main_clock(MainClock::PLL);
}
pub fn get_system_frequency() -> u32 {
unsafe { PM.system_frequency.get() }
}
/// Utility macro to modify clock mask registers
///
/// It takes one of two forms:
///
/// mask_clock!(CLOCK: pm_register | value)
///
/// which performs a logical-or on the existing register value, or
///
/// mask_clock!(CLOCK: pm_register & value)
///
/// which performs a logical-and.
///
/// CLOCK is one of HSB, PBA, PBB, PBC or PBD
///
/// pm_register is one of hsbmask, pbamask, pbbmask, pbcmask or pbdmask.
///
macro_rules! mask_clock {
($module:ident: $field:ident | $mask:expr) => ({
unlock(concat_idents!($module, _MASK_OFFSET));
let val = (*PM_REGS).$field.get() | ($mask);
(*PM_REGS).$field.set(val);
});
($module:ident: $field:ident & $mask:expr) => ({
unlock(concat_idents!($module, _MASK_OFFSET));
let val = (*PM_REGS).$field.get() & ($mask);
(*PM_REGS).$field.set(val);
});
}
/// Utility macro to get value of clock register. Used to check if a specific
/// clock is enabled or not. See above description of `mask_clock!`.
macro_rules! get_clock {
($module:ident: $field:ident & $mask:expr) => ({
unlock(concat_idents!($module, _MASK_OFFSET));
((*PM_REGS).$field.get() & ($mask)) != 0
});
}
// Clock masks that allow us to go into deep sleep without disabling any active
// peripherals.
// FLASHCALW clocks and APBx clocks are allowed
//
// This is identical to the reset value of the HSBMASK except it allows the
// PicoCache RAM clock to be on as well.
const DEEP_SLEEP_HSBMASK: u32 = 0x1e6;
// No clocks allowed on PBA
const DEEP_SLEEP_PBAMASK: u32 = 0x0;
// FLASHCALW and HRAMC1 clocks allowed
//
// This is identical to the reset value of the PBBMASK except it allows the
// flash's HRAMC1 clock as well.
const DEEP_SLEEP_PBBMASK: u32 = 0x3;
/// Determines if the chip can safely go into deep sleep without preventing
/// currently active peripherals from operating.
///
/// We look at the PM's clock mask registers and compare them against a set of
/// known masks that include no peripherals that can't operate in deep
/// sleep (or that have no function during sleep). Specifically:
///
/// * HSB may only have clocks for the flash (and PicoCache) and APBx bridges on.
///
/// * PBA may not have _any_ clocks on.
///
/// * PBB may only have clocks for the flash and HRAMC1 (also flash related) on.
///
/// * PBC and PBD may have any clocks on.
///
/// This means it is the responsibility of each peripheral to disable its clock
/// mask whenever it is idle.
///
/// We also special case GPIO (which is in PBCMASK), and just see if any interrupts are pending
/// through the INTERRUPT_COUNT variable.
pub fn deep_sleep_ready() -> bool {
unsafe {
(*PM_REGS).hsbmask.get() & !(DEEP_SLEEP_HSBMASK) == 0
&& (*PM_REGS).pbamask.get() & !(DEEP_SLEEP_PBAMASK) == 0
&& (*PM_REGS).pbbmask.get() & !(DEEP_SLEEP_PBBMASK) == 0
&& gpio::INTERRUPT_COUNT.load(Ordering::Relaxed) == 0
}
}
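// Typical use (illustrative): a sleep routine would consult this predicate
// before choosing the sleep depth, e.g.
//
//     if deep_sleep_ready() {
//         // safe to enter deep sleep (WAIT mode)
//     } else {
//         // fall back to a lighter sleep state
//     }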
pub unsafe fn enable_clock(clock: Clock) {
match clock {
Clock::HSB(v) => mask_clock!(HSB: hsbmask | 1 << (v as u32)),
Clock::PBA(v) => mask_clock!(PBA: pbamask | 1 << (v as u32)),
Clock::PBB(v) => mask_clock!(PBB: pbbmask | 1 << (v as u32)),
Clock::PBC(v) => mask_clock!(PBC: pbcmask | 1 << (v as u32)),
Clock::PBD(v) => mask_clock!(PBD: pbdmask | 1 << (v as u32)),
}
}
pub unsafe fn disable_clock(clock: Clock) {
match clock {
Clock::HSB(v) => mask_clock!(HSB: hsbmask & !(1 << (v as u32))),
Clock::PBA(v) => mask_clock!(PBA: pbamask & !(1 << (v as u32))),
Clock::PBB(v) => mask_clock!(PBB: pbbmask & !(1 << (v as u32))),
Clock::PBC(v) => mask_clock!(PBC: pbcmask & !(1 << (v as u32))),
Clock::PBD(v) => mask_clock!(PBD: pbdmask & !(1 << (v as u32))),
}
}
pub unsafe fn is_clock_enabled(clock: Clock) -> bool {
match clock {
Clock::HSB(v) => get_clock!(HSB: hsbmask & (1 << (v as u32))),
Clock::PBA(v) => get_clock!(PBA: pbamask & (1 << (v as u32))),
Clock::PBB(v) => get_clock!(PBB: pbbmask & (1 << (v as u32))),
Clock::PBC(v) => get_clock!(PBC: pbcmask & (1 << (v as u32))),
Clock::PBD(v) => get_clock!(PBD: pbdmask & (1 << (v as u32))),
}
}
| OscillatorFrequency | identifier_name |
main.rs | exe_name = &format!("local_quadratic_regression{}", ext);
let sep: String = path::MAIN_SEPARATOR.to_string();
let s = r#"
local_quadratic_regression Help
This tool is an implementation of the constrained quadratic regression algorithm
using a flexible window size, as described in Wood (1996).
The following commands are recognized:
help Prints help information.
run Runs the tool.
version Prints the tool version information.
The following flags can be used with the 'run' command:
-d, --dem Name of the input DEM raster file.
-o, --output Name of the output raster file.
--filter Edge length of the filter kernel.
Input/output file names can be fully qualified, or can rely on the working directory contained in
the WhiteboxTools settings.json file.
Example Usage:
>>.*EXE_NAME run --dem=DEM.tif --output=out_ras.tif --filter=15
"#
.replace("*", &sep)
.replace("EXE_NAME", exe_name);
println!("{}", s);
}
fn version() {
const VERSION: Option<&'static str> = option_env!("CARGO_PKG_VERSION");
println!(
"local_quadratic_regression v{} by Dr. John B. Lindsay (c) 2021.",
VERSION.unwrap_or("Unknown version")
);
}
fn get_tool_name() -> String {
String::from("LocalQuadraticRegression") // This should be camel case and is a reference to the tool name.
}
fn run(args: &Vec<String>) -> Result<(), std::io::Error> {
let tool_name = get_tool_name();
let sep: String = path::MAIN_SEPARATOR.to_string();
// Read in the environment variables and get the necessary values
let configurations = whitebox_common::configs::get_configs()?;
let mut working_directory = configurations.working_directory.clone();
if !working_directory.is_empty() && !working_directory.ends_with(&sep) {
working_directory += &sep;
}
let mut input_file = String::new();
let mut output_file = String::new();
let mut filter_size = 3usize;
if args.len() == 0 {
return Err(Error::new(
ErrorKind::InvalidInput,
"Tool run with no parameters.",
));
}
for i in 0..args.len() {
let mut arg = args[i].replace("\"", "");
arg = arg.replace("\'", "");
let cmd = arg.split("="); // in case an equals sign was used
let vec = cmd.collect::<Vec<&str>>();
let mut keyval = false;
if vec.len() > 1 {
keyval = true;
}
let flag_val = vec[0].to_lowercase().replace("--", "-");
if flag_val == "-d" || flag_val == "-dem" {
if keyval {
input_file = vec[1].to_string();
} else {
input_file = args[i + 1].to_string();
}
} else if flag_val == "-o" || flag_val == "-output" {
if keyval {
output_file = vec[1].to_string();
} else {
output_file = args[i + 1].to_string();
}
} else if flag_val == "-filter" {
if keyval {
filter_size = vec[1]
.to_string()
.parse::<f32>()
.expect(&format!("Error parsing {}", flag_val))
as usize;
} else {
filter_size = args[i + 1]
.to_string()
.parse::<f32>()
.expect(&format!("Error parsing {}", flag_val))
as usize;
}
}
}
if filter_size < 3 { filter_size = 3; }
// The filter dimensions must be odd numbers such that there is a middle pixel
if (filter_size as f64 / 2f64).floor() == (filter_size as f64 / 2f64) {
filter_size += 1;
}
if configurations.verbose_mode {
let welcome_len = format!("* Welcome to {} *", tool_name).len().max(28);
// 28 = length of the 'Powered by' statement.
println!("{}", "*".repeat(welcome_len));
println!("* Welcome to {} {}*", tool_name, " ".repeat(welcome_len - 15 - tool_name.len()));
println!("* Powered by WhiteboxTools {}*", " ".repeat(welcome_len - 28));
println!("* www.whiteboxgeo.com {}*", " ".repeat(welcome_len - 23));
println!("{}", "*".repeat(welcome_len));
}
let sep: String = path::MAIN_SEPARATOR.to_string();
let mut progress: usize;
let mut old_progress: usize = 1;
if !input_file.contains(&sep) && !input_file.contains("/") {
input_file = format!("{}{}", working_directory, input_file);
}
if !output_file.contains(&sep) && !output_file.contains("/") {
output_file = format!("{}{}", working_directory, output_file);
}
if configurations.verbose_mode {
println!("Reading data...")
};
let input = Arc::new(Raster::new(&input_file, "r")?);
let rows = input.configs.rows as isize;
let columns = input.configs.columns as isize;
let nodata = input.configs.nodata;
let resolution = input.configs.resolution_x; // assume square
let path_parts: Vec<&str> = output_file.rsplitn(2, ".").collect();
let mut outputs: [Raster; 8] = [
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "Slp", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "Asp", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "ProC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "PlaC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "LonC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "CrsC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "PrCM", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "GoF", &path_parts[0]), &input)
];
let start = Instant::now();
// no weights simplifies matrices
let offset = (filter_size - 1) / 2;
let num_cells = filter_size * filter_size;
// determine filter offsets
let mut dx = vec![0isize; num_cells];
let mut dy = vec![0isize; num_cells];
let mut idx = 0usize;
for i in 0..filter_size {
for j in 0..filter_size {
// cast to isize before subtracting to avoid usize underflow
dx[idx] = j as isize - offset as isize;
dy[idx] = i as isize - offset as isize;
idx += 1;
}
}
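// For example, with filter_size = 3 (offset = 1) the loops enumerate the 3x3
// neighbourhood in row-major order:
// dx = [-1, 0, 1, -1, 0, 1, -1, 0, 1]
// dy = [-1, -1, -1, 0, 0, 0, 1, 1, 1]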
let num_procs = num_cpus::get() as isize;
let (tx, rx) = mpsc::channel();
for tid in 0..num_procs {
let input = input.clone();
let dx = dx.clone();
let dy = dy.clone();
let tx = tx.clone();
// let a_decomp = a_decomp.clone();
thread::spawn(move || {
let mut z: f64;
let mut zi: f64;
for row in (0..rows).filter(|r| r % num_procs == tid) {
let mut slopes = vec![nodata; columns as usize];
let mut aspects = vec![nodata; columns as usize];
let mut prof_cs = vec![nodata; columns as usize];
let mut plan_cs = vec![nodata; columns as usize];
let mut long_cs = vec![nodata; columns as usize];
let mut cross_cs = vec![nodata; columns as usize];
let mut procmin_cs = vec![nodata; columns as usize];
let mut gofs = vec![nodata; columns as usize];
for col in 0..columns {
z = input[(row, col)];
if z != nodata {
let (mut zx2, mut zy2, mut zxy, mut zx, mut zy, mut _zw) = (0f64,0f64,0f64,0f64,0f64,0f64);
let (mut x2, mut x2y2, mut x4) = (0f64, 0f64, 0f64);
let mut num_valid = 0usize;
let (mut z_pred, mut z_act): (f64, f64);
let (mut sum_x, mut sum_y, mut sum_xy, mut sum_xx, mut sum_yy) = (0f64, 0f64, 0f64, 0f64, 0f64);
let (r, n): (f64, f64);
let mut xs = vec![];
let mut ys = vec![]; | for c in 0..num_cells {
zi = input[((row + dy[c] as isize), (col + dx[c] as isize))];
if zi != nodata {
xs.push(dx[c] as f64 * resolution);
ys.push(dy[c] as f64 * resolution);
zs.push(zi - z); // elevation relative to center
num_valid += 1;
}
}
if num_valid >= 8 { // need at least eight valid samples to fit the surface
// compute sums
for i in 0..num_valid {
zx2 += zs[i] * xs[i].powi(2);
zy2 += zs[i] * ys[i].powi(2);
zxy += zs[i] * xs[i] * ys[i];
zx += zs[i] * xs[i];
zy += zs[i] * ys[i];
_zw += zs[i];
x2 += xs[i].powi(2);
x2y2 += xs[i].powi(2) * ys[i].powi(2);
x4 += xs[i].powi(4);
}
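// Because the window is centred on the target cell with symmetric offsets,
// the odd moments (sums of x, y, x^3, x*y, x^2*y, ...) cancel for a fully
// valid window, which is why the 5x5 normal-equation matrix assembled below
// is nearly diagonal: only the x^4 and x^2*y^2 moments couple the two
// squared terms. (With nodata holes the symmetry is only approximate.)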
let a = Matrix5::from_rows(&[
RowVector5::new(x4, x2y2, 0f64, 0f64, 0f64),
RowVector5::new(x2y2, x4, 0f64, 0f64, 0f64),
RowVector5::new(0f64,0f64,x2y2, 0f64, 0f64),
RowVector5::new(0f64, 0f64, 0f64, x2, 0f64),
RowVector5::new(0f64, 0f64, 0f64, 0f64, x2),
]);
let b = Vector5::new(zx2, zy2, zxy, zx, zy);
let fitted_surface = Quadratic2d::from_normals_origin(a, b);
for i in 0..num_valid {
z_act = zs[i];
sum_x += z_act;
sum_xx += z_act * z_act;
z_pred = fitted_surface.solve(xs[i], ys[i]);
sum_y += z_pred;
sum_yy += z_pred * z_pred;
sum_xy += z_act * z_pred;
}
n = num_valid as f64;
let noom = n * sum_xy - (sum_x * sum_y);
let den = (n * sum_xx - (sum_x * sum_x)).sqrt() * ((n * sum_yy - (sum_y * sum_y)).sqrt());
if noom == 0f64 || den == 0f64 {
r = 0f64;
} else {
r = noom / den;
}
slopes[col as usize] = fitted_surface.slope();
aspects[col as usize] = fitted_surface.aspect();
prof_cs[col as usize] = fitted_surface.profile_convexity();
plan_cs[col as usize] = fitted_surface.plan_convexity();
long_cs[col as usize] = fitted_surface.longitudinal_curvature();
cross_cs[col as usize] = fitted_surface.cross_sectional_curvature();
procmin_cs[col as usize] = fitted_surface.min_prof_convexity();
gofs[col as usize] = r * r;
}
}
}
tx.send(
(row,
slopes,
aspects,
prof_cs,
plan_cs,
long_cs,
cross_cs,
procmin_cs,
gofs)
).unwrap();
}
});
}
for row in 0..rows {
let data = rx.recv().expect("Error receiving data from thread.");
outputs[0].set_row_data(data.0, data.1);
outputs[1].set_row_data(data.0, data.2);
outputs[2].set_row_data(data.0, data.3);
outputs[3].set_row_data(data.0, data.4);
outputs[4].set_row_data(data.0, data.5);
outputs[5].set_row_data(data.0, data.6);
outputs[6].set_row_data(data.0, data.7);
outputs[7].set_row_data(data.0, data.8);
if configurations.verbose_mode {
progress = (100.0_f64 * row as f64 / (rows - 1) as f64) as usize;
if progress != old_progress {
println!("Performing analysis: {}%", progress);
old_progress = progress;
}
}
}
let elapsed_time = get_formatted_elapsed_time(start);
if configurations.verbose_mode {
println!("Saving data...")
};
for o in 0..outputs.len() {
outputs[o].configs.palette = "grey.plt".to_string();
outputs[o].add_metadata_entry(format!(
"Created by whitebox_tools\' {} tool",
tool_name
));
outputs[o].add_metadata_entry(format!("Input file: {}", input_file));
outputs[o].add_metadata_entry(format!("Elapsed Time (excluding I/O): {}", elapsed_time));
let _ = match outputs[o].write() {
Ok(_) => {
if configurations.verbose_mode {
println!("Output file {:?} written", o+1);
}
}
Err(e) => return Err(e),
};
}
if configurations.verbose_mode {
println!(
"{}",
&format!("Elapsed Time (excluding I/O): {}", elapsed_time)
);
}
Ok(())
}
// Equation of a 2d quadratic model:
// z(x,y) = ax^2 + by^2 + cxy + dx + ey + f
#[derive(Default, Clone, Copy)]
struct Quadratic2d {
a: f64,
b: f64,
c: f64,
d: f64,
e: f64,
f: f64
}
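// The fitted surface is evaluated with the model above. A minimal sketch of
// such an evaluation (the actual `solve` method lives later in the full
// source; this is shown only for illustration):
//
//     fn solve(&self, x: f64, y: f64) -> f64 {
//         self.a * x * x + self.b * y * y + self.c * x * y
//             + self.d * x + self.e * y + self.f
//     }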
impl Quadratic2d {
fn new(a: f64, b: f64, c: f64, d: f64, e: f64, f: f64) -> Quadratic2d {
Quadratic2d {
a: a,
b: b,
c: c,
d: d,
e: e,
f: f
}
}
// solves a system of normal equations ax = b
// fn from_normal_equations(a: Matrix6<f64>, b: Vector6<f64>) -> Quadratic2d {
// let decomp = a.lu();
// if decomp.is_invertible() {
// let x = decomp.solve(&b).expect("Linear resolution failed.");
// Quadratic2d::new(
// *x.get(0).unwrap(), // a
// *x.get(1).unwrap(), // b
// *x.get(2).unwrap(), // c
// *x.get(3).unwrap(), // d
// *x.get(4).unwrap(), // e
// *x.get(5).unwrap() // f
// )
// } else {
// Quadratic2d::new(0f64,0f64,0f64,0f64,0f64,0f64)
// }
// }
fn from_normals_origin(a: Matrix5<f64>, b: Vector5<f64>) -> Quadratic2d {
let decomp = a.lu();
if decomp.is_invertible() {
let x = decomp.solve(&b).expect("Linear resolution failed.");
Quadratic2d::new(
*x.get(0).unwrap(), // a
*x.get(1).unwrap(), // b
*x.get(2).unwrap(), // c
*x.get(3).unwrap(), // d
*x.get(4).unwrap(), | let mut zs = vec![];
| random_line_split |
main.rs | keyval = true;
}
let flag_val = vec[0].to_lowercase().replace("--", "-");
if flag_val == "-d" || flag_val == "-dem" {
if keyval {
input_file = vec[1].to_string();
} else {
input_file = args[i + 1].to_string();
}
} else if flag_val == "-o" || flag_val == "-output" {
if keyval {
output_file = vec[1].to_string();
} else {
output_file = args[i + 1].to_string();
}
} else if flag_val == "-filter" {
if keyval {
filter_size = vec[1]
.to_string()
.parse::<f32>()
.expect(&format!("Error parsing {}", flag_val))
as usize;
} else {
filter_size = args[i + 1]
.to_string()
.parse::<f32>()
.expect(&format!("Error parsing {}", flag_val))
as usize;
}
}
}
if filter_size < 3 { filter_size = 3; }
// The filter dimensions must be odd numbers such that there is a middle pixel
if (filter_size as f64 / 2f64).floor() == (filter_size as f64 / 2f64) {
filter_size += 1;
}
if configurations.verbose_mode {
let welcome_len = format!("* Welcome to {} *", tool_name).len().max(28);
// 28 = length of the 'Powered by' statement.
println!("{}", "*".repeat(welcome_len));
println!("* Welcome to {} {}*", tool_name, " ".repeat(welcome_len - 15 - tool_name.len()));
println!("* Powered by WhiteboxTools {}*", " ".repeat(welcome_len - 28));
println!("* www.whiteboxgeo.com {}*", " ".repeat(welcome_len - 23));
println!("{}", "*".repeat(welcome_len));
}
let sep: String = path::MAIN_SEPARATOR.to_string();
let mut progress: usize;
let mut old_progress: usize = 1;
if !input_file.contains(&sep) && !input_file.contains("/") {
input_file = format!("{}{}", working_directory, input_file);
}
if !output_file.contains(&sep) && !output_file.contains("/") {
output_file = format!("{}{}", working_directory, output_file);
}
if configurations.verbose_mode {
println!("Reading data...")
};
let input = Arc::new(Raster::new(&input_file, "r")?);
let rows = input.configs.rows as isize;
let columns = input.configs.columns as isize;
let nodata = input.configs.nodata;
let resolution = input.configs.resolution_x; // assume square
let path_parts: Vec<&str> = output_file.rsplitn(2, ".").collect();
let mut outputs: [Raster; 8] = [
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "Slp", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "Asp", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "ProC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "PlaC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "LonC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "CrsC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "PrCM", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "GoF", &path_parts[0]), &input)
];
let start = Instant::now();
// no weights simplifies matrices
let offset = (filter_size - 1) / 2;
let num_cells = filter_size * filter_size;
// determine filter offsets
let mut dx = vec![0isize; num_cells];
let mut dy = vec![0isize; num_cells];
let mut idx = 0usize;
for i in 0..filter_size {
for j in 0..filter_size {
// cast to isize before subtracting to avoid usize underflow
dx[idx] = j as isize - offset as isize;
dy[idx] = i as isize - offset as isize;
idx += 1;
}
}
let num_procs = num_cpus::get() as isize;
let (tx, rx) = mpsc::channel();
for tid in 0..num_procs {
let input = input.clone();
let dx = dx.clone();
let dy = dy.clone();
let tx = tx.clone();
// let a_decomp = a_decomp.clone();
thread::spawn(move || {
let mut z: f64;
let mut zi: f64;
for row in (0..rows).filter(|r| r % num_procs == tid) {
let mut slopes = vec![nodata; columns as usize];
let mut aspects = vec![nodata; columns as usize];
let mut prof_cs = vec![nodata; columns as usize];
let mut plan_cs = vec![nodata; columns as usize];
let mut long_cs = vec![nodata; columns as usize];
let mut cross_cs = vec![nodata; columns as usize];
let mut procmin_cs = vec![nodata; columns as usize];
let mut gofs = vec![nodata; columns as usize];
for col in 0..columns {
z = input[(row, col)];
if z != nodata {
let (mut zx2, mut zy2, mut zxy, mut zx, mut zy, mut _zw) = (0f64,0f64,0f64,0f64,0f64,0f64);
let (mut x2, mut x2y2, mut x4) = (0f64, 0f64, 0f64);
let mut num_valid = 0usize;
let (mut z_pred, mut z_act): (f64, f64);
let (mut sum_x, mut sum_y, mut sum_xy, mut sum_xx, mut sum_yy) = (0f64, 0f64, 0f64, 0f64, 0f64);
let (r, n): (f64, f64);
let mut xs = vec![];
let mut ys = vec![];
let mut zs = vec![];
for c in 0..num_cells {
zi = input[((row + dy[c] as isize), (col + dx[c] as isize))];
if zi != nodata {
xs.push(dx[c] as f64 * resolution);
ys.push(dy[c] as f64 * resolution);
zs.push(zi - z); // elevation relative to center
num_valid += 1;
}
}
if num_valid >= 8 { // need at least eight valid samples to fit the surface
// compute sums
for i in 0..num_valid {
zx2 += zs[i] * xs[i].powi(2);
zy2 += zs[i] * ys[i].powi(2);
zxy += zs[i] * xs[i] * ys[i];
zx += zs[i] * xs[i];
zy += zs[i] * ys[i];
_zw += zs[i];
x2 += xs[i].powi(2);
x2y2 += xs[i].powi(2) * ys[i].powi(2);
x4 += xs[i].powi(4);
}
let a = Matrix5::from_rows(&[
RowVector5::new(x4, x2y2, 0f64, 0f64, 0f64),
RowVector5::new(x2y2, x4, 0f64, 0f64, 0f64),
RowVector5::new(0f64,0f64,x2y2, 0f64, 0f64),
RowVector5::new(0f64, 0f64, 0f64, x2, 0f64),
RowVector5::new(0f64, 0f64, 0f64, 0f64, x2),
]);
let b = Vector5::new(zx2, zy2, zxy, zx, zy);
let fitted_surface = Quadratic2d::from_normals_origin(a, b);
for i in 0..num_valid {
z_act = zs[i];
sum_x += z_act;
sum_xx += z_act * z_act;
z_pred = fitted_surface.solve(xs[i], ys[i]);
sum_y += z_pred;
sum_yy += z_pred * z_pred;
sum_xy += z_act * z_pred;
}
n = num_valid as f64;
let noom = n * sum_xy - (sum_x * sum_y);
let den = (n * sum_xx - (sum_x * sum_x)).sqrt() * ((n * sum_yy - (sum_y * sum_y)).sqrt());
if noom == 0f64 || den == 0f64 {
r = 0f64;
} else {
r = noom / den;
}
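// `r` is the Pearson correlation between observed and fitted relative
// elevations over the window; its square is written to the goodness-of-fit
// (GoF) output grid below.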
slopes[col as usize] = fitted_surface.slope();
aspects[col as usize] = fitted_surface.aspect();
prof_cs[col as usize] = fitted_surface.profile_convexity();
plan_cs[col as usize] = fitted_surface.plan_convexity();
long_cs[col as usize] = fitted_surface.longitudinal_curvature();
cross_cs[col as usize] = fitted_surface.cross_sectional_curvature();
procmin_cs[col as usize] = fitted_surface.min_prof_convexity();
gofs[col as usize] = r * r;
}
}
}
tx.send(
(row,
slopes,
aspects,
prof_cs,
plan_cs,
long_cs,
cross_cs,
procmin_cs,
gofs)
).unwrap();
}
});
}
for row in 0..rows {
let data = rx.recv().expect("Error receiving data from thread.");
outputs[0].set_row_data(data.0, data.1);
outputs[1].set_row_data(data.0, data.2);
outputs[2].set_row_data(data.0, data.3);
outputs[3].set_row_data(data.0, data.4);
outputs[4].set_row_data(data.0, data.5);
outputs[5].set_row_data(data.0, data.6);
outputs[6].set_row_data(data.0, data.7);
outputs[7].set_row_data(data.0, data.8);
if configurations.verbose_mode {
progress = (100.0_f64 * row as f64 / (rows - 1) as f64) as usize;
if progress != old_progress {
println!("Performing analysis: {}%", progress);
old_progress = progress;
}
}
}
let elapsed_time = get_formatted_elapsed_time(start);
if configurations.verbose_mode {
println!("Saving data...")
};
for o in 0..outputs.len() {
outputs[o].configs.palette = "grey.plt".to_string();
outputs[o].add_metadata_entry(format!(
"Created by whitebox_tools\' {} tool",
tool_name
));
outputs[o].add_metadata_entry(format!("Input file: {}", input_file));
outputs[o].add_metadata_entry(format!("Elapsed Time (excluding I/O): {}", elapsed_time));
let _ = match outputs[o].write() {
Ok(_) => {
if configurations.verbose_mode {
println!("Output file {:?} written", o+1);
}
}
Err(e) => return Err(e),
};
}
if configurations.verbose_mode {
println!(
"{}",
&format!("Elapsed Time (excluding I/O): {}", elapsed_time)
);
}
Ok(())
}
// Equation of a 2d quadratic model:
// z(x,y) = ax^2 + by^2 + cxy + dx + ey + f
#[derive(Default, Clone, Copy)]
struct Quadratic2d {
a: f64,
b: f64,
c: f64,
d: f64,
e: f64,
f: f64
}
impl Quadratic2d {
fn new(a: f64, b: f64, c: f64, d: f64, e: f64, f: f64) -> Quadratic2d {
Quadratic2d {
a: a,
b: b,
c: c,
d: d,
e: e,
f: f
}
}
// solves a system of normal equations ax = b
// fn from_normal_equations(a: Matrix6<f64>, b: Vector6<f64>) -> Quadratic2d {
// let decomp = a.lu();
// if decomp.is_invertible() {
// let x = decomp.solve(&b).expect("Linear resolution failed.");
// Quadratic2d::new(
// *x.get(0).unwrap(), // a
// *x.get(1).unwrap(), // b
// *x.get(2).unwrap(), // c
// *x.get(3).unwrap(), // d
// *x.get(4).unwrap(), // e
// *x.get(5).unwrap() // f
// )
// } else {
// Quadratic2d::new(0f64,0f64,0f64,0f64,0f64,0f64)
// }
// }
fn from_normals_origin(a: Matrix5<f64>, b: Vector5<f64>) -> Quadratic2d {
let decomp = a.lu();
if decomp.is_invertible() {
let x = decomp.solve(&b).expect("Linear resolution failed.");
Quadratic2d::new(
*x.get(0).unwrap(), // a
*x.get(1).unwrap(), // b
*x.get(2).unwrap(), // c
*x.get(3).unwrap(), // d
*x.get(4).unwrap(), // e
0f64, //f
)
} else {
Quadratic2d::new(0f64,0f64,0f64,0f64,0f64,0f64)
}
}
// fn from_decomposed_normals(
// decomp: LU<f64, nalgebra::base::dimension::U6, nalgebra::base::dimension::U6>,
// b: Vector6<f64>
// ) -> Quadratic2d {
// if decomp.is_invertible() {
// let x = decomp.solve(&b).expect("Linear resolution failed.");
// Quadratic2d::new(
// *x.get(0).unwrap(), // a
// *x.get(1).unwrap(), // b
// *x.get(2).unwrap(), // c
// *x.get(3).unwrap(), // d
// *x.get(4).unwrap(), // e
// *x.get(5).unwrap() // f
// )
// } else {
// Quadratic2d::new(0f64,0f64,0f64,0f64,0f64,0f64)
// }
// }
fn slope(&self) -> f64 {
// (self.a*self.a + self.b*self.b).sqrt().atan().to_degrees()
(self.d*self.d + self.e*self.e).sqrt().atan()
}
fn aspect(&self) -> f64 {
if self.e == 0f64 || self.d == 0f64 {
0f64
} else {
(self.e / self.d).atan()
}
}
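// Note: because the surface is constrained through the origin (f = 0), the
// gradient at the cell centre reduces to the first-order coefficients (d, e),
// so slope = atan(sqrt(d^2 + e^2)) above, and aspect is derived from the same
// pair of terms.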
fn profile_convexity(&self) -> f64 {
let nu = -200f64 * ((self.a*self.d*self.d) + (self.b*self.e*self.e) + (self.c*self.d*self.e));
let de = ((self.e*self.e) + (self.d*self.d)) * (1f64 + (self.d*self.d) + (self.e*self.e)).powf(1.5);
if nu == 0f64 || de == 0f64 | {
0f64
} | conditional_block |
|
main.rs | // The filter dimensions must be odd numbers such that there is a middle pixel
if (filter_size as f64 / 2f64).floor() == (filter_size as f64 / 2f64) {
filter_size += 1;
}
if configurations.verbose_mode {
let welcome_len = format!("* Welcome to {} *", tool_name).len().max(28);
// 28 = length of the 'Powered by' statement.
println!("{}", "*".repeat(welcome_len));
println!("* Welcome to {} {}*", tool_name, " ".repeat(welcome_len - 15 - tool_name.len()));
println!("* Powered by WhiteboxTools {}*", " ".repeat(welcome_len - 28));
println!("* www.whiteboxgeo.com {}*", " ".repeat(welcome_len - 23));
println!("{}", "*".repeat(welcome_len));
}
let sep: String = path::MAIN_SEPARATOR.to_string();
let mut progress: usize;
let mut old_progress: usize = 1;
if !input_file.contains(&sep) && !input_file.contains("/") {
input_file = format!("{}{}", working_directory, input_file);
}
if !output_file.contains(&sep) && !output_file.contains("/") {
output_file = format!("{}{}", working_directory, output_file);
}
if configurations.verbose_mode {
println!("Reading data...")
};
let input = Arc::new(Raster::new(&input_file, "r")?);
let rows = input.configs.rows as isize;
let columns = input.configs.columns as isize;
let nodata = input.configs.nodata;
let resolution = input.configs.resolution_x; // assume square
let path_parts: Vec<&str> = output_file.rsplitn(2, ".").collect();
let mut outputs: [Raster; 8] = [
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "Slp", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "Asp", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "ProC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "PlaC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "LonC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "CrsC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "PrCM", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "GoF", &path_parts[0]), &input)
];
let start = Instant::now();
// no weights simplifies matrices
let offset = (filter_size - 1) / 2;
let num_cells = filter_size * filter_size;
// determine filter offsets
let mut dx = vec![0isize; num_cells];
let mut dy = vec![0isize; num_cells];
let mut idx = 0usize;
for i in 0..filter_size {
for j in 0..filter_size {
// cast to isize before subtracting to avoid usize underflow
dx[idx] = j as isize - offset as isize;
dy[idx] = i as isize - offset as isize;
idx += 1;
}
}
let num_procs = num_cpus::get() as isize;
let (tx, rx) = mpsc::channel();
for tid in 0..num_procs {
let input = input.clone();
let dx = dx.clone();
let dy = dy.clone();
let tx = tx.clone();
// let a_decomp = a_decomp.clone();
thread::spawn(move || {
let mut z: f64;
let mut zi: f64;
for row in (0..rows).filter(|r| r % num_procs == tid) {
let mut slopes = vec![nodata; columns as usize];
let mut aspects = vec![nodata; columns as usize];
let mut prof_cs = vec![nodata; columns as usize];
let mut plan_cs = vec![nodata; columns as usize];
let mut long_cs = vec![nodata; columns as usize];
let mut cross_cs = vec![nodata; columns as usize];
let mut procmin_cs = vec![nodata; columns as usize];
let mut gofs = vec![nodata; columns as usize];
for col in 0..columns {
z = input[(row, col)];
if z != nodata {
let (mut zx2, mut zy2, mut zxy, mut zx, mut zy, mut _zw) = (0f64,0f64,0f64,0f64,0f64,0f64);
let (mut x2, mut x2y2, mut x4) = (0f64, 0f64, 0f64);
let mut num_valid = 0usize;
let (mut z_pred, mut z_act): (f64, f64);
let (mut sum_x, mut sum_y, mut sum_xy, mut sum_xx, mut sum_yy) = (0f64, 0f64, 0f64, 0f64, 0f64);
let (r, n): (f64, f64);
let mut xs = vec![];
let mut ys = vec![];
let mut zs = vec![];
for c in 0..num_cells {
zi = input[((row + dy[c] as isize), (col + dx[c] as isize))];
if zi != nodata {
xs.push(dx[c] as f64 * resolution);
ys.push(dy[c] as f64 * resolution);
zs.push(zi - z); // elevation relative to center
num_valid += 1;
}
}
if num_valid >= 8 { // need at least eight valid samples to fit the surface
// compute sums
for i in 0..num_valid {
zx2 += zs[i] * xs[i].powi(2);
zy2 += zs[i] * ys[i].powi(2);
zxy += zs[i] * xs[i] * ys[i];
zx += zs[i] * xs[i];
zy += zs[i] * ys[i];
_zw += zs[i];
x2 += xs[i].powi(2);
x2y2 += xs[i].powi(2) * ys[i].powi(2);
x4 += xs[i].powi(4);
}
let a = Matrix5::from_rows(&[
RowVector5::new(x4, x2y2, 0f64, 0f64, 0f64),
RowVector5::new(x2y2, x4, 0f64, 0f64, 0f64),
RowVector5::new(0f64,0f64,x2y2, 0f64, 0f64),
RowVector5::new(0f64, 0f64, 0f64, x2, 0f64),
RowVector5::new(0f64, 0f64, 0f64, 0f64, x2),
]);
let b = Vector5::new(zx2, zy2, zxy, zx, zy);
let fitted_surface = Quadratic2d::from_normals_origin(a, b);
for i in 0..num_valid {
z_act = zs[i];
sum_x += z_act;
sum_xx += z_act * z_act;
z_pred = fitted_surface.solve(xs[i], ys[i]);
sum_y += z_pred;
sum_yy += z_pred * z_pred;
sum_xy += z_act * z_pred;
}
n = num_valid as f64;
let noom = n * sum_xy - (sum_x * sum_y);
let den = (n * sum_xx - (sum_x * sum_x)).sqrt() * ((n * sum_yy - (sum_y * sum_y)).sqrt());
if noom == 0f64 || den == 0f64 {
r = 0f64;
} else {
r = noom / den;
}
slopes[col as usize] = fitted_surface.slope();
aspects[col as usize] = fitted_surface.aspect();
prof_cs[col as usize] = fitted_surface.profile_convexity();
plan_cs[col as usize] = fitted_surface.plan_convexity();
long_cs[col as usize] = fitted_surface.longitudinal_curvature();
cross_cs[col as usize] = fitted_surface.cross_sectional_curvature();
procmin_cs[col as usize] = fitted_surface.min_prof_convexity();
gofs[col as usize] = r * r;
}
}
}
tx.send(
(row,
slopes,
aspects,
prof_cs,
plan_cs,
long_cs,
cross_cs,
procmin_cs,
gofs)
).unwrap();
}
});
}
for row in 0..rows {
let data = rx.recv().expect("Error receiving data from thread.");
outputs[0].set_row_data(data.0, data.1);
outputs[1].set_row_data(data.0, data.2);
outputs[2].set_row_data(data.0, data.3);
outputs[3].set_row_data(data.0, data.4);
outputs[4].set_row_data(data.0, data.5);
outputs[5].set_row_data(data.0, data.6);
outputs[6].set_row_data(data.0, data.7);
outputs[7].set_row_data(data.0, data.8);
if configurations.verbose_mode {
progress = (100.0_f64 * row as f64 / (rows - 1) as f64) as usize;
if progress != old_progress {
println!("Performing analysis: {}%", progress);
old_progress = progress;
}
}
}
let elapsed_time = get_formatted_elapsed_time(start);
if configurations.verbose_mode {
println!("Saving data...")
};
for o in 0..outputs.len() {
outputs[o].configs.palette = "grey.plt".to_string();
outputs[o].add_metadata_entry(format!(
"Created by whitebox_tools\' {} tool",
tool_name
));
outputs[o].add_metadata_entry(format!("Input file: {}", input_file));
outputs[o].add_metadata_entry(format!("Elapsed Time (excluding I/O): {}", elapsed_time));
let _ = match outputs[o].write() {
Ok(_) => {
if configurations.verbose_mode {
println!("Output file {:?} written", o+1);
}
}
Err(e) => return Err(e),
};
}
if configurations.verbose_mode {
println!(
"{}",
&format!("Elapsed Time (excluding I/O): {}", elapsed_time)
);
}
Ok(())
}
// Equation of a 2d quadratic model:
// z(x,y) = ax^2 + by^2 + cxy + dx + ey + f
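// At the window centre (x, y) = (0, 0) the fitted value is f, the gradient is
// (d, e), and the second-order behaviour is governed by a, b and c; the
// surface measures implemented below therefore depend only on a..e.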
#[derive(Default, Clone, Copy)]
struct Quadratic2d {
a: f64,
b: f64,
c: f64,
d: f64,
e: f64,
f: f64
}
impl Quadratic2d {
fn new(a: f64, b: f64, c: f64, d: f64, e: f64, f: f64) -> Quadratic2d {
Quadratic2d {
a: a,
b: b,
c: c,
d: d,
e: e,
f: f
}
}
// solves a system of normal equations ax = b
// fn from_normal_equations(a: Matrix6<f64>, b: Vector6<f64>) -> Quadratic2d {
// let decomp = a.lu();
// if decomp.is_invertible() {
// let x = decomp.solve(&b).expect("Linear resolution failed.");
// Quadratic2d::new(
// *x.get(0).unwrap(), // a
// *x.get(1).unwrap(), // b
// *x.get(2).unwrap(), // c
// *x.get(3).unwrap(), // d
// *x.get(4).unwrap(), // e
// *x.get(5).unwrap() // f
// )
// } else {
// Quadratic2d::new(0f64,0f64,0f64,0f64,0f64,0f64)
// }
// }
fn from_normals_origin(a: Matrix5<f64>, b: Vector5<f64>) -> Quadratic2d {
let decomp = a.lu();
if decomp.is_invertible() {
let x = decomp.solve(&b).expect("Linear resolution failed.");
Quadratic2d::new(
*x.get(0).unwrap(), // a
*x.get(1).unwrap(), // b
*x.get(2).unwrap(), // c
*x.get(3).unwrap(), // d
*x.get(4).unwrap(), // e
0f64, //f
)
} else {
Quadratic2d::new(0f64,0f64,0f64,0f64,0f64,0f64)
}
}
// fn from_decomposed_normals(
// decomp: LU<f64, nalgebra::base::dimension::U6, nalgebra::base::dimension::U6>,
// b: Vector6<f64>
// ) -> Quadratic2d {
// if decomp.is_invertible() {
// let x = decomp.solve(&b).expect("Linear resolution failed.");
// Quadratic2d::new(
// *x.get(0).unwrap(), // a
// *x.get(1).unwrap(), // b
// *x.get(2).unwrap(), // c
// *x.get(3).unwrap(), // d
// *x.get(4).unwrap(), // e
// *x.get(5).unwrap() // f
// )
// } else {
// Quadratic2d::new(0f64,0f64,0f64,0f64,0f64,0f64)
// }
// }
fn slope(&self) -> f64 {
// (self.a*self.a + self.b*self.b).sqrt().atan().to_degrees()
(self.d*self.d + self.e*self.e).sqrt().atan()
}
fn aspect(&self) -> f64 {
if self.e == 0f64 || self.d == 0f64 {
0f64
} else {
(self.e / self.d).atan()
}
}
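// The render loop calls `fitted_surface.solve(x, y)` to evaluate the fitted
// surface at window coordinates, but its body is missing from this excerpt.
// A minimal sketch consistent with the model equation above:
fn solve(&self, x: f64, y: f64) -> f64 {
self.a * x * x + self.b * y * y + self.c * x * y + self.d * x + self.e * y + self.f
}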
fn profile_convexity(&self) -> f64 {
let nu = -200f64 * ((self.a*self.d*self.d) + (self.b*self.e*self.e) + (self.c*self.d*self.e));
let de = ((self.e*self.e) + (self.d*self.d)) * (1f64 + (self.d*self.d) + (self.e*self.e)).powf(1.5);
if nu == 0f64 || de == 0f64 {
0f64
} else {
nu / de
}
}
fn plan_convexity(&self) -> f64 {
let nu = 200f64 * ((self.b*self.d*self.d) + (self.a*self.e*self.e) - (self.c*self.d*self.e));
let de = ((self.e*self.e) + (self.d*self.d)).powf(1.5);
if nu == 0f64 || de == 0f64 {
0f64
} else {
nu / de
}
}
fn longitudinal_curvature(&self) -> f64 {
let nu = (self.a*self.d*self.d) + (self.b*self.e*self.e) + (self.c*self.d*self.e);
let de = (self.d*self.d) + (self.e*self.e);
if nu == 0f64 || de == 0f64 {
0f64
} else{
-2f64*(nu / de)
}
}
fn cross_sectional_curvature(&self) -> f64 {
let nu = (self.b*self.d*self.d) + (self.a*self.e*self.e) - (self.c*self.d*self.e);
let de = (self.d*self.d) + (self.e*self.e);
if nu == 0f64 || de == 0f64 {
0f64
} else{
-2f64*(nu / de)
}
}
// fn max_prof_convexity(&self) -> f64 {
// (self.a * -1f64) - self.b + ((self.a - self.b).powi(2) + (self.c * self.c)).sqrt()
// }
fn min_prof_convexity(&self) -> f64 {
// Reconstructed: the original body is truncated in this excerpt; it mirrors
// the commented-out max_prof_convexity above with the square-root term
// subtracted instead of added.
(self.a * -1f64) - self.b - ((self.a - self.b).powi(2) + (self.c * self.c)).sqrt()
}
}
// main.rs
// (The excerpt resumes mid-file; the signature and `exe_name` binding are
// reconstructed from the usage below.)
fn help() {
let exe_name = &format!("local_quadratic_regression{}", ext);
let sep: String = path::MAIN_SEPARATOR.to_string();
let s = r#"
local_quadratic_regression Help
This tool is an implementation of the constrained quadratic regression algorithm
using a flexible window size described in Wood (1996)
The following commands are recognized:
help Prints help information.
run Runs the tool.
version Prints the tool version information.
The following flags can be used with the 'run' command:
-d, --dem Name of the input DEM raster file.
-o, --output Name of the output raster file.
--filter Edge length of the filter kernel.
Input/output file names can be fully qualified, or can rely on the working directory contained in
the WhiteboxTools settings.json file.
Example Usage:
>>.*EXE_NAME run --dem=DEM.tif --output=out_ras.tif --filter=15
"#
.replace("*", &sep)
.replace("EXE_NAME", exe_name);
println!("{}", s);
}
fn version() {
const VERSION: Option<&'static str> = option_env!("CARGO_PKG_VERSION");
println!(
"local_quadratic_regression v{} by Dr. John B. Lindsay (c) 2021.",
VERSION.unwrap_or("Unknown version")
);
}
fn get_tool_name() -> String {
String::from("LocalQuadraticRegression") // This should be camel case and is a reference to the tool name.
}
fn run(args: &Vec<String>) -> Result<(), std::io::Error> {
let tool_name = get_tool_name();
let sep: String = path::MAIN_SEPARATOR.to_string();
// Read in the environment variables and get the necessary values
let configurations = whitebox_common::configs::get_configs()?;
let mut working_directory = configurations.working_directory.clone();
if !working_directory.is_empty() && !working_directory.ends_with(&sep) {
working_directory += &sep;
}
let mut input_file = String::new();
let mut output_file = String::new();
let mut filter_size = 3usize;
if args.len() == 0 {
return Err(Error::new(
ErrorKind::InvalidInput,
"Tool run with no parameters.",
));
}
for i in 0..args.len() {
let mut arg = args[i].replace("\"", "");
arg = arg.replace("\'", "");
let cmd = arg.split("="); // in case an equals sign was used
let vec = cmd.collect::<Vec<&str>>();
let mut keyval = false;
if vec.len() > 1 {
keyval = true;
}
let flag_val = vec[0].to_lowercase().replace("--", "-");
if flag_val == "-d" || flag_val == "-dem" {
if keyval {
input_file = vec[1].to_string();
} else {
input_file = args[i + 1].to_string();
}
} else if flag_val == "-o" || flag_val == "-output" {
if keyval {
output_file = vec[1].to_string();
} else {
output_file = args[i + 1].to_string();
}
} else if flag_val == "-filter" {
if keyval {
filter_size = vec[1]
.to_string()
.parse::<f32>()
.expect(&format!("Error parsing {}", flag_val))
as usize;
} else {
filter_size = args[i + 1]
.to_string()
.parse::<f32>()
.expect(&format!("Error parsing {}", flag_val))
as usize;
}
}
}
if filter_size < 3 { filter_size = 3; }
// The filter dimensions must be odd numbers such that there is a middle pixel
if (filter_size as f64 / 2f64).floor() == (filter_size as f64 / 2f64) {
filter_size += 1;
}
if configurations.verbose_mode {
let welcome_len = format!("* Welcome to {} *", tool_name).len().max(28);
// 28 = length of the 'Powered by' by statement.
println!("{}", "*".repeat(welcome_len));
println!("* Welcome to {} {}*", tool_name, " ".repeat(welcome_len - 15 - tool_name.len()));
println!("* Powered by WhiteboxTools {}*", " ".repeat(welcome_len - 28));
println!("* www.whiteboxgeo.com {}*", " ".repeat(welcome_len - 23));
println!("{}", "*".repeat(welcome_len));
}
let sep: String = path::MAIN_SEPARATOR.to_string();
let mut progress: usize;
let mut old_progress: usize = 1;
if !input_file.contains(&sep) && !input_file.contains("/") {
input_file = format!("{}{}", working_directory, input_file);
}
if !output_file.contains(&sep) && !output_file.contains("/") {
output_file = format!("{}{}", working_directory, output_file);
}
if configurations.verbose_mode {
println!("Reading data...")
};
let input = Arc::new(Raster::new(&input_file, "r")?);
let rows = input.configs.rows as isize;
let columns = input.configs.columns as isize;
let nodata = input.configs.nodata;
let resolution = input.configs.resolution_x; // assume square
let path_parts: Vec<&str> = output_file.rsplitn(2, ".").collect();
let mut outputs: [Raster; 8] = [
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "Slp", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "Asp", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "ProC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "PlaC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "LonC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "CrsC", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "PrCM", &path_parts[0]), &input),
Raster::initialize_using_file(&format!("{}_{}.{}", &path_parts[1], "GoF", &path_parts[0]), &input)
];
let start = Instant::now();
// no weights simplifies matrices
let offset = (filter_size - 1) / 2;
let num_cells = filter_size * filter_size;
// determine filter offsets
let mut dx = vec![0isize; num_cells];
let mut dy = vec![0isize; num_cells];
let mut idx = 0usize;
for i in 0..filter_size {
for j in 0..filter_size {
dx[idx] = (j - offset) as isize;
dy[idx] = (i - offset) as isize;
idx += 1;
}
}
let num_procs = num_cpus::get() as isize;
let (tx, rx) = mpsc::channel();
for tid in 0..num_procs {
let input = input.clone();
let dx = dx.clone();
let dy = dy.clone();
let tx = tx.clone();
// let a_decomp = a_decomp.clone();
thread::spawn(move || {
let mut z: f64;
let mut zi: f64;
for row in (0..rows).filter(|r| r % num_procs == tid) {
let mut slopes = vec![nodata; columns as usize];
let mut aspects = vec![nodata; columns as usize];
let mut prof_cs = vec![nodata; columns as usize];
let mut plan_cs = vec![nodata; columns as usize];
let mut long_cs = vec![nodata; columns as usize];
let mut cross_cs = vec![nodata; columns as usize];
let mut procmin_cs = vec![nodata; columns as usize];
let mut gofs = vec![nodata; columns as usize];
for col in 0..columns {
z = input[(row, col)];
if z != nodata {
let (mut zx2, mut zy2, mut zxy, mut zx, mut zy, mut _zw) = (0f64,0f64,0f64,0f64,0f64,0f64);
let (mut x2, mut x2y2, mut x4) = (0f64, 0f64, 0f64);
let mut num_valid = 0usize;
let (mut z_pred, mut z_act): (f64, f64);
let (mut sum_x, mut sum_y, mut sum_xy, mut sum_xx, mut sum_yy) = (0f64, 0f64, 0f64, 0f64, 0f64);
let (r, n): (f64, f64);
let mut xs = vec![];
let mut ys = vec![];
let mut zs = vec![];
for c in 0..num_cells {
zi = input[((row + dy[c] as isize), (col + dx[c] as isize))];
if zi != nodata {
xs.push(dx[c] as f64 * resolution);
ys.push(dy[c] as f64 * resolution);
zs.push(zi - z); // elevation relative to center
num_valid += 1;
}
}
if num_valid >= 8 { // need at least eight valid samples (originally six)
// compute sums
for i in 0..num_valid {
zx2 += zs[i] * xs[i].powi(2);
zy2 += zs[i] * ys[i].powi(2);
zxy += zs[i] * xs[i] * ys[i];
zx += zs[i] * xs[i];
zy += zs[i] * ys[i];
_zw += zs[i];
x2 += xs[i].powi(2);
x2y2 += xs[i].powi(2) * ys[i].powi(2);
x4 += xs[i].powi(4);
}
let a = Matrix5::from_rows(&[
RowVector5::new(x4, x2y2, 0f64, 0f64, 0f64),
RowVector5::new(x2y2, x4, 0f64, 0f64, 0f64),
RowVector5::new(0f64,0f64,x2y2, 0f64, 0f64),
RowVector5::new(0f64, 0f64, 0f64, x2, 0f64),
RowVector5::new(0f64, 0f64, 0f64, 0f64, x2),
]);
let b = Vector5::new(zx2, zy2, zxy, zx, zy);
let fitted_surface = Quadratic2d::from_normals_origin(a, b);
for i in 0..num_valid {
z_act = zs[i];
sum_x += z_act;
sum_xx += z_act * z_act;
z_pred = fitted_surface.solve(xs[i], ys[i]);
sum_y += z_pred;
sum_yy += z_pred * z_pred;
sum_xy += z_act * z_pred;
}
n = num_valid as f64;
let noom = n * sum_xy - (sum_x * sum_y);
let den = (n * sum_xx - (sum_x * sum_x)).sqrt() * ((n * sum_yy - (sum_y * sum_y)).sqrt());
if noom == 0f64 || den == 0f64 {
r = 0f64;
} else {
r = noom / den;
}
slopes[col as usize] = fitted_surface.slope();
aspects[col as usize] = fitted_surface.aspect();
prof_cs[col as usize] = fitted_surface.profile_convexity();
plan_cs[col as usize] = fitted_surface.plan_convexity();
long_cs[col as usize] = fitted_surface.longitudinal_curvature();
cross_cs[col as usize] = fitted_surface.cross_sectional_curvature();
procmin_cs[col as usize] = fitted_surface.min_prof_convexity();
gofs[col as usize] = r * r;
}
}
}
tx.send(
(row,
slopes,
aspects,
prof_cs,
plan_cs,
long_cs,
cross_cs,
procmin_cs,
gofs)
).unwrap();
}
});
}
for row in 0..rows {
let data = rx.recv().expect("Error receiving data from thread.");
outputs[0].set_row_data(data.0, data.1);
outputs[1].set_row_data(data.0, data.2);
outputs[2].set_row_data(data.0, data.3);
outputs[3].set_row_data(data.0, data.4);
outputs[4].set_row_data(data.0, data.5);
outputs[5].set_row_data(data.0, data.6);
outputs[6].set_row_data(data.0, data.7);
outputs[7].set_row_data(data.0, data.8);
if configurations.verbose_mode {
progress = (100.0_f64 * row as f64 / (rows - 1) as f64) as usize;
if progress != old_progress {
println!("Performing analysis: {}%", progress);
old_progress = progress;
}
}
}
let elapsed_time = get_formatted_elapsed_time(start);
if configurations.verbose_mode {
println!("Saving data...")
};
for o in 0..outputs.len() {
outputs[o].configs.palette = "grey.plt".to_string();
outputs[o].add_metadata_entry(format!(
"Created by whitebox_tools\' {} tool",
tool_name
));
outputs[o].add_metadata_entry(format!("Input file: {}", input_file));
outputs[o].add_metadata_entry(format!("Elapsed Time (excluding I/O): {}", elapsed_time));
let _ = match outputs[o].write() {
Ok(_) => {
if configurations.verbose_mode {
println!("Output file {:?} written", o+1);
}
}
Err(e) => return Err(e),
};
}
if configurations.verbose_mode {
println!(
"{}",
&format!("Elapsed Time (excluding I/O): {}", elapsed_time)
);
}
Ok(())
}
// Equation of a 2d quadratic model:
// z(x,y) = ax^2 + by^2 + cxy + dx + ey + f
#[derive(Default, Clone, Copy)]
struct Quadratic2d {
a: f64,
b: f64,
c: f64,
d: f64,
e: f64,
f: f64
}
impl Quadratic2d {
fn new(a: f64, b: f64, c: f64, d: f64, e: f64, f: f64) -> Quadratic2d {
Quadratic2d {
a: a,
b: b,
c: c,
d: d,
e: e,
f: f
}
}
// solves a system of normal equations ax = b
// fn from_normal_equations(a: Matrix6<f64>, b: Vector6<f64>) -> Quadratic2d {
// let decomp = a.lu();
// if decomp.is_invertible() {
// let x = decomp.solve(&b).expect("Linear resolution failed.");
// Quadratic2d::new(
// *x.get(0).unwrap(), // a
// *x.get(1).unwrap(), // b
// *x.get(2).unwrap(), // c
// *x.get(3).unwrap(), // d
// *x.get(4).unwrap(), // e
// *x.get(5).unwrap() // f
// )
// } else {
// Quadratic2d::new(0f64,0f64,0f64,0f64,0f64,0f64)
// }
// }
fn from_normals_origin(a: Matrix5<f64>, b: Vector5<f64>) -> Quadratic2d {
let decomp = a.lu();
if decomp.is_invertible() {
let x = decomp.solve(&b).expect("Linear resolution failed.");
Quadratic2d::new(
*x.get(0).unwrap(), // a
*x.get(1).unwrap(), // b
*x.get(2).unwrap(), // c
*x.get(3).unwrap(), // d
*x.get(4).unwrap(), // e
0f64, //f
)
} else {
Quadratic2d::new(0f64,0f64,0f64,0f64,0f64,0f64)
}
}
}
// mod.rs
mod.rs | mod fifo;
mod background_fifo;
mod sprite_fifo;
use crate::bus::*;
use crate::cpu::{interrupt, InterruptType};
use crate::clock::ClockListener;
use std::fmt;
use std::cell::RefCell;
use std::rc::Rc;
use crate::graphics_driver::GraphicsDriver;
use crate::ppu::background_fifo::BackgroundFifo;
use crate::ppu::sprite_fifo::SpriteFifo;
pub const DISPLAY_WIDTH: u8 = 160;
pub const DISPLAY_HEIGHT: u8 = 144;
const PITCH: usize = DISPLAY_WIDTH as usize * DISPLAY_HEIGHT as usize;
const VIRTUAL_DISPLAY_HEIGHT: u8 = 154;
const LCDC_DISPLAY_ENABLE: u8 = 1 << 7;
const LCDC_WINDOW_TILE_MAP_SELECT: u8 = 1 << 6;
const LCDC_WINDOW_ENABLE: u8 = 1 << 5;
const LCDC_TILE_DATA_SELECT: u8 = 1 << 4;
const LCDC_TILE_MAP_SELECT: u8 = 1 << 3;
const LCDC_SPRITE_SIZE: u8 = 1 << 2; // 1: Double height
const LCDC_SPRITE_ENABLE: u8 = 1 << 1;
// const LCDC_BG_VS_WINDOW_PRIORITY: u8 = 1 << 0;
const MAX_SPRITES_PER_LINE: usize = 10;
const STAT_LYC_INTERRUPT: u8 = 1 << 6;
const STAT_OAM_INTERRUPT: u8 = 1 << 5;
const STAT_VBLANK_INTERRUPT: u8 = 1 << 4;
const STAT_HBLANK_INTERRUPT: u8 = 1 << 3;
const STAT_LYC_FLAG: u8 = 1 << 2;
const STAT_MODE_MASK: u8 = 0x03;
const VRAM_BASE_ADDRESS: Address = 0x8000;
const TILE_MAP_LO_BASE: Address = 0x1800; // VRAM Relative Address; Bus Address 0x9800;
const TILE_MAP_HI_BASE: Address = 0x1C00; // VRAM Relative Address; Bus Address 0x9C00;
const TILE_DATA_BLOCK_BASE: [Address; 3] = [0x0000, 0x0800, 0x1000]; // VRAM Relative Addresses
// Timings from "The Ultimate Game Boy Talk"
const OAM_CYCLES: u16 = 20;
const DRAW_CYCLES: u16 = 43;
const HBLANK_CYCLES: u16 = 51;
const VBLANK_LINE_CYCLES: u16 = 114;
const SCREEN_CYCLES: u16 = VBLANK_LINE_CYCLES * VIRTUAL_DISPLAY_HEIGHT as u16;
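// Sanity check on the totals: 114 machine cycles per line times 154 virtual
// lines gives 17,556 cycles per frame; at the DMG's 1,048,576 machine cycles
// per second that is roughly 59.7 frames per second.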
#[repr(u8)]
#[derive(Debug, Copy, Clone, PartialEq)]
enum Mode {
HBlank = 0,
VBlank,
OAM,
Draw,
}
#[derive(Debug, Copy, Clone)]
struct Point {
x: u16,
y: u16,
}
#[derive(Debug, Copy, Clone)]
pub struct Registers {
LCDC: Byte,
STAT: Byte,
SCY: Byte,
SCX: Byte,
// LX // Fake register specifying X position of renderer
LX: Byte,
LY: Byte,
LYC: Byte,
WY: Byte,
WX: Byte,
BGP: Byte,
OBP0: Byte,
OBP1: Byte,
dma_active: bool,
dma_address: Byte,
dma_counter: Byte,
}
#[derive(Debug)]
pub struct PPU {
on: bool,
mode: Mode,
clock: u16,
pixel_buffer: [u32; PITCH],
palette_buffer: [u32; 4],
render_flag: bool,
VRAM: [Byte; 0x2000],
OAM: [Byte; 0x100],
registers: Registers,
bgfifo: BackgroundFifo,
spfifo: SpriteFifo,
}
impl PPU {
pub fn new() -> Self {
Self {
on: false,
mode: Mode::VBlank,
clock: 0,
pixel_buffer: [0x00; PITCH],
palette_buffer: [0xFFFFFF, 0xC0C0C0, 0x404040, 0x000000],
render_flag: true,
VRAM: [0; 0x2000],
OAM: [0; 0x100],
registers: Registers {
LCDC: 0,
STAT: Mode::VBlank as u8,
SCY: 0,
SCX: 0,
LX: 0,
LY: 0,
LYC: 0,
WY: 0,
WX: 0,
BGP: 0,
OBP0: 0,
OBP1: 0,
dma_active: false,
dma_counter: 0,
dma_address: 0,
},
bgfifo: BackgroundFifo::new(),
spfifo: SpriteFifo::new(),
}
}
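// DMG palette registers (BGP, OBP0, OBP1) pack four 2-bit colour indices into
// one byte: bits 1-0 select the shade for colour 0, bits 3-2 for colour 1,
// and so on. A minimal decode sketch; it is not wired into the render path
// below, which indexes palette_buffer directly:
fn palette_shade(palette: Byte, colour_index: u8) -> u8 {
(palette >> (colour_index * 2)) & 0b11
}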
pub fn update<'a>(&mut self, driver: &mut (dyn GraphicsDriver + 'a)) {
if !self.render_flag {
return;
}
let start = crate::graphics_driver::Point { x: 0, y: 0 };
let end = crate::graphics_driver::Point {
x: DISPLAY_WIDTH as u16,
y: DISPLAY_HEIGHT as u16,
};
if !self.on {
let screen: [u32; PITCH] = [0; PITCH];
driver.render(&screen);
}
else {
driver.render(&self.pixel_buffer);
}
self.render_flag = false;
}
fn set_mode(&mut self, bus: &mut Bus, mode: Mode) {
self.mode = mode;
// Clear previous mode flag
self.registers.STAT &= 0xFF ^ STAT_MODE_MASK;
// Set current mode flag
self.registers.STAT |= mode as u8;
const INTERRUPT_SOURCE_FLAGS: [u8; 3] = [
STAT_HBLANK_INTERRUPT,
STAT_VBLANK_INTERRUPT,
STAT_OAM_INTERRUPT
];
match mode {
// Draw does not have an associated interrupt.
Mode::Draw => return,
Mode::VBlank => interrupt(bus, InterruptType::VBlank),
_ => {},
}
if self.registers.STAT & INTERRUPT_SOURCE_FLAGS[mode as usize] != 0 {
interrupt(bus, InterruptType::LCDStat);
}
}
pub fn reset(&mut self, bus: &mut Bus) {
self.set_mode(bus, Mode::OAM);
self.registers.LY = 0;
self.clock = 0;
}
}
impl fmt::Display for PPU {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write! {f,
concat! {
"PPU | MODE {:6?}\n",
" | LCDC {:02X} STAT {:02X}\n",
" | SCY {:02X} SCX {:02X}\n",
" | LY {:02X} LYC {:02X}\n",
" | LY {:02X} LX {:02X}\n",
" | WY {:02X} WX {:02X}\n\n",
"BGF | MODE {:?}\n"
},
self.mode,
self.registers.LCDC, self.registers.STAT, self.registers.SCY, self.registers.SCX,
self.registers.LY, self.registers.LYC, self.registers.LY, self.registers.LX,
self.registers.WY, self.registers.WX,
self.bgfifo.state,
}
}
}
impl BusListener for PPU {
fn bus_attach(&mut self) -> Vec<Attach> {
vec![
Attach::BlockRange(0x80, 0x9F), // VRAM
Attach::Block(0xFE), // OAM Sprite Memory (Note that OAM is only up to 0xFE9F)
Attach::RegisterRange(0x40, 0x4B), // LCD Position / Palettes / DMA Transfer Start Address
// Attach::Register(0x4F), // VRAM Bank Selector
// Attach::RegisterRange(0x51, 0x55), // HDMA 1-5
// Attach::RegisterRange(0x68, 0x6B), // CGB Palletes
]
}
fn bus_read(&self, address: Address) -> Byte {
// TODO: Prevent access during OAM or Draw.
match address {
0x8000..=0x9FFF => self.VRAM[(address - 0x8000) as usize],
0xFE00..=0xFE9F => self.OAM[(address - 0xFE00) as usize],
0xFEA0..=0xFEFF => 0, // This range is unusable
0xFF40 => self.registers.LCDC,
0xFF41 => self.registers.STAT,
0xFF42 => self.registers.SCY,
0xFF43 => self.registers.SCX,
//0xFF44 => 0x90, //DEBUG//
0xFF44 => self.registers.LY,
0xFF45 => self.registers.LYC,
0xFF46 => self.registers.dma_address,
0xFF47 => self.registers.BGP,
0xFF48 => self.registers.OBP0,
0xFF49 => self.registers.OBP1,
0xFF4A => self.registers.WY,
0xFF4B => self.registers.WX,
// 0xFF4F | 0xFF51..=0xFF55 | 0xFF68..=0xFF6B => 0x00, // TODO
_ => panic!("PPU Address ({:04X}) Not Implemented", address),
}
}
fn bus_write(&mut self, _bus: &mut Bus, address: Address, value: Byte) {
match address {
// 0xFF4F | 0xFF51..=0xFF55 | 0xFF68..=0xFF6B => return, // TODO
0xFEA0..=0xFEFF => return, // This range is unusable
0xFF41 => {
// Lower 3 bits of STAT are read-only mode indicators.
let stat = self.registers.STAT;
self.registers.STAT = (value & 0xF8) | (stat & 0x07);
return;
}
_ => {},
}
let ptr = match address {
0x8000..=0x9FFF => &mut self.VRAM[(address - 0x8000) as usize],
0xFE00..=0xFE9F => {
if self.mode == Mode::OAM || self.mode == Mode::Draw {
panic!("Illegal write to OAM table.");
}
&mut self.OAM[(address - 0xFE00) as usize]
},
0xFF40 => &mut self.registers.LCDC,
// 0xFF41 HANDLED ABOVE //
0xFF42 => &mut self.registers.SCY,
0xFF43 => &mut self.registers.SCX,
// 0xFF44 (LY) is READ ONLY //
0xFF45 => &mut self.registers.LYC,
0xFF47 => &mut self.registers.BGP,
0xFF48 => &mut self.registers.OBP0,
0xFF49 => &mut self.registers.OBP1,
0xFF4A => &mut self.registers.WY,
0xFF4B => &mut self.registers.WX,
// Writing to the DMA Transfer Register initializes transfer
0xFF46 => {
self.registers.dma_active = true;
self.registers.dma_counter = 0;
assert!(value <= 0xF1);
&mut self.registers.dma_address
},
_ => panic!("PPU Address ({:04X}) Not Implemented", address),
};
*ptr = value;
}
}
impl ClockListener for PPU {
fn callback(&mut self, bus: &mut Bus, cycles: u8) {
if self.registers.LCDC & LCDC_DISPLAY_ENABLE == 0 {
self.on = false;
self.clock += cycles as u16;
if SCREEN_CYCLES < self.clock {
self.clock -= SCREEN_CYCLES;
self.render_flag = true;
}
return;
}
else if !self.on {
self.reset(bus);
self.on = true;
}
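// OAM DMA copies 160 bytes (0xA0) from the source page (dma_address << 8)
// into OAM at 0xFE00, one byte per machine cycle. The loop below reuses
// DISPLAY_WIDTH, which also happens to equal 160, as the transfer length.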
// DMA Transfer Loop
for _ in 0..cycles {
// DMA may terminate in the middle of this loop.
if !self.registers.dma_active {
break;
}
let dma_counter = self.registers.dma_counter as u16;
let data = bus.read_byte(((self.registers.dma_address as Address) << 8) | dma_counter);
self.OAM[dma_counter as usize] = data;
self.registers.dma_counter += 1;
self.registers.dma_active = self.registers.dma_counter < DISPLAY_WIDTH;
}
self.clock += cycles as u16;
use Mode::*;
match self.mode {
OAM => {
for _ in 0..(cycles << 1) {
self.spfifo.scan_next_oam_table_entry(&self.OAM, &self.registers);
}
if self.clock < OAM_CYCLES {
return;
}
self.clock -= OAM_CYCLES;
self.set_mode(bus, Draw);
}
Draw => {
// Render cycle: Push pixels onto the screen.
for _ in 0..(cycles << 1) {
self.bgfifo.step(&self.VRAM, self.registers);
self.spfifo.step(&self.VRAM, self.registers);
for _ in 0..2 {
// TODO: Window Handling
if DISPLAY_WIDTH <= self.registers.LX {
break;
}
let mut pixel_index = 0u8;
match self.bgfifo.pop() {
None => break,
Some(index) => pixel_index = index,
}
let alt_palette_buffer: [u32; 4] = [0xFFFFFF, 0xCC0000, 0x440000, 0xFF0000];
let mut pixel = self.palette_buffer[pixel_index as usize];
// TODO: Sprite priority.
match self.spfifo.pop(self.registers.LX) {
None => {},
Some(index) => {
pixel_index = index;
pixel = alt_palette_buffer[pixel_index as usize];
},
}
let buffer_index = (self.registers.LY as u16 * DISPLAY_WIDTH as u16)
+ self.registers.LX as u16;
self.pixel_buffer[buffer_index as usize] = pixel;
self.registers.LX += 1;
}
}
if self.registers.LX < DISPLAY_WIDTH || self.clock < DRAW_CYCLES {
return;
}
self.clock -= DRAW_CYCLES;
self.set_mode(bus, HBlank);
}
HBlank => {
if self.clock < HBLANK_CYCLES {
return;
}
self.clock -= HBLANK_CYCLES;
if self.registers.LY == self.registers.LYC {
// Set the LYC flag
self.registers.STAT |= STAT_LYC_FLAG;
if self.registers.STAT & STAT_LYC_INTERRUPT != 0 {
interrupt(bus, InterruptType::LCDStat);
}
}
else {
// Clear the LYC flag.
self.registers.STAT &= 0xFF ^ STAT_LYC_FLAG;
}
self.bgfifo.reset(0);
self.spfifo.reset();
self.registers.LX = 0;
self.registers.LY += 1;
if self.registers.LY >= DISPLAY_HEIGHT {
self.set_mode(bus, VBlank);
}
else {
self.set_mode(bus, OAM);
}
}
VBlank => {
if self.clock < VBLANK_LINE_CYCLES {
return;
}
self.clock -= VBLANK_LINE_CYCLES;
self.registers.LY += 1;
if self.registers.LY < VIRTUAL_DISPLAY_HEIGHT {
return;
}
self.render_flag = true;
self.registers.LY = 0;
self.set_mode(bus, OAM);
}
}
}
}
mod.rs | mod fifo;
mod background_fifo;
mod sprite_fifo;
use crate::bus::*;
use crate::cpu::{interrupt, InterruptType};
use crate::clock::ClockListener;
use std::fmt;
use std::cell::RefCell;
use std::rc::Rc;
use crate::graphics_driver::GraphicsDriver;
use crate::ppu::background_fifo::BackgroundFifo;
use crate::ppu::sprite_fifo::SpriteFifo;
pub const DISPLAY_WIDTH: u8 = 160;
pub const DISPLAY_HEIGHT: u8 = 144;
const PITCH: usize = DISPLAY_WIDTH as usize * DISPLAY_HEIGHT as usize;
const VIRTUAL_DISPLAY_HEIGHT: u8 = 154;
const LCDC_DISPLAY_ENABLE: u8 = 1 << 7;
const LCDC_WINDOW_TILE_MAP_SELECT: u8 = 1 << 6;
const LCDC_WINDOW_ENABLE: u8 = 1 << 5;
const LCDC_TILE_DATA_SELECT: u8 = 1 << 4;
const LCDC_TILE_MAP_SELECT: u8 = 1 << 3;
const LCDC_SPRITE_SIZE: u8 = 1 << 2; // 1: Double height
const LCDC_SPRITE_ENABLE: u8 = 1 << 1;
// const LCDC_BG_VS_WINDOW_PRIORITY: u8 = 1 << 1;
const MAX_SPRITES_PER_LINE: usize = 10;
const STAT_LYC_INTERRUPT: u8 = 1 << 6;
const STAT_OAM_INTERRUPT: u8 = 1 << 5;
const STAT_VBLANK_INTERRUPT: u8 = 1 << 4;
const STAT_HBLANK_INTERRUPT: u8 = 1 << 3;
const STAT_LYC_FLAG: u8 = 1 << 2;
const STAT_MODE_MASK: u8 = 0x03;
const VRAM_BASE_ADDRESS: Address = 0x8000;
const TILE_MAP_LO_BASE: Address = 0x1800; // VRAM Relative Address; Bus Address 0x9800;
const TILE_MAP_HI_BASE: Address = 0x1C00; // VRAM Relative Address; Bus Address 0x9C00;
const TILE_DATA_BLOCK_BASE: [Address; 3] = [0x0000, 0x0800, 0x1000]; // VRAM Relative Addresses
// Timings from "The Ultimate Game Boy Talk"
const OAM_CYCLES: u16 = 20;
const DRAW_CYCLES: u16 = 43;
const HBLANK_CYCLES: u16 = 51;
const VBLANK_LINE_CYCLES: u16 = 114;
const SCREEN_CYCLES: u16 = VBLANK_LINE_CYCLES * VIRTUAL_DISPLAY_HEIGHT as u16;
#[repr(u8)]
#[derive(Debug, Copy, Clone, PartialEq)]
enum Mode {
HBlank = 0,
VBlank,
OAM,
Draw,
}
#[derive(Debug, Copy, Clone)]
struct Point {
x: u16,
y: u16,
}
#[derive(Debug, Copy, Clone)]
pub struct Registers {
LCDC: Byte,
STAT: Byte,
SCY: Byte,
SCX: Byte,
// LX // Fake register specifying X position of renderer
LX: Byte,
LY: Byte,
LYC: Byte,
WY: Byte,
WX: Byte,
BGP: Byte,
OBP0: Byte,
OBP1: Byte,
dma_active: bool,
dma_address: Byte,
dma_counter: Byte,
}
#[derive(Debug)]
pub struct PPU {
on: bool,
mode: Mode,
clock: u16,
pixel_buffer: [u32; PITCH],
palette_buffer: [u32; 4],
render_flag: bool,
VRAM: [Byte; 0x2000],
OAM: [Byte; 0x100],
registers: Registers,
bgfifo: BackgroundFifo,
spfifo: SpriteFifo,
}
impl PPU {
pub fn | () -> Self {
Self {
on: false,
mode: Mode::VBlank,
clock: 0,
pixel_buffer: [0x00; PITCH],
palette_buffer: [0xFFFFFF, 0xC0C0C0, 0x404040, 0x000000],
render_flag: true,
VRAM: [0; 0x2000],
OAM: [0; 0x100],
registers: Registers {
LCDC: 0,
STAT: Mode::VBlank as u8,
SCY: 0,
SCX: 0,
LX: 0,
LY: 0,
LYC: 0,
WY: 0,
WX: 0,
BGP: 0,
OBP0: 0,
OBP1: 0,
dma_active: false,
dma_counter: 0,
dma_address: 0,
},
bgfifo: BackgroundFifo::new(),
spfifo: SpriteFifo::new(),
}
}
pub fn update<'a>(&mut self, driver: &mut (dyn GraphicsDriver + 'a)) {
if!self.render_flag {
return;
}
let start = crate::graphics_driver::Point { x: 0, y: 0 };
let end = crate::graphics_driver::Point {
x: DISPLAY_WIDTH as u16,
y: DISPLAY_HEIGHT as u16,
};
if!self.on {
let screen: [u32; PITCH] = [0; PITCH];
driver.render(&screen);
}
else {
driver.render(&self.pixel_buffer);
}
self.render_flag = false;
}
fn set_mode(&mut self, bus: &mut Bus, mode: Mode) {
self.mode = mode;
// Clear previous mode flag
self.registers.STAT &= 0xFF ^ STAT_MODE_MASK;
// Set current mode flag
self.registers.STAT |= mode as u8;
const INTERRUPT_SOURCE_FLAGS: [u8; 3] = [
STAT_HBLANK_INTERRUPT,
STAT_VBLANK_INTERRUPT,
STAT_OAM_INTERRUPT
];
match mode {
// Draw does not have an associated interrupt.
Mode::Draw => return,
Mode::VBlank => interrupt(bus, InterruptType::VBlank),
_ => {},
}
if self.registers.STAT & INTERRUPT_SOURCE_FLAGS[mode as usize]!= 0 {
interrupt(bus, InterruptType::LCDStat);
}
}
pub fn reset(&mut self, bus: &mut Bus) {
self.set_mode(bus, Mode::OAM);
self.registers.LY = 0;
self.clock = 0;
}
}
impl fmt::Display for PPU {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write! {f,
concat! {
"PPU | MODE {:6?}\n",
" | LCDC {:02X} STAT {:02X}\n",
" | SCY {:02X} SCX {:02X}\n",
" | LY {:02X} LYC {:02X}\n",
" | LY {:02X} LX {:02X}\n",
" | WY {:02X} WX {:02X}\n\n",
"BGF | MODE {:?}\n"
},
self.mode,
self.registers.LCDC, self.registers.STAT, self.registers.SCY, self.registers.SCX,
self.registers.LY, self.registers.LYC, self.registers.LY, self.registers.LX,
self.registers.WY, self.registers.WX,
self.bgfifo.state,
}
}
}
impl BusListener for PPU {
fn bus_attach(&mut self) -> Vec<Attach> {
vec![
Attach::BlockRange(0x80, 0x9F), // VRAM
Attach::Block(0xFE), // OAM Sprite Memory (Note that OAM is only up to 0xFE9F)
Attach::RegisterRange(0x40, 0x4B), // LCD Position / Palettes / DMA Transfer Start Address
// Attach::Register(0x4F), // VRAM Bank Selector
// Attach::RegisterRange(0x51, 0x55), // HDMA 1-5
// Attach::RegisterRange(0x68, 0x6B), // CGB Palletes
]
}
fn bus_read(&self, address: Address) -> Byte {
// TODO: Prevent access during OAM or Draw.
match address {
0x8000..=0x9FFF => self.VRAM[(address - 0x8000) as usize],
0xFE00..=0xFE9F => self.OAM[(address - 0xFE00) as usize],
0xFEA0..=0xFEFF => 0, // This range is unusable
0xFF40 => self.registers.LCDC,
0xFF41 => self.registers.STAT,
0xFF42 => self.registers.SCY,
0xFF43 => self.registers.SCX,
//0xFF44 => 0x90, //DEBUG//
0xFF44 => self.registers.LY,
0xFF45 => self.registers.LYC,
0xFF46 => self.registers.dma_address,
0xFF47 => self.registers.BGP,
0xFF48 => self.registers.OBP0,
0xFF49 => self.registers.OBP1,
0xFF4A => self.registers.WY,
0xFF4B => self.registers.WX,
// 0xFF4F | 0xFF51..=0xFF55 | 0xFF68..=0xFF6B => 0x00, // TODO
_ => panic!("PPU Address ({:04X}) Not Implemented", address),
}
}
fn bus_write(&mut self, _bus: &mut Bus, address: Address, value: Byte) {
match address {
// 0xFF4F | 0xFF51..=0xFF55 | 0xFF68..=0xFF6B => return, // TODO
0xFEA0..=0xFEFF => return, // This range is unusable
0xFF41 => {
// Lower 3 bits of STAT are read-only mode indicators.
let stat = self.registers.STAT;
self.registers.STAT = (value & 0xF8) | (stat & 0x07);
return;
}
_ => {},
}
let ptr = match address {
0x8000..=0x9FFF => &mut self.VRAM[(address - 0x8000) as usize],
0xFE00..=0xFE9F => {
if self.mode == Mode::OAM || self.mode == Mode::Draw {
panic!("Illegal write to OAM table.");
}
&mut self.OAM[(address - 0xFE00) as usize]
},
0xFF40 => &mut self.registers.LCDC,
// 0xFF41 HANDLED ABOVE //
0xFF42 => &mut self.registers.SCY,
0xFF43 => &mut self.registers.SCX,
// 0xFF44 (LY) is READ ONLY //
0xFF45 => &mut self.registers.LYC,
0xFF47 => &mut self.registers.BGP,
0xFF48 => &mut self.registers.OBP0,
0xFF49 => &mut self.registers.OBP1,
0xFF4A => &mut self.registers.WY,
0xFF4B => &mut self.registers.WX,
// Writing to the DMA Transfer Register initializes transfer
0xFF46 => {
self.registers.dma_active = true;
self.registers.dma_counter = 0;
assert!(value <= 0xF1);
&mut self.registers.dma_address
},
_ => panic!("PPU Address ({:04X}) Not Implemented", address),
};
*ptr = value;
}
}
impl ClockListener for PPU {
fn callback(&mut self, bus: &mut Bus, cycles: u8) {
if self.registers.LCDC & LCDC_DISPLAY_ENABLE == 0 {
self.on = false;
self.clock += cycles as u16;
if SCREEN_CYCLES < self.clock {
self.clock -= SCREEN_CYCLES;
self.render_flag = true;
}
return;
}
else if!self.on {
self.reset(bus);
self.on = true;
}
// DMA Transfer Loop
for _ in 0..cycles {
// DMA may terminate in the middle of this loop.
if!self.registers.dma_active {
break;
}
let dma_counter = self.registers.dma_counter as u16;
let data = bus.read_byte(((self.registers.dma_address as Address) << 8) | dma_counter);
self.OAM[dma_counter as usize] = data;
self.registers.dma_counter += 1;
self.registers.dma_active = self.registers.dma_counter < DISPLAY_WIDTH;
}
self.clock += cycles as u16;
use Mode::*;
match self.mode {
OAM => {
for _ in 0..(cycles << 1) {
self.spfifo.scan_next_oam_table_entry(&self.OAM, &self.registers);
}
if self.clock < OAM_CYCLES {
return;
}
self.clock -= OAM_CYCLES;
self.set_mode(bus, Draw);
}
Draw => {
// Render cycle: Push pixels onto the screen.
for _ in 0..(cycles << 1) {
self.bgfifo.step(&self.VRAM, self.registers);
self.spfifo.step(&self.VRAM, self.registers);
for _ in 0..2 {
// TODO: Window Handling
if DISPLAY_WIDTH <= self.registers.LX {
break;
}
let mut pixel_index = 0u8;
match self.bgfifo.pop() {
None => break,
Some(index) => pixel_index = index,
}
let alt_palette_buffer: [u32; 4] = [0xFFFFFF, 0xCC0000, 0x440000, 0xFF0000];
let mut pixel = self.palette_buffer[pixel_index as usize];
// TODO: Sprite priority.
match self.spfifo.pop(self.registers.LX) {
None => {},
Some(index) => {
pixel_index = index;
pixel = alt_palette_buffer[pixel_index as usize];
},
}
let buffer_index = (self.registers.LY as u16 * DISPLAY_WIDTH as u16)
+ self.registers.LX as u16;
self.pixel_buffer[buffer_index as usize] = pixel;
self.registers.LX += 1;
}
}
if self.registers.LX < DISPLAY_WIDTH || self.clock < DRAW_CYCLES {
return;
}
self.clock -= DRAW_CYCLES;
self.set_mode(bus, HBlank);
}
HBlank => {
if self.clock < HBLANK_CYCLES {
return;
}
self.clock -= HBLANK_CYCLES;
if self.registers.LY == self.registers.LYC {
// Set the LYC flag
self.registers.STAT |= STAT_LYC_FLAG;
if self.registers.STAT & STAT_LYC_INTERRUPT!= 0 {
interrupt(bus, InterruptType::LCDStat);
}
}
else {
// Clear the LYC flag.
self.registers.STAT &= 0xFF ^ STAT_LYC_FLAG;
}
self.bgfifo.reset(0);
self.spfifo.reset();
self.registers.LX = 0;
self.registers.LY += 1;
if self.registers.LY >= DISPLAY_HEIGHT {
self.set_mode(bus, VBlank);
}
else {
self.set_mode(bus, OAM);
}
}
VBlank => {
if self.clock < VBLANK_LINE_CYCLES {
return;
}
self.clock -= VBLANK_LINE_CYCLES;
self.registers.LY += 1;
if self.registers.LY < VIRTUAL_DISPLAY_HEIGHT {
return;
}
self.render_flag = true;
self.registers.LY = 0;
self.set_mode(bus, OAM);
}
}
}
}
| new | identifier_name |
metadata.rs | "duration" for clarity.
/// The minimum block size (in inter-channel samples) used in the stream.
///
/// This number is independent of the number of channels. To get the minimum
/// block duration in seconds, divide this by the sample rate.
pub min_block_size: u16,
/// The maximum block size (in inter-channel samples) used in the stream.
///
/// This number is independent of the number of channels. To get the
/// maximum block duration in seconds, divide by the sample rate. To avoid
/// allocations during decoding, a buffer of this size times the number of
/// channels can be allocated up front and passed into
/// `FrameReader::read_next_or_eof()`.
pub max_block_size: u16,
/// The minimum frame size (in bytes) used in the stream.
pub min_frame_size: Option<u32>,
/// The maximum frame size (in bytes) used in the stream.
pub max_frame_size: Option<u32>,
/// The sample rate in Hz.
pub sample_rate: u32,
/// The number of channels.
pub channels: u32,
/// The number of bits per sample.
pub bits_per_sample: u32,
/// The total number of inter-channel samples in the stream.
// TODO: rename to `duration` for clarity?
pub samples: Option<u64>,
/// MD5 signature of the unencoded audio data.
pub md5sum: [u8; 16],
}
/// A seek point in the seek table.
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct SeekPoint {
/// Sample number of the first sample in the target frame, or 2<sup>64</sup> - 1 for a placeholder.
pub sample: u64,
/// Offset in bytes from the first byte of the first frame header to the first byte of the
/// target frame's header.
pub offset: u64,
/// Number of samples in the target frame.
pub samples: u16,
}
/// A seek table to aid seeking in the stream.
pub struct SeekTable {
/// The seek points, sorted in ascending order by sample number.
#[allow(dead_code)] // TODO: Implement seeking.
seekpoints: Vec<SeekPoint>,
}
/// Vorbis comments, also known as FLAC tags (e.g. artist, title, etc.).
pub struct VorbisComment {
/// The “vendor string”, chosen by the encoder vendor.
///
/// This string usually contains the name and version of the program that
/// encoded the FLAC stream, such as `reference libFLAC 1.3.2 20170101`
/// or `Lavf57.25.100`.
pub vendor: String,
/// Name-value pairs of Vorbis comments, such as `ARTIST=Queen`.
///
/// This struct stores a raw low-level representation of tags. Use
/// `FlacReader::tags()` for a friendlier iterator. The tuple consists of
/// the string in `"NAME=value"` format, and the index of the `'='` into
/// that string.
///
/// The name is supposed to be interpreted case-insensitively, and is
/// guaranteed to consist of ASCII characters. Claxon does not normalize
/// the casing of the name. Use `metadata::GetTag` to do a case-insensitive
/// lookup.
///
/// Names need not be unique. For instance, multiple `ARTIST` comments might
/// be present on a collaboration track.
///
/// See <https://www.xiph.org/vorbis/doc/v-comment.html> for more details.
pub comments: Vec<(String, usize)>,
}
/// A metadata about the flac stream.
pub enum MetadataBlock {
/// A stream info block.
StreamInfo(StreamInfo),
/// A padding block (with no meaningful data).
Padding {
/// The number of padding bytes.
length: u32,
},
/// An application block with application-specific data.
Application {
/// The registered application ID.
id: u32,
/// The contents of the application block.
data: Vec<u8>,
},
/// A seek table block.
SeekTable(SeekTable),
/// A Vorbis comment block, also known as FLAC tags.
VorbisComment(VorbisComment),
/// A CUE sheet block.
CueSheet, // TODO
/// A picture block.
Picture, // TODO
/// A block with a reserved block type, not supported by this library.
Reserved,
}
/// Iterates over Vorbis comments (FLAC tags) in a FLAC stream.
///
/// See `FlacReader::tags()` for more details.
pub struct Tags<'a> {
/// The underlying iterator.
iter: slice::Iter<'a, (String, usize)>,
}
impl<'a> Tags<'a> {
/// Returns a new `Tags` iterator.
#[inline]
pub fn new(comments: &'a [(String, usize)]) -> Tags<'a> {
Tags {
iter: comments.iter(),
}
}
}
impl<'a> Iterator for Tags<'a> {
type Item = (&'a str, &'a str);
#[inline]
fn next(&mut self) -> Option<(&'a str, &'a str)> {
return self.iter.next().map(|&(ref comment, sep_idx)| {
(&comment[..sep_idx], &comment[sep_idx+1..])
})
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
self.iter.size_hint()
}
}
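// Example (hypothetical values): for a stored pair ("ARTIST=Queen", 6), the
// iterator yields ("ARTIST", "Queen").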
impl<'a> ExactSizeIterator for Tags<'a> {}
/// Iterates over Vorbis comments looking for a specific one; returns its values as `&str`.
///
/// See `FlacReader::get_tag()` for more details.
pub struct GetTag<'a> {
/// The Vorbis comments to search through.
vorbis_comments: &'a [(String, usize)],
/// The tag to look for.
needle: &'a str,
/// The index of the (name, value) pair that should be inspected next.
index: usize,
}
impl<'a> GetTag<'a> {
/// Returns a new `GetTag` iterator.
#[inline]
pub fn new(vorbis_comments: &'a [(String, usize)], needle: &'a str) -> GetTag<'a> {
GetTag {
vorbis_comments: vorbis_comments,
needle: needle,
index: 0,
}
}
}
impl<'a> Iterator for GetTag<'a> {
type Item = &'a str;
#[inline]
fn next(&mut self) -> Option<&'a str> {
// This import is actually required on Rust 1.13.
#[allow(unused_imports)]
use std::ascii::AsciiExt;
while self.index < self.vorbis_comments.len() {
let (ref comment, sep_idx) = self.vorbis_comments[self.index];
self.index += 1;
if comment[..sep_idx].eq_ignore_ascii_case(self.needle) {
return Some(&comment[sep_idx + 1..])
}
}
return None
}
}
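// Example (hypothetical values): with a stored pair ("ARTIST=Queen", 6),
// GetTag::new(&comments, "artist").next() returns Some("Queen"); the name
// comparison is case-insensitive.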
#[inline]
fn read_metadata_block_header<R: ReadBytes>(input: &mut R) -> Result<MetadataBlockHeader> {
let byte = try!(input.read_u8());
// The first bit specifies whether this is the last block, the next 7 bits
// specify the type of the metadata block to follow.
let is_last = (byte >> 7) == 1;
let block_type = byte & 0b0111_1111;
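// e.g. a header byte of 0x84 (0b1000_0100) marks the last metadata block,
// with block type 4, a Vorbis comment.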
// The length field is 24 bits, or 3 bytes.
let length = try!(input.read_be_u24());
let header = MetadataBlockHeader {
is_last: is_last,
block_type: block_type,
length: length,
};
Ok(header)
}
/// Read a single metadata block header and body from the input.
///
/// When reading a regular flac stream, there is no need to use this function
/// directly; constructing a `FlacReader` will read the header and its metadata
/// blocks.
///
/// When a flac stream is embedded in a container format, this function can be
/// used to decode a single metadata block. For instance, the Ogg format embeds
/// metadata blocks including their header verbatim in packets. This function
/// can be used to decode that raw data.
#[inline]
pub fn read_metadata_block_with_header<R: ReadBytes>(input: &mut R)
-> Result<MetadataBlock> {
let header = try!(read_metadata_block_header(input));
read_metadata_block(input, header.block_type, header.length)
}
/// Read a single metadata block of the given type and length from the input.
///
/// When reading a regular flac stream, there is no need to use this function
/// directly; constructing a `FlacReader` will read the header and its metadata
/// blocks.
///
/// When a flac stream is embedded in a container format, this function can be
/// used to decode a single metadata block. For instance, the MP4 format sports
/// a “FLAC Specific Box” which contains the block type and the raw data. This
/// function can be used to decode that raw data.
#[inline]
pub fn read_metadata_block<R: ReadBytes>(input: &mut R,
block_type: u8,
length: u32)
-> Result<MetadataBlock> {
match block_type {
0 => {
// The streaminfo block has a fixed size of 34 bytes.
if length == 34 {
let streaminfo = try!(read_streaminfo_block(input));
Ok(MetadataBlock::StreamInfo(streaminfo))
} else {
fmt_err("invalid streaminfo metadata block length")
}
}
1 => {
try!(read_padding_block(input, length));
Ok(MetadataBlock::Padding { length: length })
}
2 => {
let (id, data) = try!(read_application_block(input, length));
Ok(MetadataBlock::Application {
id: id,
data: data,
})
}
3 => {
// TODO: implement seektable reading. For now, pretend it is padding.
try!(input.skip(length));
Ok(MetadataBlock::Padding { length: length })
}
4 => {
let vorbis_comment = try!(read_vorbis_comment_block(input, length));
Ok(MetadataBlock::VorbisComment(vorbis_comment))
}
5 => {
// TODO: implement CUE sheet reading. For now, pretend it is padding.
try!(input.skip(length));
Ok(MetadataBlock::Padding { length: length })
}
6 => {
// TODO: implement picture reading. For now, pretend it is padding.
try!(input.skip(length));
Ok(MetadataBlock::Padding { length: length })
}
127 => {
// This code is invalid to avoid confusion with a frame sync code.
fmt_err("invalid metadata block type")
}
_ => {
// Any other block type is'reserved' at the moment of writing. The
// reference implementation reads it as an 'unknown' block. That is
// one way of handling it, but maybe there should be some kind of
//'strict' mode (configurable at compile time?) so that this can
// be an error if desired.
try!(input.skip(length));
Ok(MetadataBlock::Reserved)
}
}
}
fn read_streaminfo_block<R: ReadBytes>(input: &mut R) -> Result<StreamInfo> {
let min_block_size = try!(input.read_be_u16());
let max_block_size = try!(input.read_be_u16());
// The frame size fields are 24 bits, or 3 bytes.
let min_frame_size = try!(input.read_be_u24());
let max_frame_size = try!(input.read_be_u24());
// Next up are 20 bits that determine the sample rate.
let sample_rate_msb = try!(input.read_be_u16());
let sample_rate_lsb = try!(input.read_u8());
// Stitch together the value from the first 16 bits,
// and then the 4 most significant bits of the next byte.
let sample_rate = (sample_rate_msb as u32) << 4 | (sample_rate_lsb as u32) >> 4;
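// Worked example: 44.1 kHz is 0x0AC44 in 20 bits, so the first 16 bits read
// 0x0AC4 and the top nibble of the next byte is 0x4;
// (0x0AC4 << 4) | 0x4 = 0xAC44 = 44100.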
// Next three bits are the number of channels - 1. Mask them out and add 1.
let n_channels_bps = sample_rate_lsb;
let n_channels = ((n_channels_bps >> 1) & 0b0000_0111) + 1;
// The final bit is the most significant of bits per sample - 1. Bits per
// sample - 1 is 5 bits in total.
let bps_msb = n_channels_bps & 1;
let bps_lsb_n_samples = try!(input.read_u8());
// Stitch together these values, add 1 because # - 1 is stored.
let bits_per_sample = (bps_msb << 4 | (bps_lsb_n_samples >> 4)) + 1;
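// e.g. for 16-bit audio, bits per sample - 1 = 15 = 0b01111: the MSB bit is
// 0, the next nibble is 0b1111, and (0 << 4 | 15) + 1 = 16.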
// Number of samples in 36 bits, we have 4 already, 32 to go.
let n_samples_msb = bps_lsb_n_samples & 0b0000_1111;
let n_samples_lsb = try!(input.read_be_u32());
let n_samples = (n_samples_msb as u64) << 32 | n_samples_lsb as u64;
// Next are 128 bits (16 bytes) of MD5 signature.
let mut md5sum = [0u8; 16];
try!(input.read_into(&mut md5sum));
// Lower bounds can never be larger than upper bounds. Note that 0 indicates
// unknown for the frame size. Also, the block size must be at least 16.
if min_block_size > max_block_size {
return fmt_err("inconsistent bounds, min block size > max block size");
}
if min_block_size < 16 {
return fmt_err("invalid block size, must be at least 16");
}
if min_frame_size > max_frame_size && max_frame_size != 0 {
return fmt_err("inconsistent bounds, min frame size > max frame size");
}
// A sample rate of 0 is invalid, and the maximum sample rate is limited by
// the structure of the frame headers to 655350 Hz.
if sample_rate == 0 || sample_rate > 655350 {
return fmt_err("invalid sample rate");
}
let stream_info = StreamInfo {
min_block_size: min_block_size,
max_block_size: max_block_size,
min_frame_size: if min_frame_size == 0 {
None
} else {
Some(min_frame_size)
},
max_frame_size: if max_frame_size == 0 {
None
} else {
Some(max_frame_size)
},
sample_rate: sample_rate,
channels: n_channels as u32,
bits_per_sample: bits_per_sample as u32,
samples: if n_samples == 0 {
None
} else {
Some(n_samples)
},
md5sum: md5sum,
};
Ok(stream_info)
}
fn read_vorbis_comment_block<R: ReadBytes>(input: &mut R, length: u32) -> Result<VorbisComment> {
if length < 8 {
// We expect at a minimum a 32-bit vendor string length, and a 32-bit
// comment count.
return fmt_err("Vorbis comment block is too short")
}
// Fail if the length of the Vorbis comment block is larger than 10 MiB. This
// block is full of length-prefixed strings for which we allocate memory up
// front. If there were no limit on these, a maliciously crafted file could
// cause OOM by claiming to contain large strings. But at least the strings
// cannot be longer than the size of the Vorbis comment block, and by
// limiting the size of that block, we can mitigate such DoS attacks.
//
// The typical size of a Vorbis comment block is 1 KiB; on a corpus of
// real-world flac files, the 0.05 and 0.95 quantiles were 792 and 1257
// bytes respectively, with even the 0.99 quantile below 2 KiB. The only
// reason for having a large Vorbis comment block is when cover art is
// incorrectly embedded there, but the Vorbis comment block is not the right
// place for that anyway.
if length > 10 * 1024 * 1024 {
let msg = "Vorbis comment blocks larger than 10 MiB are not supported";
return Err(Error::Unsupported(msg))
}
// The Vorbis comment block starts with a length-prefixed "vendor string".
// It cannot be larger than the block length - 8, because there are the
// 32-bit vendor string length, and comment count.
let vendor_len = try!(input.read_le_u32());
if vendor_len > length - 8 { return fmt_err("vendor string too long") }
let mut vendor_bytes = Vec::with_capacity(vendor_len as usize);
// We can safely set the length of the vector here; the uninitialized memory
// is not exposed. If `read_into` succeeds, it will have overwritten all
// bytes. If not, an error is returned and the memory is never exposed.
unsafe { vendor_bytes.set_len(vendor_len as usize); }
try!(input.read_into(&mut vendor_bytes));
let vendor = try!(String::from_utf8(vendor_bytes));
// Next up is the number of comments. Because every comment is at least 4
// bytes to indicate its length, there cannot be more comments than the
// length of the block divided by 4. This is only an upper bound to ensure
// that we don't allocate a big vector, to protect against DoS attacks.
let mut comments_len = try!(input.read_le_u32());
if comments_len >= length / 4 {
return fmt_err("too many entries for Vorbis comment block")
}
let mut comments = Vec::with_capacity(comments_len as usize);
let mut bytes_left = length - 8 - vendor_len;
// For every comment, there is a length-prefixed string of the form
// "NAME=value".
while bytes_left >= 4 && comments.len() < comments_len as usize {
let comment_len = try!(input.read_le_u32());
bytes_left -= 4;
if comment_len > bytes_left {
return fmt_err("Vorbis comment too long for Vorbis comment block")
}
// Some older versions of libflac allowed writing zero-length Vorbis
// comments. Although such files are invalid, they do occur in the wild,
// so we skip over the empty comment.
if comment_len == 0 {
// Does not overflow because `comments_len > comments.len() >= 0`.
comments_len -= 1;
continue;
}
// For the same reason as above, setting the length is safe here.
let mut comment_bytes = Vec::with_capacity(comment_len as usize);
unsafe { comment_bytes.set_len(comment_len as usize); }
try!(input.read_into(&mut comment_bytes));
bytes_left -= comment_len;
if let Some(sep_index) = comment_bytes.iter().position(|&x| x == b'=') {
{
let name_bytes = &comment_bytes[..sep_index];
// According to the Vorbis spec, the field name may consist of ascii
// bytes 0x20 through 0x7d, 0x3d (`=`) excluded. Verifying this has
// the advantage that if the check passes, the result is valid
// UTF-8, so the conversion to string will not fail.
if name_bytes.iter().any(|&x| x < 0x20 || x > 0x7d) {
return fmt_err("Vorbis comment field name contains invalid byte")
}
}
let comment = try!(String::from_utf8(comment_bytes));
comments.push((comment, sep_index));
} else {
return fmt_err("Vorbis comment does not contain '='")
}
}
if bytes_left != 0 {
return fmt_err("Vorbis comment block has excess data")
}
if comments.len() != comments_len as usize {
return fmt_err("Vorbis comment block contains wrong number of entries")
}
let vorbis_comment = VorbisComment {
vendor: vendor,
comments: comments,
};
Ok(vorbis_comment)
}
fn read_padding_block<R: ReadBytes>(input: &mut R, length: u32) -> Result<()> {
// The specification dictates that all bits of the padding block must be 0.
// However, the reference implementation does not issue an error when this
// is not the case, and frankly, when you are going to skip over these
// bytes and do nothing with them whatsoever, why waste all those CPU
// cycles checking that the padding is valid?
Ok(try!(input.skip(length)))
}
fn read_application_block<R: ReadBytes>(input: &mut R, length: u32) -> Result<(u32, Vec<u8>)> {
if length < 4 {
return fmt_err("application block length must be at least 4 bytes")
}
// Reject large application blocks to avoid memory-based denial-
// of-service attacks. See also the more elaborate motivation in
// `read_vorbis_comment_block()`.
if length > 10 * 1024 * 1024 {
let msg = "application blocks larger than 10 MiB are not supported";
return Err(Error::Unsupported(msg))
}
let id = try!(input.read_be_u32());
// Four bytes of the block have been used for the ID, the rest is payload.
// Create a vector of uninitialized memory, and read the block into it. The
// uninitialized memory is never exposed: read_into will either fill the
// buffer completely, or return an err, in which case the memory is not
// exposed.
let mut data = Vec::with_capacity(length as usize - 4);
unsafe { data.set_len(length as usize - 4); }
try!(input.read_into(&mut data));
Ok((id, data))
}
/// Reads metadata blocks from a stream and exposes them as an iterator.
///
/// It is assumed that the next byte that the reader will read, is the first
/// byte of a metadata block header. This means that the iterator will yield at
/// least a single value. If the iterator ever yields an error, then no more
/// data will be read thereafter, and the next value will be `None`.
pub struct MetadataBlockReader<R: ReadBytes> {
input: R,
done: bool,
}
/// Either a `MetadataBlock` or an `Error`.
pub type MetadataBlockResult = Result<MetadataBlock>;
impl<R: ReadBytes> MetadataBlockReader<R> {
/// Creates a metadata block reader that will yield at least one element.
pub fn new(input: R) -> MetadataBlockReader<R> {
MetadataBlockReader {
input: input,
done: false,
}
}
#[inline]
fn read_next(&mut self) -> MetadataBlockResult {
let header = try!(read_metadata_block_header(&mut self.input));
let block = try!(read_metadata_block(&mut self.input, header.block_type, header.length));
self.done = header.is_last;
Ok(block)
}
}
impl<R: ReadBytes> Iterator for MetadataBlockReader<R> {
type Item = MetadataBlockResult;
#[inline]
fn next(&mut self) -> Option<MetadataBlockResult> {
if self.done {
None
} else {
let block = self.read_next();
// After a failure, no more attempts to read will be made,
// because we don't know where we are in the stream.
if !block.is_ok() {
self.done = true;
}
Some(block)
}
}
}
// metadata.rs
use error::{Error, Result, fmt_err};
use input::ReadBytes;
use std::str;
use std::slice;
#[derive(Clone, Copy)]
struct MetadataBlockHeader {
is_last: bool,
block_type: u8,
length: u32,
}
/// The streaminfo metadata block, with important information about the stream.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct StreamInfo {
// TODO: "size" would better be called "duration" for clarity.
/// The minimum block size (in inter-channel samples) used in the stream.
///
/// This number is independent of the number of channels. To get the minimum
/// block duration in seconds, divide this by the sample rate.
pub min_block_size: u16,
/// The maximum block size (in inter-channel samples) used in the stream.
///
/// This number is independent of the number of channels. To get the
/// maximum block duration in seconds, divide by the sample rate. To avoid
/// allocations during decoding, a buffer of this size times the number of
/// channels can be allocated up front and passed into
/// `FrameReader::read_next_or_eof()`.
pub max_block_size: u16,
/// The minimum frame size (in bytes) used in the stream.
pub min_frame_size: Option<u32>,
/// The maximum frame size (in bytes) used in the stream.
pub max_frame_size: Option<u32>,
/// The sample rate in Hz.
pub sample_rate: u32,
/// The number of channels.
pub channels: u32,
/// The number of bits per sample.
pub bits_per_sample: u32,
/// The total number of inter-channel samples in the stream.
// TODO: rename to `duration` for clarity?
pub samples: Option<u64>,
/// MD5 signature of the unencoded audio data.
pub md5sum: [u8; 16],
}
/// A seek point in the seek table.
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct SeekPoint {
/// Sample number of the first sample in the target frame, or 2<sup>64</sup> - 1 for a placeholder.
pub sample: u64,
/// Offset in bytes from the first byte of the first frame header to the first byte of the
/// target frame's header.
pub offset: u64,
/// Number of samples in the target frame.
pub samples: u16,
}
/// A seek table to aid seeking in the stream.
pub struct SeekTable {
/// The seek points, sorted in ascending order by sample number.
#[allow(dead_code)] // TODO: Implement seeking.
seekpoints: Vec<SeekPoint>,
}
/// Vorbis comments, also known as FLAC tags (e.g. artist, title, etc.).
pub struct VorbisComment {
/// The “vendor string”, chosen by the encoder vendor.
///
/// This string usually contains the name and version of the program that
/// encoded the FLAC stream, such as `reference libFLAC 1.3.2 20170101`
/// or `Lavf57.25.100`.
pub vendor: String,
/// Name-value pairs of Vorbis comments, such as `ARTIST=Queen`.
///
/// This struct stores a raw low-level representation of tags. Use
/// `FlacReader::tags()` for a friendlier iterator. The tuple consists of
/// the string in `"NAME=value"` format, and the index of the `'='` into
/// that string.
///
/// The name is supposed to be interpreted case-insensitively, and is
/// guaranteed to consist of ASCII characters. Claxon does not normalize
/// the casing of the name. Use `metadata::GetTag` to do a case-insensitive
/// lookup.
///
/// Names need not be unique. For instance, multiple `ARTIST` comments might
/// be present on a collaboration track.
///
/// See <https://www.xiph.org/vorbis/doc/v-comment.html> for more details.
pub comments: Vec<(String, usize)>,
}
/// A metadata block about the flac stream.
pub enum MetadataBlock {
/// A stream info block.
StreamInfo(StreamInfo),
/// A padding block (with no meaningful data).
Padding {
/// The number of padding bytes.
length: u32,
},
/// An application block with application-specific data.
Application {
/// The registered application ID.
id: u32,
/// The contents of the application block.
data: Vec<u8>,
},
/// A seek table block.
SeekTable(SeekTable),
/// A Vorbis comment block, also known as FLAC tags.
VorbisComment(VorbisComment),
/// A CUE sheet block.
CueSheet, // TODO
/// A picture block.
Picture, // TODO
/// A block with a reserved block type, not supported by this library.
Reserved,
}
/// Iterates over Vorbis comments (FLAC tags) in a FLAC stream.
///
/// See `FlacReader::tags()` for more details.
pub struct Tags<'a> {
/// The underlying iterator.
iter: slice::Iter<'a, (String, usize)>,
}
impl<'a> Tags<'a> {
/// Returns a new `Tags` iterator.
#[inline]
pub fn new(comments: &'a [(String, usize)]) -> Tags<'a> {
Tags {
iter: comments.iter(),
}
}
}
impl<'a> Iterator for Tags<'a> {
type Item = (&'a str, &'a str);
#[inline]
fn next(&mut self) -> Option<(&'a str, &'a str)> {
return self.iter.next().map(|&(ref comment, sep_idx)| {
(&comment[..sep_idx], &comment[sep_idx+1..])
})
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
self.iter.size_hint()
}
}
impl<'a> ExactSizeIterator for Tags<'a> {}
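// The sketch below shows how the `Tags` iterator splits stored comments.
// It is illustration only; the raw `(String, usize)` pair is made up, but
// mirrors what `VorbisComment::comments` holds.
#[cfg(test)]
mod tags_example {
    use super::Tags;

    #[test]
    fn splits_comment_at_separator_index() {
        // "ARTIST=Queen": the '=' separator sits at byte index 6.
        let comments = vec![("ARTIST=Queen".to_string(), 6)];
        let mut tags = Tags::new(&comments);
        assert_eq!(tags.next(), Some(("ARTIST", "Queen")));
        assert_eq!(tags.next(), None);
    }
}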
/// Iterates over Vorbis comments looking for a specific one; returns its values as `&str`.
///
/// See `FlacReader::get_tag()` for more details.
pub struct GetTag<'a> {
/// The Vorbis comments to search through.
vorbis_comments: &'a [(String, usize)],
/// The tag to look for.
needle: &'a str,
/// The index of the (name, value) pair that should be inspected next.
index: usize,
}
impl<'a> GetTag<'a> {
/// Returns a new `GetTag` iterator.
#[inline]
pub fn new(vorbis_comments: &'a [(String, usize)], needle: &'a str) -> GetTag<'a> {
GetTag {
vorbis_comments: vorbis_comments,
needle: needle,
index: 0,
}
}
}
impl<'a> Iterator for GetTag<'a> {
type Item = &'a str;
#[inline]
fn next(&mut self) -> Option<&'a str> {
// This import is actually required on Rust 1.13.
#[allow(unused_imports)]
use std::ascii::AsciiExt;
while self.index < self.vorbis_comments.len() {
let (ref comment, sep_idx) = self.vorbis_comments[self.index];
self.index += 1;
if comment[..sep_idx].eq_ignore_ascii_case(self.needle) {
return Some(&comment[sep_idx + 1..])
}
}
return None
}
}
#[inline]
fn read_metadata_block_header<R: ReadBytes>(input: &mut R) -> Result<MetadataBlockHeader> {
let byte = try!(input.read_u8());
// The first bit specifies whether this is the last block, the next 7 bits
// specify the type of the metadata block to follow.
let is_last = (byte >> 7) == 1;
let block_type = byte & 0b0111_1111;
// The length field is 24 bits, or 3 bytes.
let length = try!(input.read_be_u24());
let header = MetadataBlockHeader {
is_last: is_last,
block_type: block_type,
length: length,
};
Ok(header)
}
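// A quick sanity check (sketch) of the header bit layout described above:
// the most significant bit flags the last block, and the low 7 bits carry
// the block type. The byte value is made up for illustration.
#[cfg(test)]
mod header_layout_example {
    #[test]
    fn unpacks_last_flag_and_block_type() {
        let byte: u8 = 0b1000_0100; // last block, block type 4 (Vorbis comment)
        assert_eq!(byte >> 7, 1);
        assert_eq!(byte & 0b0111_1111, 4);
    }
}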
/// Read a single metadata block header and body from the input.
///
/// When reading a regular flac stream, there is no need to use this function
/// directly; constructing a `FlacReader` will read the header and its metadata
/// blocks.
///
/// When a flac stream is embedded in a container format, this function can be
/// used to decode a single metadata block. For instance, the Ogg format embeds
/// metadata blocks including their header verbatim in packets. This function
/// can be used to decode that raw data.
#[inline]
pub fn read_metadata_block_with_header<R: ReadBytes>(input: &mut R)
-> Result<MetadataBlock> {
let header = try!(read_metadata_block_header(input));
read_metadata_block(input, header.block_type, header.length)
}
/// Read a single metadata block of the given type and length from the input.
///
/// When reading a regular flac stream, there is no need to use this function
/// directly; constructing a `FlacReader` will read the header and its metadata
/// blocks.
///
/// When a flac stream is embedded in a container format, this function can be
/// used to decode a single metadata block. For instance, the MP4 format sports
/// a “FLAC Specific Box” which contains the block type and the raw data. This
/// function can be used to decode that raw data.
#[inline]
pub fn read_metadata_block<R: ReadBytes>(input: &mut R,
block_type: u8,
length: u32)
-> Result<MetadataBlock> {
match block_type {
0 => {
// The streaminfo block has a fixed size of 34 bytes.
if length == 34 {
let streaminfo = try!(read_streaminfo_block(input));
Ok(MetadataBlock::StreamInfo(streaminfo))
} else {
fmt_err("invalid streaminfo metadata block length")
}
}
1 => {
try!(read_padding_block(input, length));
Ok(MetadataBlock::Padding { length: length })
}
2 => {
let (id, data) = try!(read_application_block(input, length));
Ok(MetadataBlock::Application {
id: id,
data: data,
})
}
3 => {
// TODO: implement seektable reading. For now, pretend it is padding.
try!(input.skip(length));
Ok(MetadataBlock::Padding { length: length })
}
4 => {
let vorbis_comment = try!(read_vorbis_comment_block(input, length));
Ok(MetadataBlock::VorbisComment(vorbis_comment))
}
5 => {
// TODO: implement CUE sheet reading. For now, pretend it is padding.
try!(input.skip(length));
Ok(MetadataBlock::Padding { length: length })
}
6 => {
// TODO: implement picture reading. For now, pretend it is padding.
try!(input.skip(length));
Ok(MetadataBlock::Padding { length: length })
}
127 => {
// This code is invalid to avoid confusion with a frame sync code.
fmt_err("invalid metadata block type")
}
_ => {
// Any other block type is 'reserved' at the moment of writing. The
// reference implementation reads it as an 'unknown' block. That is
// one way of handling it, but maybe there should be some kind of
// 'strict' mode (configurable at compile time?) so that this can
// be an error if desired.
try!(input.skip(length));
Ok(MetadataBlock::Reserved)
}
}
}
fn read_streaminfo_block<R: ReadBytes>(input: &mut R) -> Result<StreamInfo> {
let min_block_size = try!(input.read_be_u16());
let max_block_size = try!(input.read_be_u16());
// The frame size fields are 24 bits, or 3 bytes.
let min_frame_size = try!(input.read_be_u24());
let max_frame_size = try!(input.read_be_u24());
// Next up are 20 bits that determine the sample rate.
let sample_rate_msb = try!(input.read_be_u16());
let sample_rate_lsb = try!(input.read_u8());
// Stitch together the value from the first 16 bits,
// and then the 4 most significant bits of the next byte.
let sample_rate = (sample_rate_msb as u32) << 4 | (sample_rate_lsb as u32) >> 4;
// Next three bits are the number of channels - 1. Mask them out and add 1.
let n_channels_bps = sample_rate_lsb;
let n_channels = ((n_channels_bps >> 1) & 0b0000_0111) + 1;
// The final bit is the most significant of bits per sample - 1. Bits per
// sample - 1 is 5 bits in total.
let bps_msb = n_channels_bps & 1;
let bps_lsb_n_samples = try!(input.read_u8());
// Stitch together these values, add 1 because # - 1 is stored.
let bits_per_sample = (bps_msb << 4 | (bps_lsb_n_samples >> 4)) + 1;
// Number of samples in 36 bits, we have 4 already, 32 to go.
let n_samples_msb = bps_lsb_n_samples & 0b0000_1111;
let n_samples_lsb = try!(input.read_be_u32());
let n_samples = (n_samples_msb as u64) << 32 | n_samples_lsb as u64;
// Next are 128 bits (16 bytes) of MD5 signature.
let mut md5sum = [0u8; 16];
try!(input.read_into(&mut md5sum));
// Lower bounds can never be larger than upper bounds. Note that 0 indicates
// unknown for the frame size. Also, the block size must be at least 16.
if min_block_size > max_block_size {
return fmt_err("inconsistent bounds, min block size > max block size");
}
if min_block_size < 16 {
return fmt_err("invalid block size, must be at least 16");
}
if min_frame_size > max_frame_size && max_frame_size != 0 {
return fmt_err("inconsistent bounds, min frame size > max frame size");
}
// A sample rate of 0 is invalid, and the maximum sample rate is limited by
// the structure of the frame headers to 655350 Hz.
if sample_rate == 0 || sample_rate > 655350 {
return fmt_err("invalid sample rate");
}
let stream_info = StreamInfo {
min_block_size: min_block_size,
max_block_size: max_block_size,
min_frame_size: if min_frame_size == 0 {
None
} else {
Some(min_frame_size)
},
max_frame_size: if max_frame_size == 0 {
None
} else {
Some(max_frame_size)
},
sample_rate: sample_rate,
channels: n_channels as u32,
bits_per_sample: bits_per_sample as u32,
samples: if n_samples == 0 {
None
} else {
Some(n_samples)
},
md5sum: md5sum,
};
Ok(stream_info)
}
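// A worked example (sketch) of the bit stitching above for a hypothetical
// 44.1 kHz stereo stream: 44100 = 0xAC44, so the 20 sample rate bits are
// 0x0AC44, split across a 16-bit word and the top nibble of the next byte.
#[cfg(test)]
mod streaminfo_bits_example {
    #[test]
    fn stitches_sample_rate_and_channel_count() {
        let sample_rate_msb: u16 = 0x0AC4;
        let sample_rate_lsb: u8 = 0x42; // nibble 0x4, channel bits 001, bps msb 0
        let sample_rate = (sample_rate_msb as u32) << 4 | (sample_rate_lsb as u32) >> 4;
        assert_eq!(sample_rate, 44_100);
        let n_channels = ((sample_rate_lsb >> 1) & 0b0000_0111) + 1;
        assert_eq!(n_channels, 2);
    }
}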
fn read_vorbis_comment_block<R: ReadBytes>(input: &mut R, length: u32) -> Result<VorbisComment> {
if length < 8 {
// We expect at a minimum a 32-bit vendor string length, and a 32-bit
// comment count.
return fmt_err("Vorbis comment block is too short")
}
// Fail if the length of the Vorbis comment block is larger than 10 MiB. This
// block is full of length-prefixed strings for which we allocate memory up
// front. If there were no limit on these, a maliciously crafted file could
// cause OOM by claiming to contain large strings. But at least the strings
// cannot be longer than the size of the Vorbis comment block, and by
// limiting the size of that block, we can mitigate such DoS attacks.
//
// The typical size of the Vorbis comment block is 1 KiB; on a corpus of
// real-world flac files, the 0.05 and 0.95 quantiles were 792 and 1257
// bytes respectively, with even the 0.99 quantile below 2 KiB. The only
// reason for having a large Vorbis comment block is when cover art is
// incorrectly embedded there, but the Vorbis comment block is not the right
// place for that anyway.
if length > 10 * 1024 * 1024 {
let msg = "Vorbis comment blocks larger than 10 MiB are not supported";
return Err(Error::Unsupported(msg))
}
// The Vorbis comment block starts with a length-prefixed "vendor string".
// It cannot be larger than the block length - 8, because there are the
// 32-bit vendor string length, and comment count.
let vendor_len = try!(input.read_le_u32());
if vendor_len > length - 8 { return fmt_err("vendor string too long") }
let mut vendor_bytes = Vec::with_capacity(vendor_len as usize);
// We can safely set the length of the vector here; the uninitialized memory
// is not exposed. If `read_into` succeeds, it will have overwritten all
// bytes. If not, an error is returned and the memory is never exposed.
unsafe { vendor_bytes.set_len(vendor_len as usize); }
try!(input.read_into(&mut vendor_bytes));
let vendor = try!(String::from_utf8(vendor_bytes));
// Next up is the number of comments. Because every comment is at least 4
// bytes to indicate its length, there cannot be more comments than the
// length of the block divided by 4. This is only an upper bound to ensure
// that we don't allocate a big vector, to protect against DoS attacks.
let mut comments_len = try!(input.read_le_u32());
if comments_len >= length / 4 {
return fmt_err("too many entries for Vorbis comment block")
}
let mut comments = Vec::with_capacity(comments_len as usize);
let mut bytes_left = length - 8 - vendor_len;
// For every comment, there is a length-prefixed string of the form
// "NAME=value".
while bytes_left >= 4 && comments.len() < comments_len as usize {
let comment_len = try!(input.read_le_u32());
bytes_left -= 4;
if comment_len > bytes_left {
return fmt_err("Vorbis comment too long for Vorbis comment block")
}
// Some older versions of libflac allowed writing zero-length Vorbis
// comments. Although such files are invalid, they do occur in the wild,
// so we skip over the empty comment.
if comment_len == 0 {
// Does not overflow because `comments_len > comments.len() >= 0`.
comments_len -= 1;
continue;
}
// For the same reason as above, setting the length is safe here.
let mut comment_bytes = Vec::with_capacity(comment_len as usize);
unsafe { comment_bytes.set_len(comment_len as usize); }
try!(input.read_into(&mut comment_bytes));
bytes_left -= comment_len;
if let Some(sep_index) = comment_bytes.iter().position(|&x| x == b'=') {
{
let name_bytes = &comment_bytes[..sep_index];
// According to the Vorbis spec, the field name may consist of ascii
// bytes 0x20 through 0x7d, 0x3d (`=`) excluded. Verifying this has
// the advantage that if the check passes, the result is valid
// UTF-8, so the conversion to string will not fail.
if name_bytes.iter().any(|&x| x < 0x20 || x > 0x7d) {
return fmt_err("Vorbis comment field name contains invalid byte")
}
}
let comment = try!(String::from_utf8(comment_bytes));
comments.push((comment, sep_index));
} else {
return fmt_err("Vorbis comment does not contain '='")
}
}
if bytes_left != 0 {
return fmt_err("Vorbis comment block has excess data")
}
if comments.len() != comments_len as usize {
return fmt_err("Vorbis comment block contains wrong number of entries")
}
let vorbis_comment = VorbisComment {
vendor: vendor,
comments: comments,
};
Ok(vorbis_comment)
}
fn read_padding_block<R: ReadBytes>(input: &mut R, length: u32) -> Result<()> {
// The specification dictates that all bits of the padding block must be 0.
// However, the reference implementation does not issue an error when this
// is not the case, and frankly, when you are going to skip over these
// bytes and do nothing with them whatsoever, why waste all those CPU
// cycles checking that the padding is valid?
Ok(try!(input.skip(length)))
}
fn read_application_block<R: ReadBytes>(input: &mut R, length: u32) -> Result<(u32, Vec<u8>)> {
if length < 4 {
return fmt_err("application block length must be at least 4 bytes")
}
// Reject large application blocks to avoid memory-based denial-
// of-service attacks. See also the more elaborate motivation in
// `read_vorbis_comment_block()`.
if length > 10 * 1024 * 1024 {
let msg = "application blocks larger than 10 MiB are not supported";
return Err(Error::Unsupported(msg))
}
let id = try!(input.read_be_u32());
// Four bytes of the block have been used for the ID, the rest is payload.
// Create a vector of uninitialized memory, and read the block into it. The
// uninitialized memory is never exposed: read_into will either fill the
// buffer completely, or return an err, in which case the memory is not
// exposed.
let mut data = Vec::with_capacity(length as usize - 4);
unsafe { data.set_len(length as usize - 4); }
try!(input.read_into(&mut data));
Ok((id, data))
}
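// The reserve-then-overwrite pattern used above, shown standalone (sketch)
// with a std reader instead of `ReadBytes`. The length is set up front and
// `read_exact` either fills every byte or fails, so the uninitialized
// contents are never observed on the success path.
#[cfg(test)]
mod read_into_pattern_example {
    use std::io::{Cursor, Read};

    #[test]
    fn fills_preallocated_buffer() {
        let mut input = Cursor::new(vec![1u8, 2, 3, 4]);
        let mut data = Vec::with_capacity(4);
        unsafe { data.set_len(4); }
        input.read_exact(&mut data).unwrap();
        assert_eq!(data, [1, 2, 3, 4]);
    }
}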
/// Reads metadata blocks from a stream and exposes them as an iterator.
///
/// It is assumed that the next byte that the reader will read, is the first
/// byte of a metadata block header. This means that the iterator will yield at
/// least a single value. If the iterator ever yields an error, then no more
/// data will be read thereafter, and the next value will be `None`.
pub struct MetadataBlockReader<R: ReadBytes> {
input: R,
done: bool,
}
/// Either a `MetadataBlock` or an `Error`.
pub type MetadataBlockResult = Result<MetadataBlock>;
impl<R: ReadBytes> MetadataBlockReader<R> {
/// Creates a metadata block reader that will yield at least one element.
pub fn new(input: R) -> MetadataBlockReader<R> {
MetadataBlockReader {
input: input,
done: false,
}
}
#[inline]
fn read_next(&mut self) -> MetadataBlockResult {
let header = try!(read_metadata_block_header(&mut self.input));
let block = try!(read_metadata_block(&mut self.input, header.block_type, header.length));
self.done = header.is_last;
Ok(block)
}
}
impl<R: ReadBytes> Iterator for MetadataBlockReader<R> {
type Item = MetadataBlockResult;
#[inline]
fn next(&mut self) -> Option<MetadataBlockResult> {
if self.done {
None
} else {
let block = self.read_next();
// After a failure, no more attempts to read will be made,
// because we don't know where we are in the stream.
if !block.is_ok() {
self.done = true;
}
Some(block)
}
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
// When the reader is done there will be no more blocks; otherwise at
// least one more block can be expected.
if self.done { (0, Some(0)) } else { (1, None) }
}
}
// metadata.rs
use error::{Error, Result, fmt_err};
use input::ReadBytes;
use std::str;
use std::slice;
#[derive(Clone, Copy)]
struct MetadataBlockHeader {
is_last: bool,
block_type: u8,
length: u32,
}
/// The streaminfo metadata block, with important information about the stream.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct StreamInfo {
// TODO: "size" would better be called "duration" for clarity.
/// The minimum block size (in inter-channel samples) used in the stream.
///
/// This number is independent of the number of channels. To get the minimum
/// block duration in seconds, divide this by the sample rate.
pub min_block_size: u16,
/// The maximum block size (in inter-channel samples) used in the stream.
///
/// This number is independent of the number of channels. To get the
/// maximum block duration in seconds, divide by the sample rate. To avoid
/// allocations during decoding, a buffer of this size times the number of
/// channels can be allocated up front and passed into
/// `FrameReader::read_next_or_eof()`.
pub max_block_size: u16,
/// The minimum frame size (in bytes) used in the stream.
pub min_frame_size: Option<u32>,
/// The maximum frame size (in bytes) used in the stream.
pub max_frame_size: Option<u32>,
/// The sample rate in Hz.
pub sample_rate: u32,
/// The number of channels.
pub channels: u32,
/// The number of bits per sample.
pub bits_per_sample: u32,
/// The total number of inter-channel samples in the stream.
// TODO: rename to `duration` for clarity?
pub samples: Option<u64>,
/// MD5 signature of the unencoded audio data.
pub md5sum: [u8; 16],
}
/// A seek point in the seek table.
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct SeekPoint {
/// Sample number of the first sample in the target frame, or 2<sup>64</sup> - 1 for a placeholder.
pub sample: u64,
/// Offset in bytes from the first byte of the first frame header to the first byte of the
/// target frame's header.
pub offset: u64,
/// Number of samples in the target frame.
pub samples: u16,
}
/// A seek table to aid seeking in the stream.
pub struct SeekTable {
/// The seek points, sorted in ascending order by sample number.
#[allow(dead_code)] // TODO: Implement seeking.
seekpoints: Vec<SeekPoint>,
}
/// Vorbis comments, also known as FLAC tags (e.g. artist, title, etc.).
pub struct VorbisComment {
/// The “vendor string”, chosen by the encoder vendor.
///
/// This string usually contains the name and version of the program that
/// encoded the FLAC stream, such as `reference libFLAC 1.3.2 20170101`
/// or `Lavf57.25.100`.
pub vendor: String,
/// Name-value pairs of Vorbis comments, such as `ARTIST=Queen`.
///
/// This struct stores a raw low-level representation of tags. Use
/// `FlacReader::tags()` for a friendlier iterator. The tuple consists of
/// the string in `"NAME=value"` format, and the index of the `'='` into
/// that string.
///
/// The name is supposed to be interpreted case-insensitively, and is
/// guaranteed to consist of ASCII characters. Claxon does not normalize
/// the casing of the name. Use `metadata::GetTag` to do a case-insensitive
/// lookup.
///
/// Names need not be unique. For instance, multiple `ARTIST` comments might
/// be present on a collaboration track.
///
/// See <https://www.xiph.org/vorbis/doc/v-comment.html> for more details.
pub comments: Vec<(String, usize)>,
}
/// A metadata block about the flac stream.
pub enum MetadataBlock {
/// A stream info block.
StreamInfo(StreamInfo),
/// A padding block (with no meaningful data).
Padding {
/// The number of padding bytes.
length: u32,
},
/// An application block with application-specific data.
Application {
/// The registered application ID.
id: u32,
/// The contents of the application block.
data: Vec<u8>,
},
/// A seek table block.
SeekTable(SeekTable),
/// A Vorbis comment block, also known as FLAC tags.
VorbisComment(VorbisComment),
/// A CUE sheet block.
CueSheet, // TODO
/// A picture block.
Picture, // TODO
/// A block with a reserved block type, not supported by this library.
Reserved,
}
/// Iterates over Vorbis comments (FLAC tags) in a FLAC stream.
///
/// See `FlacReader::tags()` for more details.
pub struct Tags<'a> {
/// The underlying iterator.
iter: slice::Iter<'a, (String, usize)>,
}
impl<'a> Tags<'a> {
/// Returns a new `Tags` iterator.
#[inline]
pub fn new(comments: &'a [(String, usize)]) -> Tags<'a> {
Tags {
iter: comments.iter(),
}
}
}
impl<'a> Iterator for Tags<'a> {
type Item = (&'a str, &'a str);
#[inline]
fn next(&mut self) -> Option<(&'a str, &'a str)> {
return self.iter.next().map(|&(ref comment, sep_idx)| {
(&comment[..sep_idx], &comment[sep_idx+1..])
})
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
self.iter.size_hint()
}
}
impl<'a> ExactSizeIterator for Tags<'a> {}
/// Iterates over Vorbis comments looking for a specific one; returns its values as `&str`.
///
/// See `FlacReader::get_tag()` for more details.
pub struct GetTag<'a> {
/// The Vorbis comments to search through.
vorbis_comments: &'a [(String, usize)],
/// The tag to look for.
needle: &'a str,
/// The index of the (name, value) pair that should be inspected next.
index: usize,
}
impl<'a> GetTag<'a> {
/// Returns a new `GetTag` iterator.
#[inline]
pub fn new(vorbis_comments: &'a [(String, usize)], needle: &'a str) -> GetTag<'a> {
GetTag {
vorbis_comments: vorbis_comments,
needle: needle,
index: 0,
}
}
}
impl<'a> Iterator for GetTag<'a> {
type Item = &'a str;
#[inline]
fn next(&mut self) -> Option<&'a str> {
// This import is actually required on Rust 1.13.
#[allow(unused_imports)]
use std::ascii::AsciiExt;
while self.index < self.vorbis_comments.len() {
let (ref comment, sep_idx) = self.vorbis_comments[self.index];
self.index += 1;
if comment[..sep_idx].eq_ignore_ascii_case(self.needle) {
return Some(&comment[sep_idx + 1..])
}
}
return None
}
}
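// A short sketch of `GetTag` in action; the comments below are made up.
// The lookup is case-insensitive on the name side only.
#[cfg(test)]
mod get_tag_example {
    use super::GetTag;

    #[test]
    fn finds_values_case_insensitively() {
        let comments = vec![
            ("Artist=Queen".to_string(), 6),
            ("TITLE=Bohemian Rhapsody".to_string(), 5),
        ];
        let mut artists = GetTag::new(&comments, "ARTIST");
        assert_eq!(artists.next(), Some("Queen"));
        assert_eq!(artists.next(), None);
    }
}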
#[inline]
fn read_metadata_block_header<R: ReadBytes>(input: &mut R) -> Result<MetadataBlockHeader> {
let byte = try!(input.read_u8());
// The first bit specifies whether this is the last block, the next 7 bits
// specify the type of the metadata block to follow.
let is_last = (byte >> 7) == 1;
let block_type = byte & 0b0111_1111;
// The length field is 24 bits, or 3 bytes.
let length = try!(input.read_be_u24());
let header = MetadataBlockHeader {
is_last: is_last,
block_type: block_type,
length: length,
};
Ok(header)
}
/// Read a single metadata block header and body from the input.
///
/// When reading a regular flac stream, there is no need to use this function
/// directly; constructing a `FlacReader` will read the header and its metadata
/// blocks.
///
/// When a flac stream is embedded in a container format, this function can be
/// used to decode a single metadata block. For instance, the Ogg format embeds
/// metadata blocks including their header verbatim in packets. This function
/// can be used to decode that raw data.
#[inline]
pub fn read_metadata_block_with_header<R: ReadBytes>(input: &mut R)
-> Result<MetadataBlock> {
let header = try!(read_metadata_block_header(input));
read_metadata_block(input, header.block_type, header.length)
}
/// Read a single metadata block of the given type and length from the input.
///
/// When reading a regular flac stream, there is no need to use this function
/// directly; constructing a `FlacReader` will read the header and its metadata
/// blocks.
///
/// When a flac stream is embedded in a container format, this function can be
/// used to decode a single metadata block. For instance, the MP4 format sports
/// a “FLAC Specific Box” which contains the block type and the raw data. This
/// function can be used to decode that raw data.
#[inline]
pub fn read_metadata_block<R: ReadBytes>(input: &mut R,
block_type: u8,
length: u32)
-> Result<MetadataBlock> {
match block_type {
0 => {
// The streaminfo block has a fixed size of 34 bytes.
if length == 34 {
let streaminfo = try!(read_streaminfo_block(input));
Ok(MetadataBlock::StreamInfo(streaminfo))
} else {
fmt_err("invalid streaminfo metadata block length")
}
}
1 => {
try!(read_padding_block(input, length));
Ok(MetadataBlock::Padding { length: length })
}
2 => {
let (id, data) = try!(read_application_block(input, length));
Ok(MetadataBlock::Application {
id: id,
data: data,
})
}
3 => {
// TODO: implement seektable reading. For now, pretend it is padding.
try!(input.skip(length));
Ok(MetadataBlock::Padding { length: length })
}
4 => {
let vorbis_comment = try!(read_vorbis_comment_block(input, length));
Ok(MetadataBlock::VorbisComment(vorbis_comment))
}
5 => {
// TODO: implement CUE sheet reading. For now, pretend it is padding.
try!(input.skip(length));
Ok(MetadataBlock::Padding { length: length })
}
6 => {
// TODO: implement picture reading. For now, pretend it is padding.
try!(input.skip(length));
Ok(MetadataBlock::Padding { length: length })
}
127 => {
// This code is invalid to avoid confusion with a frame sync code.
fmt_err("invalid metadata block type")
}
_ => {
// Any other block type is 'reserved' at the moment of writing. The
// reference implementation reads it as an 'unknown' block. That is
// one way of handling it, but maybe there should be some kind of
// 'strict' mode (configurable at compile time?) so that this can
// be an error if desired.
try!(input.skip(length));
Ok(MetadataBlock::Reserved)
}
}
}
fn read_streaminfo_block<R: ReadBytes>(input: &mut R) -> Result<StreamInfo> {
let min_block_size = try!(input.read_be_u16());
let max_block_size = try!(input.read_be_u16());
// The frame size fields are 24 bits, or 3 bytes.
let min_frame_size = try!(input.read_be_u24());
let max_frame_size = try!(input.read_be_u24());
// Next up are 20 bits that determine the sample rate.
let sample_rate_msb = try!(input.read_be_u16());
let sample_rate_lsb = try!(input.read_u8());
// Stitch together the value from the first 16 bits,
// and then the 4 most significant bits of the next byte.
let sample_rate = (sample_rate_msb as u32) << 4 | (sample_rate_lsb as u32) >> 4;
// Next three bits are the number of channels - 1. Mask them out and add 1.
let n_channels_bps = sample_rate_lsb;
let n_channels = ((n_channels_bps >> 1) & 0b0000_0111) + 1;
// The final bit is the most significant of bits per sample - 1. Bits per
// sample - 1 is 5 bits in total.
let bps_msb = n_channels_bps & 1;
let bps_lsb_n_samples = try!(input.read_u8());
// Stitch together these values, add 1 because # - 1 is stored.
let bits_per_sample = (bps_msb << 4 | (bps_lsb_n_samples >> 4)) + 1;
// Number of samples in 36 bits, we have 4 already, 32 to go.
let n_samples_msb = bps_lsb_n_samples & 0b0000_1111;
let n_samples_lsb = try!(input.read_be_u32());
let n_samples = (n_samples_msb as u64) << 32 | n_samples_lsb as u64;
// Next are 128 bits (16 bytes) of MD5 signature.
let mut md5sum = [0u8; 16];
try!(input.read_into(&mut md5sum));
// Lower bounds can never be larger than upper bounds. Note that 0 indicates
// unknown for the frame size. Also, the block size must be at least 16.
if min_block_size > max_block_size {
return fmt_err("inconsistent bounds, min block size > max block size");
}
if min_block_size < 16 {
return fmt_err("invalid block size, must be at least 16");
}
if min_frame_size > max_frame_size && max_frame_size != 0 {
return fmt_err("inconsistent bounds, min frame size > max frame size");
}
// A sample rate of 0 is invalid, and the maximum sample rate is limited by
// the structure of the frame headers to 655350 Hz.
if sample_rate == 0 || sample_rate > 655350 {
return fmt_err("invalid sample rate");
}
let stream_info = StreamInfo {
min_block_size: min_block_size,
max_block_size: max_block_size,
min_frame_size: if min_frame_size == 0 {
None
} else {
Some(min_frame_size)
},
max_frame_size: if max_frame_size == 0 {
None
} else {
Some(max_frame_size)
},
sample_rate: sample_rate,
channels: n_channels as u32,
bits_per_sample: bits_per_sample as u32,
samples: if n_samples == 0 {
None
} else {
Some(n_samples)
},
md5sum: md5sum,
};
Ok(stream_info)
}
fn read_vorbis_comment_block<R: ReadBytes>(input: &mut R, length: u32) -> Result<VorbisComment> {
if length < 8 {
// We expect at a minimum a 32-bit vendor string length, and a 32-bit
// comment count.
return fmt_err("Vorbis comment block is too short")
}
// Fail if the length of the Vorbis comment block is larger than 10 MiB. This
// block is full of length-prefixed strings for which we allocate memory up
// front. If there were no limit on these, a maliciously crafted file could
// cause OOM by claiming to contain large strings. But at least the strings
// cannot be longer than the size of the Vorbis comment block, and by
// limiting the size of that block, we can mitigate such DoS attacks.
//
// The typical size of the Vorbis comment block is 1 KiB; on a corpus of
// real-world flac files, the 0.05 and 0.95 quantiles were 792 and 1257
// bytes respectively, with even the 0.99 quantile below 2 KiB. The only
// reason for having a large Vorbis comment block is when cover art is
// incorrectly embedded there, but the Vorbis comment block is not the right
// place for that anyway.
if length > 10 * 1024 * 1024 {
let msg = "Vorbis comment blocks larger than 10 MiB are not supported";
return Err(Error::Unsupported(msg))
}
// The Vorbis comment block starts with a length-prefixed "vendor string".
// It cannot be larger than the block length - 8, because there are the
// 32-bit vendor string length, and comment count.
let vendor_len = try!(input.read_le_u32());
if vendor_len > length - 8 { return fmt_err("vendor string too long") }
let mut vendor_bytes = Vec::with_capacity(vendor_len as usize);
// We can safely set the length of the vector here; the uninitialized memory
// is not exposed. If `read_into` succeeds, it will have overwritten all
// bytes. If not, an error is returned and the memory is never exposed.
unsafe { vendor_bytes.set_len(vendor_len as usize); }
try!(input.read_into(&mut vendor_bytes));
let vendor = try!(String::from_utf8(vendor_bytes));
// Next up is the number of comments. Because every comment is at least 4
// bytes to indicate its length, there cannot be more comments than the
// length of the block divided by 4. This is only an upper bound to ensure
// that we don't allocate a big vector, to protect against DoS attacks.
let mut comments_len = try!(input.read_le_u32());
if comments_len >= length / 4 {
return fmt_err("too many entries for Vorbis comment block")
}
let mut comments = Vec::with_capacity(comments_len as usize);
let mut bytes_left = length - 8 - vendor_len;
// For every comment, there is a length-prefixed string of the form
// "NAME=value".
while bytes_left >= 4 && comments.len() < comments_len as usize {
let comment_len = try!(input.read_le_u32());
bytes_left -= 4;
if comment_len > bytes_left {
return fmt_err("Vorbis comment too long for Vorbis comment block")
}
// Some older versions of libflac allowed writing zero-length Vorbis
// comments. Although such files are invalid, they do occur in the wild,
// so we skip over the empty comment.
if comment_len == 0 {
// Does not overflow because `comments_len > comments.len() >= 0`.
comments_len -= 1;
continue;
}
// For the same reason as above, setting the length is safe here.
let mut comment_bytes = Vec::with_capacity(comment_len as usize);
unsafe { comment_bytes.set_len(comment_len as usize); }
try!(input.read_into(&mut comment_bytes));
bytes_left -= comment_len;
if let Some(sep_index) = comment_bytes.iter().position(|&x| x == b'=') {
{
let name_bytes = &comment_bytes[..sep_index];
// According to the Vorbis spec, the field name may consist of ascii
// bytes 0x20 through 0x7d, 0x3d (`=`) excluded. Verifying this has
// the advantage that if the check passes, the result is valid
// UTF-8, so the conversion to string will not fail.
if name_bytes.iter().any(|&x| x < 0x20 || x > 0x7d) {
return fmt_err("Vorbis comment field name contains invalid byte")
}
}
let comment = try!(String::from_utf8(comment_bytes));
comments.push((comment, sep_index));
} else {
return fmt_err("Vorbis comment does not contain '='")
}
}
if bytes_left != 0 {
return fmt_err("Vorbis comment block has excess data")
}
if comments.len() != comments_len as usize {
return fmt_err("Vorbis comment block contains wrong number of entries")
}
let vorbis_comment = VorbisComment {
vendor: vendor,
comments: comments,
};
Ok(vorbis_comment)
}
fn read_padding_block<R: ReadBytes>(input: &mut R, length: u32) -> Result<()> {
// The specification dictates that all bits of the padding block must be 0.
// However, the reference implementation does not issue an error when this
// is not the case, and frankly, when you are going to skip over these
// bytes and do nothing with them whatsoever, why waste all those CPU
// cycles checking that the padding is valid?
Ok(try!(input.skip(length)))
}
fn read_application_block<R: ReadBytes>(input: &mut R, length: u32) -> Result<(u32, Vec<u8>)> {
if length < 4 {
return fmt_err("application block length must be at least 4 bytes")
}
// Reject large application blocks to avoid memory-based denial-
// of-service attacks. See also the more elaborate motivation in
// `read_vorbis_comment_block()`.
if length > 10 * 1024 * 1024 {
let msg = "application blocks larger than 10 MiB are not supported";
return Err(Error::Unsupported(msg))
}
let id = try!(input.read_be_u32());
// Four bytes of the block have been used for the ID, the rest is payload.
// Create a vector of uninitialized memory, and read the block into it. The
// uninitialized memory is never exposed: read_into will either fill the
// buffer completely, or return an err, in which case the memory is not
// exposed.
let mut data = Vec::with_capacity(length as usize - 4);
unsafe { data.set_len(length as usize - 4); }
try!(input.read_into(&mut data));
Ok((id, data))
}
/// Reads metadata blocks from a stream and exposes them as an iterator.
///
/// It is assumed that the next byte that the reader will read, is the first
/// byte of a metadata block header. This means that the iterator will yield at
/// least a single value. If the iterator ever yields an error, then no more
/// data will be read thereafter, and the next value will be `None`.
pub struct MetadataBlockReader<R: ReadBytes> {
input: R,
done: bool,
}
/// Either a `MetadataBlock` or an `Error`.
pub type MetadataBlockResult = Result<MetadataBlock>;
impl<R: ReadBytes> MetadataBlockReader<R> {
/// Creates a metadata block reader that will yield at least one element.
pub fn new(input: R) -> MetadataBlockReader<R> {
MetadataBlockReader {
input: input,
done: false,
}
}
#[inline]
fn read_next(&mut self) -> MetadataBlockResult {
let header = try!(read_metadata_block_header(&mut self.input));
let block = try!(read_metadata_block(&mut self.input, header.block_type, header.length));
self.done = header.is_last;
Ok(block)
}
}
impl<R: ReadBytes> Iterator for MetadataBlockReader<R> {
type Item = MetadataBlockResult;
#[inline]
fn next(&mut self) -> Option<MetadataBlockResult> {
if self.done {
None
} else {
let block = self.read_next();
// After a failure, no more attempts to read will be made,
// because we don't know where we are in the stream.
if !block.is_ok() {
self.done = true;
}
Some(block)
}
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
// When the reader is done there will be no more blocks; otherwise at
// least one more block can be expected.
if self.done { (0, Some(0)) } else { (1, None) }
}
}
// metadata.rs
use error::{Error, Result, fmt_err};
use input::ReadBytes;
use std::str;
use std::slice;
#[derive(Clone, Copy)]
struct MetadataBlockHeader {
is_last: bool,
block_type: u8,
length: u32,
}
/// The streaminfo metadata block, with important information about the stream.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct StreamInfo {
// TODO: "size" would better be called "duration" for clarity.
/// The minimum block size (in inter-channel samples) used in the stream.
///
/// This number is independent of the number of channels. To get the minimum
/// block duration in seconds, divide this by the sample rate.
pub min_block_size: u16,
/// The maximum block size (in inter-channel samples) used in the stream.
///
/// This number is independent of the number of channels. To get the
/// maximum block duration in seconds, divide by the sample rate. To avoid
/// allocations during decoding, a buffer of this size times the number of
/// channels can be allocated up front and passed into
/// `FrameReader::read_next_or_eof()`.
pub max_block_size: u16,
/// The minimum frame size (in bytes) used in the stream.
pub min_frame_size: Option<u32>,
/// The maximum frame size (in bytes) used in the stream.
pub max_frame_size: Option<u32>,
/// The sample rate in Hz.
pub sample_rate: u32,
/// The number of channels.
pub channels: u32,
/// The number of bits per sample.
pub bits_per_sample: u32,
/// The total number of inter-channel samples in the stream.
// TODO: rename to `duration` for clarity?
pub samples: Option<u64>,
/// MD5 signature of the unencoded audio data.
pub md5sum: [u8; 16],
}
/// A seek point in the seek table.
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct SeekPoint {
/// Sample number of the first sample in the target frame, or 2<sup>64</sup> - 1 for a placeholder.
pub sample: u64,
/// Offset in bytes from the first byte of the first frame header to the first byte of the
/// target frame's header.
pub offset: u64,
/// Number of samples in the target frame.
pub samples: u16,
}
/// A seek table to aid seeking in the stream.
pub struct SeekTable {
/// The seek points, sorted in ascending order by sample number.
#[allow(dead_code)] // TODO: Implement seeking.
seekpoints: Vec<SeekPoint>,
}
/// Vorbis comments, also known as FLAC tags (e.g. artist, title, etc.).
pub struct VorbisComment {
/// The “vendor string”, chosen by the encoder vendor.
///
/// This string usually contains the name and version of the program that
/// encoded the FLAC stream, such as `reference libFLAC 1.3.2 20170101`
/// or `Lavf57.25.100`.
pub vendor: String,
/// Name-value pairs of Vorbis comments, such as `ARTIST=Queen`.
///
/// This struct stores a raw low-level representation of tags. Use
/// `FlacReader::tags()` for a friendlier iterator. The tuple consists of
/// the string in `"NAME=value"` format, and the index of the `'='` into
/// that string.
///
/// The name is supposed to be interpreted case-insensitively, and is
/// guaranteed to consist of ASCII characters. Claxon does not normalize
/// the casing of the name. Use `metadata::GetTag` to do a case-insensitive
/// lookup.
///
/// Names need not be unique. For instance, multiple `ARTIST` comments might
/// be present on a collaboration track.
///
/// See <https://www.xiph.org/vorbis/doc/v-comment.html> for more details.
pub comments: Vec<(String, usize)>,
}
/// A metadata block about the flac stream.
pub enum MetadataBlock {
/// A stream info block.
StreamInfo(StreamInfo),
/// A padding block (with no meaningful data).
Padding {
/// The number of padding bytes.
length: u32,
},
/// An application block with application-specific data.
Application {
/// The registered application ID.
id: u32,
/// The contents of the application block.
data: Vec<u8>,
},
/// A seek table block.
SeekTable(SeekTable),
/// A Vorbis comment block, also known as FLAC tags.
VorbisComment(VorbisComment),
/// A CUE sheet block.
CueSheet, // TODO
/// A picture block.
Picture, // TODO
/// A block with a reserved block type, not supported by this library.
Reserved,
}
/// Iterates over Vorbis comments (FLAC tags) in a FLAC stream.
///
/// See `FlacReader::tags()` for more details.
pub struct Tags<'a> {
/// The underlying iterator.
iter: slice::Iter<'a, (String, usize)>,
}
impl<'a> Tags<'a> {
/// Returns a new `Tags` iterator.
#[inline]
pub fn new(comments: &'a [(String, usize)]) -> Tags<'a> {
Tags {
iter: comments.iter(),
}
}
}
impl<'a> Iterator for Tags<'a> {
type Item = (&'a str, &'a str);
#[inline]
fn next(&mut self) -> Option<(&'a str, &'a str)> {
return self.iter.next().map(|&(ref comment, sep_idx)| {
(&comment[..sep_idx], &comment[sep_idx+1..])
})
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
self.iter.size_hint()
}
}
impl<'a> ExactSizeIterator for Tags<'a> {}
/// Iterates over Vorbis comments looking for a specific one; returns its values as `&str`.
///
/// See `FlacReader::get_tag()` for more details.
pub struct GetTag<'a> {
/// The Vorbis comments to search through.
vorbis_comments: &'a [(String, usize)],
/// The tag to look for.
needle: &'a str,
/// The index of the (name, value) pair that should be inspected next.
index: usize,
}
impl<'a> GetTag<'a> {
/// Returns a new `GetTag` iterator.
#[inline]
pub fn new(vorbis_comments: &'a [(String, usize)], needle: &'a str) -> GetTag<'a> {
GetTag {
vorbis_comments: vorbis_comments,
needle: needle,
index: 0,
}
}
}
impl<'a> Iterator for GetTag<'a> {
type Item = &'a str;
#[inline]
fn next(&mut self) -> Option<&'a str> {
// This import is actually required on Rust 1.13.
#[allow(unused_imports)]
use std::ascii::AsciiExt;
while self.index < self.vorbis_comments.len() {
let (ref comment, sep_idx) = self.vorbis_comments[self.index];
self.index += 1;
if comment[..sep_idx].eq_ignore_ascii_case(self.needle) {
return Some(&comment[sep_idx + 1..])
}
}
return None
}
}
#[inline]
fn read_metadata_block_header<R: ReadBytes>(input: &mut R) -> Result<MetadataBlockHeader> {
let byte = try!(input.read_u8());
// The first bit specifies whether this is the last block, the next 7 bits
// specify the type of the metadata block to follow.
let is_last = (byte >> 7) == 1;
let block_type = byte & 0b0111_1111;
// The length field is 24 bits, or 3 bytes.
let length = try!(input.read_be_u24());
let header = MetadataBlockHeader {
is_last: is_last,
block_type: block_type,
length: length,
};
Ok(header)
}
/// Read a single metadata block header and body from the input.
///
/// When reading a regular flac stream, there is no need to use this function
/// directly; constructing a `FlacReader` will read the header and its metadata
/// blocks.
///
/// When a flac stream is embedded in a container format, this function can be
/// used to decode a single metadata block. For instance, the Ogg format embeds
/// metadata blocks including their header verbatim in packets. This function
/// can be used to decode that raw data.
#[inline]
pub fn read_metadata_block_with_header<R: ReadBytes>(input: &mut R)
-> Result<MetadataBlock> {
let header = try!(read_metadata_block_header(input));
read_metadata_block(input, header.block_type, header.length)
}
/// Read a single metadata block of the given type and length from the input.
///
/// When reading a regular flac stream, there is no need to use this function
/// directly; constructing a `FlacReader` will read the header and its metadata
/// blocks.
///
/// When a flac stream is embedded in a container format, this function can be
/// used to decode a single metadata block. For instance, the MP4 format sports
/// a “FLAC Specific Box” which contains the block type and the raw data. This
/// function can be used to decode that raw data.
#[inline]
pub fn read_metadata_block<R: ReadBytes>(input: &mut R,
block_type: u8,
length: u32)
-> Result<MetadataBlock> {
match block_type {
0 => {
// The streaminfo block has a fixed size of 34 bytes.
if length == 34 {
let streaminfo = try!(read_streaminfo_block(input));
Ok(MetadataBlock::StreamInfo(streaminfo))
} else {
fmt_err("invalid streaminfo metadata block length")
}
}
1 => {
try!(read_padding_block(input, length));
Ok(MetadataBlock::Padding { length: length })
}
2 => {
let (id, data) = try!(read_application_block(input, length));
Ok(MetadataBlock::Application {
id: id,
data: data,
})
}
3 => {
// TODO: implement seektable reading. For now, pretend it is padding.
try!(input.skip(length));
Ok(MetadataBlock::Padding { length: length })
}
4 => {
let vorbis_comment = try!(read_vorbis_comment_block(input, length));
Ok(MetadataBlock::VorbisComment(vorbis_comment))
}
5 => {
// TODO: implement CUE sheet reading. For now, pretend it is padding.
try!(input.skip(length));
Ok(MetadataBlock::Padding { length: length })
}
6 => {
// TODO: implement picture reading. For now, pretend it is padding.
try!(input.skip(length));
Ok(MetadataBlock::Padding { length: length })
}
127 => {
// This code is invalid to avoid confusion with a frame sync code.
fmt_err("invalid metadata block type")
}
_ => {
// Any other block type is 'reserved' at the moment of writing. The
// reference implementation reads it as an 'unknown' block. That is
// one way of handling it, but maybe there should be some kind of
// 'strict' mode (configurable at compile time?) so that this can
// be an error if desired.
try!(input.skip(length));
Ok(MetadataBlock::Reserved)
}
}
}
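// A sketch of driving `read_metadata_block` directly for a padding block.
// It assumes the crate's `input::BufferedReader` wrapper around any
// `io::Read`; four zero bytes stand in for the padding payload.
#[cfg(test)]
mod read_block_example {
    use super::{read_metadata_block, MetadataBlock};
    use input::BufferedReader;
    use std::io::Cursor;

    #[test]
    fn padding_block_is_skipped_and_reported() {
        let mut input = BufferedReader::new(Cursor::new(vec![0u8; 4]));
        match read_metadata_block(&mut input, 1, 4) {
            Ok(MetadataBlock::Padding { length: 4 }) => (),
            _ => panic!("expected a padding block of length 4"),
        }
    }
}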
fn read_streaminfo_block<R: ReadBytes>(input: &mut R) -> Result<StreamInfo> {
let min_block_size = try!(input.read_be_u16());
let max_block_size = try!(input.read_be_u16());
// The frame size fields are 24 bits, or 3 bytes.
let min_frame_size = try!(input.read_be_u24());
let max_frame_size = try!(input.read_be_u24());
// Next up are 20 bits that determine the sample rate.
let sample_rate_msb = try!(input.read_be_u16());
let sample_rate_lsb = try!(input.read_u8());
// Stitch together the value from the first 16 bits,
// and then the 4 most significant bits of the next byte.
let sample_rate = (sample_rate_msb as u32) << 4 | (sample_rate_lsb as u32) >> 4;
// Next three bits are the number of channels - 1. Mask them out and add 1.
let n_channels_bps = sample_rate_lsb;
let n_channels = ((n_channels_bps >> 1) & 0b0000_0111) + 1;
// The final bit is the most significant of bits per sample - 1. Bits per
// sample - 1 is 5 bits in total.
let bps_msb = n_channels_bps & 1;
let bps_lsb_n_samples = try!(input.read_u8());
// Stitch together these values, add 1 because # - 1 is stored.
let bits_per_sample = (bps_msb << 4 | (bps_lsb_n_samples >> 4)) + 1;
// Number of samples in 36 bits, we have 4 already, 32 to go.
let n_samples_msb = bps_lsb_n_samples & 0b0000_1111;
let n_samples_lsb = try!(input.read_be_u32());
let n_samples = (n_samples_msb as u64) << 32 | n_samples_lsb as u64;
// Next are 128 bits (16 bytes) of MD5 signature.
let mut md5sum = [0u8; 16];
try!(input.read_into(&mut md5sum));
// Lower bounds can never be larger than upper bounds. Note that 0 indicates
// unknown for the frame size. Also, the block size must be at least 16.
if min_block_size > max_block_size {
return fmt_err("inconsistent bounds, min block size > max block size");
}
if min_block_size < 16 {
return fmt_err("invalid block size, must be at least 16");
}
if min_frame_size > max_frame_size && max_frame_size != 0 {
return fmt_err("inconsistent bounds, min frame size > max frame size");
}
// A sample rate of 0 is invalid, and the maximum sample rate is limited by
// the structure of the frame headers to 655350 Hz.
if sample_rate == 0 || sample_rate > 655350 {
return fmt_err("invalid sample rate");
}
let stream_info = StreamInfo {
min_block_size: min_block_size,
max_block_size: max_block_size,
min_frame_size: if min_frame_size == 0 {
None
} else {
Some(min_frame_size)
},
max_frame_size: if max_frame_size == 0 {
None
} else {
Some(max_frame_size)
},
sample_rate: sample_rate,
channels: n_channels as u32,
bits_per_sample: bits_per_sample as u32,
samples: if n_samples == 0 {
None
} else {
Some(n_samples)
},
md5sum: md5sum,
};
Ok(stream_info)
}
fn read_vorbis_comment_block<R: ReadBytes>(input: &mut R, length: u32) -> Result<VorbisComment> {
if length < 8 {
// We expect at a minimum a 32-bit vendor string length, and a 32-bit
// comment count.
return fmt_err("Vorbis comment block is too short")
}
// Fail if the length of the Vorbis comment block is larger than 10 MiB. This
// block is full of length-prefixed strings for which we allocate memory up
// front. If there were no limit on these, a maliciously crafted file could
// cause OOM by claiming to contain large strings. But at least the strings
// cannot be longer than the size of the Vorbis comment block, and by
// limiting the size of that block, we can mitigate such DoS attacks.
//
// The typical size of the Vorbis comment block is 1 KiB; on a corpus of
// real-world flac files, the 0.05 and 0.95 quantiles were 792 and 1257
// bytes respectively, with even the 0.99 quantile below 2 KiB. The only
// reason for having a large Vorbis comment block is when cover art is
// incorrectly embedded there, but the Vorbis comment block is not the right
// place for that anyway.
if length > 10 * 1024 * 1024 {
let msg = "Vorbis comment blocks larger than 10 MiB are not supported";
return Err(Error::Unsupported(msg))
}
// The Vorbis comment block starts with a length-prefixed "vendor string".
// It cannot be larger than the block length - 8, because there are the
// 32-bit vendor string length, and comment count.
let vendor_len = try!(input.read_le_u32());
if vendor_len > length - 8 { return fmt_err("vendor string too long") }
let mut vendor_bytes = Vec::with_capacity(vendor_len as usize);
// We can safely set the length of the vector here; the uninitialized memory
// is not exposed. If `read_into` succeeds, it will have overwritten all
// bytes. If not, an error is returned and the memory is never exposed.
unsafe { vendor_bytes.set_len(vendor_len as usize); }
try!(input.read_into(&mut vendor_bytes));
let vendor = try!(String::from_utf8(vendor_bytes));
// Next up is the number of comments. Because every comment is at least 4
// bytes to indicate its length, there cannot be more comments than the
// length of the block divided by 4. This is only an upper bound to ensure
// that we don't allocate a big vector, to protect against DoS attacks.
let mut comments_len = try!(input.read_le_u32());
if comments_len >= length / 4 {
return fmt_err("too many entries for Vorbis comment block")
}
let mut comments = Vec::with_capacity(comments_len as usize);
let mut bytes_left = length - 8 - vendor_len;
// For every comment, there is a length-prefixed string of the form
// "NAME=value".
while bytes_left >= 4 && comments.len() < comments_len as usize {
let comment_len = try!(input.read_le_u32());
bytes_left -= 4;
if comment_len > bytes_left {
return fmt_err("Vorbis comment too long for Vorbis comment block")
}
// Some older versions of libflac allowed writing zero-length Vorbis
// comments. Although such files are invalid, they do occur in the wild,
// so we skip over the empty comment.
if comment_len == 0 {
// Does not overflow because `comments_len > comments.len() >= 0`.
comments_len -= 1;
continue;
}
// For the same reason as above, setting the length is safe here.
let mut comment_bytes = Vec::with_capacity(comment_len as usize);
unsafe { comment_bytes.set_len(comment_len as usize); }
try!(input.read_into(&mut comment_bytes));
bytes_left -= comment_len;
if let Some(sep_index) = comment_bytes.iter().position(|&x| x == b'=') {
{
let name_bytes = &comment_bytes[..sep_index];
// According to the Vorbis spec, the field name may consist of ascii
// bytes 0x20 through 0x7d, 0x3d (`=`) excluded. Verifying this has
// the advantage that if the check passes, the result is valid
// UTF-8, so the conversion to string will not fail.
if name_bytes.iter().any(|&x| x < 0x20 || x > 0x7d) {
return fmt_err("Vorbis comment field name contains invalid byte")
}
}
let comment = try!(String::from_utf8(comment_bytes));
comments.push((comment, sep_index));
} else {
return fmt_err("Vorbis comment does not contain '='")
}
}
if bytes_left != 0 {
return fmt_err("Vorbis comment block has excess data")
}
if comments.len() != comments_len as usize {
return fmt_err("Vorbis comment block contains wrong number of entries")
}
let vorbis_comment = VorbisComment {
vendor: vendor,
comments: comments,
};
Ok(vorbis_comment)
}
fn read_padding_block<R: ReadBytes>(input: &mut R, length: u32) -> Result<()> {
// The specification dictates that all bits of the padding block must be 0.
// However, the reference implementation does not issue an error when this
// is not the case, and frankly, when you are going to skip over these
// bytes and do nothing with them whatsoever, why waste all those CPU
// cycles checking that the padding is valid?
Ok(try!(input.skip(length)))
}
fn read_application_block<R: ReadBytes>(input: &mut R, length: u32) -> Result<(u32, Vec<u8>)> {
if length < 4 {
return fmt_err("application block length must be at least 4 bytes")
}
// Reject large application blocks to avoid memory-based denial-
// of-service attacks. See also the more elaborate motivation in
// `read_vorbis_comment_block()`.
if length > 10 * 1024 * 1024 {
let msg = "application blocks larger than 10 MiB are not supported";
return Err(Error::Unsupported(msg))
}
let id = try!(input.read_be_u32());
// Four bytes of the block have been used for the ID, the rest is payload.
// Create a vector of uninitialized memory, and read the block into it. The
// uninitialized memory is never exposed: read_into will either fill the
// buffer completely, or return an err, in which case the memory is not
// exposed.
let mut data = Vec::with_capacity(length as usize - 4);
unsafe { data.set_len(length as usize - 4); }
try!(input.read_into(&mut data));
Ok((id, data))
}
/// Reads metadata blocks from a stream and exposes them as an iterator.
///
/// It is assumed that the next byte that the reader will read, is the first
/// byte of a metadata block header. This means that the iterator will yield at
/// least a single value. If the iterator ever yields an error, then no more
/// data will be read thereafter, and the next value will be `None`.
pub struct MetadataBlockReader<R: ReadBytes> {
input: R,
done: bool,
}
/// Either a `MetadataBlock` or an `Error`.
pub type MetadataBlockResult = Result<MetadataBlock>;
impl<R: ReadBytes> MetadataBlockReader<R> {
/// Creates a metadata block reader that will yield at least one element.
pub fn new(input: R) -> MetadataBlockReader<R> {
MetadataBlockReader {
input: input,
done: false,
}
}
#[inline]
fn read_next(&mut self) -> MetadataBlockResult {
let header = try!(read_metadata_block_header(&mut self.input));
let block = try!(read_metadata_block(&mut self.input, header.block_type, header.length));
self.done = header.is_last;
Ok(block)
}
}
impl<R: ReadBytes> Iterator for MetadataBlockReader<R> {
type Item = MetadataBlockResult;
#[inline]
fn next(&mut self) -> Option<MetadataBlockResult> {
if self.done {
None
} else {
let block = self.read_next();
// After a failure, no more attempts to read will be made,
// because we don't know where we are in the stream.
if !block.is_ok() {
self.done = true;
}
Some(block)
}
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
// When the reader is done there will be no more blocks; otherwise at
// least one more block can be expected.
if self.done { (0, Some(0)) } else { (1, None) }
}
}
// offline_replica.rs
// Copyright 2023 MaidSafe.net limited.
//
// This SAFE Network Software is licensed to you under The General Public License (GPL), version 3.
// Unless required by applicable law or agreed to in writing, the SAFE Network Software distributed
// under the GPL Licence is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. Please review the Licences for the specific language governing
// permissions and limitations relating to use of the SAFE Network Software.
use super::{
super::error::{Error, Result},
Client, Register,
};
use crate::protocol::{
address::RegisterAddress,
authority::DataAuthority,
error::Error as ProtocolError,
messages::{
Cmd, CmdResponse, CreateRegister, EditRegister, Query, QueryResponse, RegisterCmd,
RegisterQuery, Request, Response, SignedRegisterCreate, SignedRegisterEdit,
},
register::{Action, Entry, EntryHash, Permissions, Policy, Register as RegisterReplica, User},
};
use bincode::serialize;
use std::collections::{BTreeSet, LinkedList};
use xor_name::XorName;
/// Ops made to an offline Register instance are applied locally only,
/// and accumulated until the user explicitly calls 'sync'. The user can
/// switch back to syncing with the network on every op by invoking the `online` API.
pub struct RegisterOffline {
client: Client,
register: RegisterReplica,
ops: LinkedList<RegisterCmd>, // Cached operations.
}
impl RegisterOffline {
/// Create a new Register offline.
pub fn create(client: Client, name: XorName, tag: u64) -> Result<Self> {
Self::new(client, name, tag)
}
/// Retrieve a Register from the network to work on it offline.
pub(super) async fn retrieve(client: Client, name: XorName, tag: u64) -> Result<Self> {
let register = Self::get_register(&client, name, tag).await?;
Ok(Self {
client,
register,
ops: LinkedList::new(),
})
}
/// Instantiate a RegisterOffline from a given Register instance.
pub(super) fn from(replica: Register) -> Self {
Self {
client: replica.offline_reg.client,
register: replica.offline_reg.register,
ops: LinkedList::new(),
}
}
/// Switch to 'online' mode, where each op made locally is immediately pushed to the network.
pub async fn online(mut self) -> Result<Register> {
self.push().await?;
Ok(Register { offline_reg: self })
}
/// Return the Policy of the Register.
pub fn policy(&self) -> &Policy {
self.register.policy()
}
/// Return the XorName of the Register.
pub fn name(&self) -> &XorName {
self.register.name()
}
/// Return the tag value of the Register.
pub fn tag(&self) -> u64 {
self.register.tag()
}
/// Return the number of items held in the register
pub fn size(&self) -> u64 {
self.register.size()
}
/// Return a value corresponding to the provided 'hash', if present.
pub fn get(&self, hash: EntryHash) -> Result<&Entry> {
let entry = self.register.get(hash)?;
Ok(entry)
}
/// Read the last entry, or entries when there are branches, if the register is not empty.
pub fn read(&self) -> BTreeSet<(EntryHash, Entry)> {
self.register.read()
}
/// Write a new value onto the Register atop latest value.
/// It returns an error if it finds branches in the content/entries; if it is
/// required to merge/resolve the branches, invoke the `write_merging_branches` API.
pub fn | (&mut self, entry: &[u8]) -> Result<()> {
let children = self.register.read();
if children.len() > 1 {
return Err(Error::ContentBranchDetected(children));
}
self.write_atop(entry, children.into_iter().map(|(hash, _)| hash).collect())
}
/// Write a new value onto the Register atop latest value.
/// If there are branches of content/entries, it automatically merges them
/// all leaving the new value as a single latest value of the Register.
/// Note you can use the `write` API instead if you need to handle
/// content/entries branches in a different way.
pub fn write_merging_branches(&mut self, entry: &[u8]) -> Result<()> {
let children: BTreeSet<EntryHash> = self
.register
.read()
.into_iter()
.map(|(hash, _)| hash)
.collect();
self.write_atop(entry, children)
}
/// Write a new value onto the Register atop the set of branches/entries
/// referenced by the provided list of their corresponding entry hash.
/// Note you can use the `write_merging_branches` API instead if you
/// want to write atop all existing branches/entries.
pub fn write_atop(&mut self, entry: &[u8], children: BTreeSet<EntryHash>) -> Result<()> {
// we need to check permissions first
let public_key = self.client.signer_pk();
self.register
.check_permissions(Action::Write, Some(User::Key(public_key)))?;
let (_hash, edit) = self.register.write(entry.into(), children)?;
let op = EditRegister {
address: *self.register.address(),
edit,
};
let auth = DataAuthority {
public_key,
signature: self.client.sign(&serialize(&op)?),
};
let cmd = RegisterCmd::Edit(SignedRegisterEdit { op, auth });
self.ops.push_front(cmd);
Ok(())
}
/// Sync this Register with the replicas on the network.
pub async fn sync(&mut self) -> Result<()> {
debug!("Syncing Register at {}, {}!", self.name(), self.tag(),);
// FIXME: handle the scenario where the Register doesn't exist on the network yet
let remote_replica = Self::get_register(&self.client, *self.name(), self.tag()).await?;
self.register.merge(remote_replica);
self.push().await
}
/// Push all operations made locally to the replicas of this Register on the network.
pub async fn push(&mut self) -> Result<()> {
let ops_len = self.ops.len();
if ops_len > 0 {
let name = *self.name();
let tag = self.tag();
debug!("Pushing {ops_len} cached Register cmds at {name}, {tag}!",);
// TODO: send them all concurrently
while let Some(cmd) = self.ops.pop_back() {
let result = match cmd {
RegisterCmd::Create { .. } => self.publish_register_create(cmd.clone()).await,
RegisterCmd::Edit { .. } => self.publish_register_edit(cmd.clone()).await,
};
if let Err(err) = result {
warn!("Did not push Register cmd on all nodes in the close group!: {err}");
// We keep the cmd for next sync to retry
self.ops.push_back(cmd);
return Err(err);
}
}
debug!("Successfully pushed {ops_len} Register cmds at {name}, {tag}!",);
}
Ok(())
}
// ********* Private helpers *********
// Create a new RegisterOffline instance with the given name and tag.
fn new(client: Client, name: XorName, tag: u64) -> Result<Self> {
let public_key = client.signer_pk();
let owner = User::Key(public_key);
let policy = Policy {
owner,
permissions: [(User::Anyone, Permissions::new(true))]
.into_iter()
.collect(),
};
let op = CreateRegister {
name,
tag,
policy: policy.clone(),
};
let auth = DataAuthority {
public_key,
signature: client.sign(&serialize(&op)?),
};
let create_cmd = RegisterCmd::Create(SignedRegisterCreate { op, auth });
let register = RegisterReplica::new(owner, name, tag, policy);
let reg = Self {
client,
register,
ops: LinkedList::from([create_cmd]),
};
Ok(reg)
}
// Publish a `Register` creation command on the network.
async fn publish_register_create(&self, cmd: RegisterCmd) -> Result<()> {
debug!("Publishing Register create cmd: {:?}", cmd.dst());
let request = Request::Cmd(Cmd::Register(cmd));
let responses = self.client.send_to_closest(request).await?;
let all_ok = responses
.iter()
.all(|resp| matches!(resp, Ok(Response::Cmd(CmdResponse::CreateRegister(Ok(()))))));
if all_ok {
return Ok(());
}
// If not all were Ok, we will return the first error sent to us.
for resp in responses.iter().flatten() {
if let Response::Cmd(CmdResponse::CreateRegister(result)) = resp {
result.clone()?;
};
}
// If there were no success or fail to the expected query,
// we check if there were any send errors.
for resp in responses {
let _ = resp?;
}
// If there were no register errors, then we had unexpected responses.
Err(Error::Protocol(ProtocolError::UnexpectedResponses))
}
// Publish a `Register` edit command in the network.
async fn publish_register_edit(&self, cmd: RegisterCmd) -> Result<()> {
debug!("Publishing Register edit cmd: {:?}", cmd.dst());
let request = Request::Cmd(Cmd::Register(cmd));
let responses = self.client.send_to_closest(request).await?;
let all_ok = responses
.iter()
.all(|resp| matches!(resp, Ok(Response::Cmd(CmdResponse::EditRegister(Ok(()))))));
if all_ok {
return Ok(());
}
// If not all were Ok, we will return the first error sent to us.
for resp in responses.iter().flatten() {
if let Response::Cmd(CmdResponse::EditRegister(result)) = resp {
result.clone()?;
};
}
// If there were no success or fail to the expected query,
// we check if there were any send errors.
for resp in responses {
let _ = resp?;
}
// If there were no register errors, then we had unexpected responses.
Err(Error::Protocol(ProtocolError::UnexpectedResponses))
}
// Retrieve a `Register` from the closest peers.
async fn get_register(client: &Client, name: XorName, tag: u64) -> Result<RegisterReplica> {
let address = RegisterAddress { name, tag };
debug!("Retrieving Register from: {address:?}");
let request = Request::Query(Query::Register(RegisterQuery::Get(address)));
let responses = client.send_to_closest(request).await?;
// We will return the first register we get.
for resp in responses.iter().flatten() {
if let Response::Query(QueryResponse::GetRegister(Ok(register))) = resp {
return Ok(register.clone());
};
}
// If no register was gotten, we will return the first error sent to us.
for resp in responses.iter().flatten() {
if let Response::Query(QueryResponse::GetChunk(result)) = resp {
let _ = result.clone()?;
};
}
// If there were no success or fail to the expected query,
// we check if there were any send errors.
for resp in responses {
let _ = resp?;
}
// If there was none of the above, then we had unexpected responses.
Err(Error::Protocol(ProtocolError::UnexpectedResponses))
}
}
| write | identifier_name |
offline_replica.rs | // Copyright 2023 MaidSafe.net limited.
//
// This SAFE Network Software is licensed to you under The General Public License (GPL), version 3.
// Unless required by applicable law or agreed to in writing, the SAFE Network Software distributed
// under the GPL Licence is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. Please review the Licences for the specific language governing
// permissions and limitations relating to use of the SAFE Network Software.
use super::{
super::error::{Error, Result},
Client, Register,
};
use crate::protocol::{
address::RegisterAddress,
authority::DataAuthority,
error::Error as ProtocolError,
messages::{
Cmd, CmdResponse, CreateRegister, EditRegister, Query, QueryResponse, RegisterCmd,
RegisterQuery, Request, Response, SignedRegisterCreate, SignedRegisterEdit,
},
register::{Action, Entry, EntryHash, Permissions, Policy, Register as RegisterReplica, User},
};
use bincode::serialize;
use std::collections::{BTreeSet, LinkedList};
use xor_name::XorName;
/// Ops made to an offline Register instance are applied locally only,
/// and accumulated until the user explicitly calls `sync`. The user can
/// switch back to syncing with the network on every op by invoking the `online` API.
pub struct RegisterOffline {
client: Client,
register: RegisterReplica,
ops: LinkedList<RegisterCmd>, // Cached operations.
}
impl RegisterOffline {
/// Create a new Register offline.
pub fn create(client: Client, name: XorName, tag: u64) -> Result<Self> {
Self::new(client, name, tag)
}
/// Retrieve a Register from the network to work on it offline.
pub(super) async fn retrieve(client: Client, name: XorName, tag: u64) -> Result<Self> {
let register = Self::get_register(&client, name, tag).await?;
Ok(Self {
client,
register,
ops: LinkedList::new(),
})
}
/// Instantiate a RegisterOffline from a given Register instance.
pub(super) fn from(replica: Register) -> Self {
Self {
client: replica.offline_reg.client,
register: replica.offline_reg.register,
ops: LinkedList::new(),
}
}
/// Switch to 'online' mode, where each op made locally is immediately pushed to the network.
pub async fn online(mut self) -> Result<Register> {
self.push().await?;
Ok(Register { offline_reg: self })
}
/// Return the Policy of the Register.
pub fn policy(&self) -> &Policy {
self.register.policy()
}
/// Return the XorName of the Register.
pub fn name(&self) -> &XorName {
self.register.name()
}
/// Return the tag value of the Register.
pub fn tag(&self) -> u64 {
self.register.tag()
}
/// Return the number of items held in the register
pub fn size(&self) -> u64 {
self.register.size()
}
/// Return a value corresponding to the provided 'hash', if present.
pub fn get(&self, hash: EntryHash) -> Result<&Entry> {
let entry = self.register.get(hash)?;
Ok(entry)
}
/// Read the last entry, or entries when there are branches, if the register is not empty.
pub fn read(&self) -> BTreeSet<(EntryHash, Entry)> {
self.register.read()
}
/// Write a new value onto the Register atop latest value.
/// It returns an error if it finds branches in the content/entries; if it is
/// required to merge/resolve the branches, invoke the `write_merging_branches` API.
pub fn write(&mut self, entry: &[u8]) -> Result<()> {
let children = self.register.read();
if children.len() > 1 {
return Err(Error::ContentBranchDetected(children));
}
self.write_atop(entry, children.into_iter().map(|(hash, _)| hash).collect())
}
/// Write a new value onto the Register atop latest value.
/// If there are branches of content/entries, it automatically merges them
/// all leaving the new value as a single latest value of the Register.
/// Note you can use the `write` API instead if you need to handle
/// content/entries branches in a different way.
pub fn write_merging_branches(&mut self, entry: &[u8]) -> Result<()> {
let children: BTreeSet<EntryHash> = self
.register
.read()
.into_iter()
.map(|(hash, _)| hash)
.collect();
self.write_atop(entry, children)
}
/// Write a new value onto the Register atop the set of branches/entries
/// referenced by the provided list of their corresponding entry hash.
/// Note you can use the `write_merging_branches` API instead if you
/// want to write atop all existing branches/entries.
pub fn write_atop(&mut self, entry: &[u8], children: BTreeSet<EntryHash>) -> Result<()> {
// we need to check permissions first
let public_key = self.client.signer_pk();
self.register
.check_permissions(Action::Write, Some(User::Key(public_key)))?;
let (_hash, edit) = self.register.write(entry.into(), children)?;
let op = EditRegister {
address: *self.register.address(),
edit,
};
let auth = DataAuthority {
public_key,
signature: self.client.sign(&serialize(&op)?),
};
let cmd = RegisterCmd::Edit(SignedRegisterEdit { op, auth });
self.ops.push_front(cmd);
Ok(())
}
/// Sync this Register with the replicas on the network.
pub async fn sync(&mut self) -> Result<()> {
debug!("Syncing Register at {}, {}!", self.name(), self.tag(),);
// FIXME: handle the scenario where the Register doesn't exist on the network yet
let remote_replica = Self::get_register(&self.client, *self.name(), self.tag()).await?;
self.register.merge(remote_replica);
self.push().await
}
/// Push all operations made locally to the replicas of this Register on the network.
pub async fn push(&mut self) -> Result<()> {
let ops_len = self.ops.len();
if ops_len > 0 {
let name = *self.name();
let tag = self.tag();
debug!("Pushing {ops_len} cached Register cmds at {name}, {tag}!",);
// TODO: send them all concurrently
while let Some(cmd) = self.ops.pop_back() {
let result = match cmd {
RegisterCmd::Create { .. } => self.publish_register_create(cmd.clone()).await,
RegisterCmd::Edit { .. } => self.publish_register_edit(cmd.clone()).await,
};
if let Err(err) = result {
warn!("Did not push Register cmd on all nodes in the close group!: {err}");
// We keep the cmd for next sync to retry
self.ops.push_back(cmd);
return Err(err);
}
}
debug!("Successfully pushed {ops_len} Register cmds at {name}, {tag}!",);
}
Ok(())
}
// ********* Private helpers *********
// Create a new RegisterOffline instance with the given name and tag.
fn new(client: Client, name: XorName, tag: u64) -> Result<Self> {
let public_key = client.signer_pk();
let owner = User::Key(public_key);
let policy = Policy {
owner,
permissions: [(User::Anyone, Permissions::new(true))]
.into_iter()
.collect(),
};
let op = CreateRegister {
name,
tag,
policy: policy.clone(),
};
let auth = DataAuthority {
public_key,
signature: client.sign(&serialize(&op)?),
};
let create_cmd = RegisterCmd::Create(SignedRegisterCreate { op, auth });
let register = RegisterReplica::new(owner, name, tag, policy);
let reg = Self {
client,
register,
ops: LinkedList::from([create_cmd]),
};
Ok(reg)
}
// Publish a `Register` creation command on the network.
async fn publish_register_create(&self, cmd: RegisterCmd) -> Result<()> {
debug!("Publishing Register create cmd: {:?}", cmd.dst());
let request = Request::Cmd(Cmd::Register(cmd));
let responses = self.client.send_to_closest(request).await?;
let all_ok = responses
.iter()
.all(|resp| matches!(resp, Ok(Response::Cmd(CmdResponse::CreateRegister(Ok(()))))));
if all_ok {
return Ok(());
}
// If not all were Ok, we will return the first error sent to us.
for resp in responses.iter().flatten() {
if let Response::Cmd(CmdResponse::CreateRegister(result)) = resp {
result.clone()?;
};
}
// If there were no success or fail to the expected query,
// we check if there were any send errors.
for resp in responses {
let _ = resp?;
}
// If there were no register errors, then we had unexpected responses.
Err(Error::Protocol(ProtocolError::UnexpectedResponses))
}
// Publish a `Register` edit command in the network.
async fn publish_register_edit(&self, cmd: RegisterCmd) -> Result<()> {
debug!("Publishing Register edit cmd: {:?}", cmd.dst());
let request = Request::Cmd(Cmd::Register(cmd));
let responses = self.client.send_to_closest(request).await?;
let all_ok = responses
.iter()
.all(|resp| matches!(resp, Ok(Response::Cmd(CmdResponse::EditRegister(Ok(()))))));
if all_ok |
// If not all were Ok, we will return the first error sent to us.
for resp in responses.iter().flatten() {
if let Response::Cmd(CmdResponse::EditRegister(result)) = resp {
result.clone()?;
};
}
// If there were no success or fail to the expected query,
// we check if there were any send errors.
for resp in responses {
let _ = resp?;
}
// If there were no register errors, then we had unexpected responses.
Err(Error::Protocol(ProtocolError::UnexpectedResponses))
}
// Retrieve a `Register` from the closest peers.
async fn get_register(client: &Client, name: XorName, tag: u64) -> Result<RegisterReplica> {
let address = RegisterAddress { name, tag };
debug!("Retrieving Register from: {address:?}");
let request = Request::Query(Query::Register(RegisterQuery::Get(address)));
let responses = client.send_to_closest(request).await?;
// We will return the first register we get.
for resp in responses.iter().flatten() {
if let Response::Query(QueryResponse::GetRegister(Ok(register))) = resp {
return Ok(register.clone());
};
}
// If no register was gotten, we will return the first error sent to us.
for resp in responses.iter().flatten() {
if let Response::Query(QueryResponse::GetChunk(result)) = resp {
let _ = result.clone()?;
};
}
// If there were no success or fail to the expected query,
// we check if there were any send errors.
for resp in responses {
let _ = resp?;
}
// If there was none of the above, then we had unexpected responses.
Err(Error::Protocol(ProtocolError::UnexpectedResponses))
}
}
| {
return Ok(());
} | conditional_block |
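The two publish helpers in the row above share the shape that this row's masked `conditional_block` completes: succeed only if every response succeeded, otherwise surface the first error sent back. A generic sketch of that shape, with assumed names (`check_all_ok` is not part of the original API):

fn check_all_ok<T, E: Clone>(responses: &[std::result::Result<T, E>]) -> std::result::Result<(), E> {
    // Succeed only when every response is Ok.
    if responses.iter().all(|resp| resp.is_ok()) {
        return Ok(());
    }
    // Otherwise return the first error that was sent to us.
    let first_err = responses
        .iter()
        .find_map(|resp| resp.as_ref().err())
        .expect("at least one response failed");
    Err(first_err.clone())
}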
offline_replica.rs | // Copyright 2023 MaidSafe.net limited.
//
// This SAFE Network Software is licensed to you under The General Public License (GPL), version 3.
// Unless required by applicable law or agreed to in writing, the SAFE Network Software distributed
// under the GPL Licence is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. Please review the Licences for the specific language governing
// permissions and limitations relating to use of the SAFE Network Software.
use super::{
super::error::{Error, Result},
Client, Register,
};
use crate::protocol::{
address::RegisterAddress,
authority::DataAuthority,
error::Error as ProtocolError,
messages::{
Cmd, CmdResponse, CreateRegister, EditRegister, Query, QueryResponse, RegisterCmd,
RegisterQuery, Request, Response, SignedRegisterCreate, SignedRegisterEdit,
},
register::{Action, Entry, EntryHash, Permissions, Policy, Register as RegisterReplica, User},
};
use bincode::serialize;
use std::collections::{BTreeSet, LinkedList};
use xor_name::XorName;
/// Ops made to an offline Register instance are applied locally only,
/// and accumulated until the user explicitly calls `sync`. The user can
/// switch back to syncing with the network on every op by invoking the `online` API.
pub struct RegisterOffline {
client: Client,
register: RegisterReplica,
ops: LinkedList<RegisterCmd>, // Cached operations.
}
impl RegisterOffline {
/// Create a new Register offline.
pub fn create(client: Client, name: XorName, tag: u64) -> Result<Self> {
Self::new(client, name, tag)
}
/// Retrieve a Register from the network to work on it offline.
pub(super) async fn retrieve(client: Client, name: XorName, tag: u64) -> Result<Self> {
let register = Self::get_register(&client, name, tag).await?;
Ok(Self {
client,
register,
ops: LinkedList::new(),
})
}
/// Instantiate a RegisterOffline from a given Register instance.
pub(super) fn from(replica: Register) -> Self {
Self {
client: replica.offline_reg.client,
register: replica.offline_reg.register,
ops: LinkedList::new(),
}
}
/// Switch to 'online' mode, where each op made locally is immediately pushed to the network.
pub async fn online(mut self) -> Result<Register> {
self.push().await?;
Ok(Register { offline_reg: self })
}
/// Return the Policy of the Register.
pub fn policy(&self) -> &Policy {
self.register.policy()
}
/// Return the XorName of the Register.
pub fn name(&self) -> &XorName {
self.register.name()
}
/// Return the tag value of the Register.
pub fn tag(&self) -> u64 |
/// Return the number of items held in the register
pub fn size(&self) -> u64 {
self.register.size()
}
/// Return a value corresponding to the provided 'hash', if present.
pub fn get(&self, hash: EntryHash) -> Result<&Entry> {
let entry = self.register.get(hash)?;
Ok(entry)
}
/// Read the last entry, or entries when there are branches, if the register is not empty.
pub fn read(&self) -> BTreeSet<(EntryHash, Entry)> {
self.register.read()
}
/// Write a new value onto the Register atop latest value.
/// It returns an error if it finds branches in the content/entries; if it is
/// required to merge/resolve the branches, invoke the `write_merging_branches` API.
pub fn write(&mut self, entry: &[u8]) -> Result<()> {
let children = self.register.read();
if children.len() > 1 {
return Err(Error::ContentBranchDetected(children));
}
self.write_atop(entry, children.into_iter().map(|(hash, _)| hash).collect())
}
/// Write a new value onto the Register atop latest value.
/// If there are branches of content/entries, it automatically merges them
/// all leaving the new value as a single latest value of the Register.
/// Note you can use the `write` API instead if you need to handle
/// content/entries branches in a different way.
pub fn write_merging_branches(&mut self, entry: &[u8]) -> Result<()> {
let children: BTreeSet<EntryHash> = self
.register
.read()
.into_iter()
.map(|(hash, _)| hash)
.collect();
self.write_atop(entry, children)
}
/// Write a new value onto the Register atop the set of branches/entries
/// referenced by the provided list of their corresponding entry hash.
/// Note you can use the `write_merging_branches` API instead if you
/// want to write atop all existing branches/entries.
pub fn write_atop(&mut self, entry: &[u8], children: BTreeSet<EntryHash>) -> Result<()> {
// we need to check permissions first
let public_key = self.client.signer_pk();
self.register
.check_permissions(Action::Write, Some(User::Key(public_key)))?;
let (_hash, edit) = self.register.write(entry.into(), children)?;
let op = EditRegister {
address: *self.register.address(),
edit,
};
let auth = DataAuthority {
public_key,
signature: self.client.sign(&serialize(&op)?),
};
let cmd = RegisterCmd::Edit(SignedRegisterEdit { op, auth });
self.ops.push_front(cmd);
Ok(())
}
/// Sync this Register with the replicas on the network.
pub async fn sync(&mut self) -> Result<()> {
debug!("Syncing Register at {}, {}!", self.name(), self.tag(),);
// FIXME: handle the scenario where the Register doesn't exist on the network yet
let remote_replica = Self::get_register(&self.client, *self.name(), self.tag()).await?;
self.register.merge(remote_replica);
self.push().await
}
/// Push all operations made locally to the replicas of this Register on the network.
pub async fn push(&mut self) -> Result<()> {
let ops_len = self.ops.len();
if ops_len > 0 {
let name = *self.name();
let tag = self.tag();
debug!("Pushing {ops_len} cached Register cmds at {name}, {tag}!",);
// TODO: send them all concurrently
while let Some(cmd) = self.ops.pop_back() {
let result = match cmd {
RegisterCmd::Create { .. } => self.publish_register_create(cmd.clone()).await,
RegisterCmd::Edit { .. } => self.publish_register_edit(cmd.clone()).await,
};
if let Err(err) = result {
warn!("Did not push Register cmd on all nodes in the close group!: {err}");
// We keep the cmd for next sync to retry
self.ops.push_back(cmd);
return Err(err);
}
}
debug!("Successfully pushed {ops_len} Register cmds at {name}, {tag}!",);
}
Ok(())
}
// ********* Private helpers *********
// Create a new RegisterOffline instance with the given name and tag.
fn new(client: Client, name: XorName, tag: u64) -> Result<Self> {
let public_key = client.signer_pk();
let owner = User::Key(public_key);
let policy = Policy {
owner,
permissions: [(User::Anyone, Permissions::new(true))]
.into_iter()
.collect(),
};
let op = CreateRegister {
name,
tag,
policy: policy.clone(),
};
let auth = DataAuthority {
public_key,
signature: client.sign(&serialize(&op)?),
};
let create_cmd = RegisterCmd::Create(SignedRegisterCreate { op, auth });
let register = RegisterReplica::new(owner, name, tag, policy);
let reg = Self {
client,
register,
ops: LinkedList::from([create_cmd]),
};
Ok(reg)
}
// Publish a `Register` creation command on the network.
async fn publish_register_create(&self, cmd: RegisterCmd) -> Result<()> {
debug!("Publishing Register create cmd: {:?}", cmd.dst());
let request = Request::Cmd(Cmd::Register(cmd));
let responses = self.client.send_to_closest(request).await?;
let all_ok = responses
.iter()
.all(|resp| matches!(resp, Ok(Response::Cmd(CmdResponse::CreateRegister(Ok(()))))));
if all_ok {
return Ok(());
}
// If not all were Ok, we will return the first error sent to us.
for resp in responses.iter().flatten() {
if let Response::Cmd(CmdResponse::CreateRegister(result)) = resp {
result.clone()?;
};
}
// If there were no success or fail to the expected query,
// we check if there were any send errors.
for resp in responses {
let _ = resp?;
}
// If there were no register errors, then we had unexpected responses.
Err(Error::Protocol(ProtocolError::UnexpectedResponses))
}
// Publish a `Register` edit command in the network.
async fn publish_register_edit(&self, cmd: RegisterCmd) -> Result<()> {
debug!("Publishing Register edit cmd: {:?}", cmd.dst());
let request = Request::Cmd(Cmd::Register(cmd));
let responses = self.client.send_to_closest(request).await?;
let all_ok = responses
.iter()
.all(|resp| matches!(resp, Ok(Response::Cmd(CmdResponse::EditRegister(Ok(()))))));
if all_ok {
return Ok(());
}
// If not all were Ok, we will return the first error sent to us.
for resp in responses.iter().flatten() {
if let Response::Cmd(CmdResponse::EditRegister(result)) = resp {
result.clone()?;
};
}
// If there were no success or fail to the expected query,
// we check if there were any send errors.
for resp in responses {
let _ = resp?;
}
// If there were no register errors, then we had unexpected responses.
Err(Error::Protocol(ProtocolError::UnexpectedResponses))
}
// Retrieve a `Register` from the closest peers.
async fn get_register(client: &Client, name: XorName, tag: u64) -> Result<RegisterReplica> {
let address = RegisterAddress { name, tag };
debug!("Retrieving Register from: {address:?}");
let request = Request::Query(Query::Register(RegisterQuery::Get(address)));
let responses = client.send_to_closest(request).await?;
// We will return the first register we get.
for resp in responses.iter().flatten() {
if let Response::Query(QueryResponse::GetRegister(Ok(register))) = resp {
return Ok(register.clone());
};
}
// If no register was gotten, we will return the first error sent to us.
for resp in responses.iter().flatten() {
if let Response::Query(QueryResponse::GetChunk(result)) = resp {
let _ = result.clone()?;
};
}
// If there were no success or fail to the expected query,
// we check if there were any send errors.
for resp in responses {
let _ = resp?;
}
// If there was none of the above, then we had unexpected responses.
Err(Error::Protocol(ProtocolError::UnexpectedResponses))
}
}
| {
self.register.tag()
} | identifier_body |
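Taken together, the API in the row above supports a create-locally-then-sync workflow. A hedged sketch, assuming the caller already has a `Client` plus a name and tag; `offline_then_sync` is an illustrative name, not part of the crate:

async fn offline_then_sync(client: Client, name: XorName, tag: u64) -> Result<()> {
    let mut reg = RegisterOffline::create(client, name, tag)?;
    reg.write(b"first entry")?; // cached locally, nothing sent yet
    reg.write(b"second entry")?; // still local
    reg.sync().await?; // merge remote state, then push the cached cmds
    Ok(())
}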
offline_replica.rs | // Copyright 2023 MaidSafe.net limited.
//
// This SAFE Network Software is licensed to you under The General Public License (GPL), version 3.
// Unless required by applicable law or agreed to in writing, the SAFE Network Software distributed
// under the GPL Licence is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. Please review the Licences for the specific language governing
// permissions and limitations relating to use of the SAFE Network Software.
use super::{
super::error::{Error, Result},
Client, Register,
};
use crate::protocol::{
address::RegisterAddress,
authority::DataAuthority,
error::Error as ProtocolError,
messages::{
Cmd, CmdResponse, CreateRegister, EditRegister, Query, QueryResponse, RegisterCmd,
RegisterQuery, Request, Response, SignedRegisterCreate, SignedRegisterEdit,
},
register::{Action, Entry, EntryHash, Permissions, Policy, Register as RegisterReplica, User},
};
use bincode::serialize;
use std::collections::{BTreeSet, LinkedList};
use xor_name::XorName;
/// Ops made to an offline Register instance are applied locally only,
/// and accumulated until the user explicitly calls `sync`. The user can
/// switch back to syncing with the network on every op by invoking the `online` API.
pub struct RegisterOffline {
client: Client,
register: RegisterReplica,
ops: LinkedList<RegisterCmd>, // Cached operations.
}
impl RegisterOffline {
/// Create a new Register offline.
pub fn create(client: Client, name: XorName, tag: u64) -> Result<Self> {
Self::new(client, name, tag)
}
/// Retrieve a Register from the network to work on it offline.
pub(super) async fn retrieve(client: Client, name: XorName, tag: u64) -> Result<Self> {
let register = Self::get_register(&client, name, tag).await?;
Ok(Self {
client,
register,
ops: LinkedList::new(),
})
}
/// Instantiate a RegisterOffline from a given Register instance.
pub(super) fn from(replica: Register) -> Self {
Self {
client: replica.offline_reg.client,
register: replica.offline_reg.register,
ops: LinkedList::new(),
}
}
/// Switch to 'online' mode, where each op made locally is immediately pushed to the network.
pub async fn online(mut self) -> Result<Register> {
self.push().await?;
Ok(Register { offline_reg: self })
}
/// Return the Policy of the Register.
pub fn policy(&self) -> &Policy {
self.register.policy()
}
/// Return the XorName of the Register.
pub fn name(&self) -> &XorName {
self.register.name()
}
/// Return the tag value of the Register.
pub fn tag(&self) -> u64 {
self.register.tag()
}
/// Return the number of items held in the register
pub fn size(&self) -> u64 {
self.register.size()
}
/// Return a value corresponding to the provided 'hash', if present.
pub fn get(&self, hash: EntryHash) -> Result<&Entry> {
let entry = self.register.get(hash)?;
Ok(entry)
}
/// Read the last entry, or entries when there are branches, if the register is not empty.
pub fn read(&self) -> BTreeSet<(EntryHash, Entry)> {
self.register.read()
}
/// Write a new value onto the Register atop latest value.
/// It returns an error if it finds branches in the content/entries; if it is | }
self.write_atop(entry, children.into_iter().map(|(hash, _)| hash).collect())
}
/// Write a new value onto the Register atop latest value.
/// If there are branches of content/entries, it automatically merges them
/// all leaving the new value as a single latest value of the Register.
/// Note you can use `write` API instead if you need to handle
/// content/entries branches in a diffeerent way.
pub fn write_merging_branches(&mut self, entry: &[u8]) -> Result<()> {
let children: BTreeSet<EntryHash> = self
.register
.read()
.into_iter()
.map(|(hash, _)| hash)
.collect();
self.write_atop(entry, children)
}
/// Write a new value onto the Register atop the set of branches/entries
/// referenced by the provided list of their corresponding entry hash.
/// Note you can use the `write_merging_branches` API instead if you
/// want to write atop all existing branches/entries.
pub fn write_atop(&mut self, entry: &[u8], children: BTreeSet<EntryHash>) -> Result<()> {
// we need to check permissions first
let public_key = self.client.signer_pk();
self.register
.check_permissions(Action::Write, Some(User::Key(public_key)))?;
let (_hash, edit) = self.register.write(entry.into(), children)?;
let op = EditRegister {
address: *self.register.address(),
edit,
};
let auth = DataAuthority {
public_key,
signature: self.client.sign(&serialize(&op)?),
};
let cmd = RegisterCmd::Edit(SignedRegisterEdit { op, auth });
self.ops.push_front(cmd);
Ok(())
}
/// Sync this Register with the replicas on the network.
pub async fn sync(&mut self) -> Result<()> {
debug!("Syncing Register at {}, {}!", self.name(), self.tag(),);
// FIXME: handle the scenario where the Register doesn't exist on the network yet
let remote_replica = Self::get_register(&self.client, *self.name(), self.tag()).await?;
self.register.merge(remote_replica);
self.push().await
}
/// Push all operations made locally to the replicas of this Register on the network.
pub async fn push(&mut self) -> Result<()> {
let ops_len = self.ops.len();
if ops_len > 0 {
let name = *self.name();
let tag = self.tag();
debug!("Pushing {ops_len} cached Register cmds at {name}, {tag}!",);
// TODO: send them all concurrently
while let Some(cmd) = self.ops.pop_back() {
let result = match cmd {
RegisterCmd::Create { .. } => self.publish_register_create(cmd.clone()).await,
RegisterCmd::Edit { .. } => self.publish_register_edit(cmd.clone()).await,
};
if let Err(err) = result {
warn!("Did not push Register cmd on all nodes in the close group!: {err}");
// We keep the cmd for next sync to retry
self.ops.push_back(cmd);
return Err(err);
}
}
debug!("Successfully pushed {ops_len} Register cmds at {name}, {tag}!",);
}
Ok(())
}
// ********* Private helpers *********
// Create a new RegisterOffline instance with the given name and tag.
fn new(client: Client, name: XorName, tag: u64) -> Result<Self> {
let public_key = client.signer_pk();
let owner = User::Key(public_key);
let policy = Policy {
owner,
permissions: [(User::Anyone, Permissions::new(true))]
.into_iter()
.collect(),
};
let op = CreateRegister {
name,
tag,
policy: policy.clone(),
};
let auth = DataAuthority {
public_key,
signature: client.sign(&serialize(&op)?),
};
let create_cmd = RegisterCmd::Create(SignedRegisterCreate { op, auth });
let register = RegisterReplica::new(owner, name, tag, policy);
let reg = Self {
client,
register,
ops: LinkedList::from([create_cmd]),
};
Ok(reg)
}
// Publish a `Register` creation command on the network.
async fn publish_register_create(&self, cmd: RegisterCmd) -> Result<()> {
debug!("Publishing Register create cmd: {:?}", cmd.dst());
let request = Request::Cmd(Cmd::Register(cmd));
let responses = self.client.send_to_closest(request).await?;
let all_ok = responses
.iter()
.all(|resp| matches!(resp, Ok(Response::Cmd(CmdResponse::CreateRegister(Ok(()))))));
if all_ok {
return Ok(());
}
// If not all were Ok, we will return the first error sent to us.
for resp in responses.iter().flatten() {
if let Response::Cmd(CmdResponse::CreateRegister(result)) = resp {
result.clone()?;
};
}
// If there were no success or fail to the expected query,
// we check if there were any send errors.
for resp in responses {
let _ = resp?;
}
// If there were no register errors, then we had unexpected responses.
Err(Error::Protocol(ProtocolError::UnexpectedResponses))
}
// Publish a `Register` edit command in the network.
async fn publish_register_edit(&self, cmd: RegisterCmd) -> Result<()> {
debug!("Publishing Register edit cmd: {:?}", cmd.dst());
let request = Request::Cmd(Cmd::Register(cmd));
let responses = self.client.send_to_closest(request).await?;
let all_ok = responses
.iter()
.all(|resp| matches!(resp, Ok(Response::Cmd(CmdResponse::EditRegister(Ok(()))))));
if all_ok {
return Ok(());
}
// If not all were Ok, we will return the first error sent to us.
for resp in responses.iter().flatten() {
if let Response::Cmd(CmdResponse::EditRegister(result)) = resp {
result.clone()?;
};
}
// If there were no success or fail to the expected query,
// we check if there were any send errors.
for resp in responses {
let _ = resp?;
}
// If there were no register errors, then we had unexpected responses.
Err(Error::Protocol(ProtocolError::UnexpectedResponses))
}
// Retrieve a `Register` from the closest peers.
async fn get_register(client: &Client, name: XorName, tag: u64) -> Result<RegisterReplica> {
let address = RegisterAddress { name, tag };
debug!("Retrieving Register from: {address:?}");
let request = Request::Query(Query::Register(RegisterQuery::Get(address)));
let responses = client.send_to_closest(request).await?;
// We will return the first register we get.
for resp in responses.iter().flatten() {
if let Response::Query(QueryResponse::GetRegister(Ok(register))) = resp {
return Ok(register.clone());
};
}
// If no register was gotten, we will return the first error sent to us.
for resp in responses.iter().flatten() {
if let Response::Query(QueryResponse::GetChunk(result)) = resp {
let _ = result.clone()?;
};
}
// If there were no success or fail to the expected query,
// we check if there were any send errors.
for resp in responses {
let _ = resp?;
}
// If there was none of the above, then we had unexpected responses.
Err(Error::Protocol(ProtocolError::UnexpectedResponses))
}
} | /// required to merge/resolve the branches, invoke the `write_merging_branches` API.
pub fn write(&mut self, entry: &[u8]) -> Result<()> {
let children = self.register.read();
if children.len() > 1 {
return Err(Error::ContentBranchDetected(children)); | random_line_split |
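The masked lines in this row are the branch check inside `write`. One way a caller might handle the resulting error, falling back to `write_merging_branches`; the helper name here is an assumption, not part of the original code:

fn write_or_merge(reg: &mut RegisterOffline, entry: &[u8]) -> Result<()> {
    match reg.write(entry) {
        // Concurrent branches exist: merge them all under the new entry.
        Err(Error::ContentBranchDetected(_)) => reg.write_merging_branches(entry),
        other => other,
    }
}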
main.rs | use anyhow::{anyhow, Context};
use atty::Stream;
use rhc::args::Args;
use rhc::config::Config;
use rhc::environment::Environment;
use rhc::files::{get_all_toml_files, load_file};
use rhc::http;
use rhc::interactive;
use rhc::interactive::SelectedValues;
use rhc::keyvalue::KeyValue;
use rhc::request_definition::RequestDefinition;
use rhc::templating;
use serde_json::{to_string_pretty, Value};
use spinners::{Spinner, Spinners};
use std::borrow::Cow;
use std::env;
use std::fs::OpenOptions;
use std::io::{Stdout, Write};
use std::path::Path;
use std::path::PathBuf;
use structopt::StructOpt;
use syntect::easy::HighlightLines;
use syntect::highlighting::{Style, Theme, ThemeSet};
use syntect::parsing::SyntaxSet;
use syntect::util::{as_24_bit_terminal_escaped, LinesWithEndings};
use syntect::LoadingError;
use termion::input::{Keys, TermRead};
use termion::raw::{IntoRawMode, RawTerminal};
use termion::screen::AlternateScreen;
use termion::AsyncReader;
use tui::backend::TermionBackend;
use tui::Terminal;
// use simplelog::{CombinedLogger, WriteLogger, LevelFilter, Config as LogConfig};
// use std::fs::File;
use std::io::{stdout, BufWriter};
fn main() {
if let Err(e) = run() {
// If an error was raised during an interactive mode call while the alternate screen is in
// use, we have to flush stdout here or the user will not see the error message.
std::io::stdout().flush().unwrap();
// Seems like this initial newline is necessary or the error will be printed with an offset
eprintln!("\nError: {:#}", e);
std::process::exit(1);
}
}
type OurTerminal = Terminal<TermionBackend<AlternateScreen<RawTerminal<Stdout>>>>;
/// Set up/create the terminal for use in interactive mode.
fn get_terminal() -> anyhow::Result<OurTerminal> {
let stdout = std::io::stdout().into_raw_mode()?;
let stdout = AlternateScreen::from(stdout);
let backend = TermionBackend::new(stdout);
let t = Terminal::new(backend)?;
Ok(t)
}
fn | () -> anyhow::Result<()> {
// CombinedLogger::init(
// vec![
// WriteLogger::new(LevelFilter::Debug, LogConfig::default(), File::create("rhc.log").unwrap()),
// ]
// ).unwrap();
let args: Args = Args::from_args();
let output_file = args
.output_file
.map(|path_buf| {
OpenOptions::new()
.create_new(true)
.write(true)
.open(path_buf.as_path())
})
.transpose()?;
let writer: Box<dyn std::io::Write> = match &output_file {
Some(f) => Box::new(f),
None => Box::new(stdout()),
};
let mut writer = BufWriter::new(writer);
// If the user specifies a config location, make sure there's actually a file there
args.config.as_ref().map_or(Ok(()), |c| {
if c.is_file() {
Ok(())
} else {
Err(anyhow!("No config file found at `{}`", c.to_string_lossy()))
}
})?;
// Load the config file using this priority:
// 1. The file specified with the --config arg, if present
// 2. $XDG_CONFIG_HOME/rhc/config.toml, if XDG_CONFIG_HOME is defined
// 3. ~/.config/rhc/config.toml, if present
// If none of the above exist, use the default Config.
let raw_config_location: PathBuf = args.config.unwrap_or_else(|| {
match env::var_os("XDG_CONFIG_HOME") {
Some(xdg_config_home) => PathBuf::from(xdg_config_home),
None => PathBuf::from("~/.config"),
}
.join("rhc")
.join("config.toml")
});
let raw_config_location = raw_config_location.to_string_lossy();
let config_location: Cow<str> = shellexpand::tilde(raw_config_location.as_ref());
let config_path = Path::new(config_location.as_ref());
if args.verbose {
writeln!(
stdout(),
"Looking for config file at {}",
config_path.display()
)?;
}
let config = {
if config_path.is_file() {
Config::new(config_path).context(format!(
"Could not load config file at {}",
config_path.to_string_lossy()
))?
} else {
writeln!(
stdout(),
"No config file found at {}, falling back to default config",
config_path.display()
)?;
Config::default()
}
};
let is_tty = atty::is(Stream::Stdout);
// These two are necessary for use in interactive mode; but conversely, when not at an
// interactive shell, trying to create this `Terminal` will cause an error. So they start as
// None, and will be created on-demand if necessary (no request definition file provided, or
// unbound variables exist).
let mut keys: Option<Keys<AsyncReader>> = None;
let mut terminal: Option<OurTerminal> = None;
// If the user specified a request definition file, just use that; otherwise, enter interactive
// mode to allow them to choose a request definition. In either case, we need to keep track of
// the file names for the request definition that's either provided or selected, as well as the
// environment being used (if any), as these are required for the prompt_for_variables
// function.
let result: anyhow::Result<Option<SelectedValues>> = {
match &args.file {
Some(path) => {
let def: RequestDefinition =
load_file(&path, RequestDefinition::new, "request definition")?;
let env_path: Option<PathBuf> = args.environment;
let env: Option<Environment> = env_path
.as_deref()
.map(|path| load_file(&path, Environment::new, "environment"))
.transpose()?;
Ok(Some(SelectedValues { def, env }))
}
None => {
if is_tty {
// If we have to enter interactive mode, check if there is at least one request
// definition file available. If not, there's nothing that can be done, so
// print a warning and exit.
if get_all_toml_files(&config.request_definition_directory).is_empty() {
Err(anyhow!("No TOML files found under {}. Running rhc in interactive mode requres at least one request definition file.", &config.request_definition_directory))
} else {
// `terminal` and `keys` must be None at this point, so just create them
terminal = Some(get_terminal()?);
keys = Some(termion::async_stdin().keys());
let interactive_result = interactive::interactive_mode(
&config,
args.environment.as_deref(),
&mut keys.as_mut().unwrap(),
&mut terminal.as_mut().unwrap(),
)?;
Ok(interactive_result)
}
} else {
Err(anyhow!("Running in interactive mode requires a TTY"))
}
}
}
};
let result = result?;
// `interactive_mode` will return None if they Ctrl-C out without selecting anything.
// if let Some((mut request_definition, mut vars)) = result {
if let Some(SelectedValues { mut def, env }) = result {
// Split up the variables and environment name immediately to avoid difficulties with borrowing
// `env` later on
let (mut vars, env_name): (Vec<KeyValue>, String) =
env.map_or((vec![], "<none>".to_string()), |e| (e.variables, e.name));
vars.sort();
if let Some(bindings) = args.binding {
for binding in bindings {
match vars.binary_search_by(|item| item.name.cmp(&binding.name)) {
Ok(index) => {
// If variable is already present, overwrite it with the one passed on the
// command line (these have the highest priority)
vars.remove(index);
vars.insert(index, binding);
}
Err(index) => vars.insert(index, binding),
};
}
}
// Substitute the variables that we have at this point into all the places of the
// RequestDefinitions that they can be used (URL, headers, body, query string)
templating::substitute_all(&mut def, &vars);
// If any unbound variables remain, prompt the user to enter them interactively
let unbound_variables = templating::list_unbound_variables(&def);
let additional_vars: anyhow::Result<Option<Vec<KeyValue>>> = {
if !unbound_variables.is_empty() {
if is_tty {
// `terminal` and `keys` could have been initialized above, so only initialize them
// here if necessary.
if keys.is_none() {
terminal = Some(get_terminal()?);
keys = Some(termion::async_stdin().keys());
}
interactive::prompt_for_variables(
&config,
unbound_variables,
&env_name,
&mut keys.as_mut().unwrap(),
&mut terminal.as_mut().unwrap(),
)
} else {
Err(anyhow!("Running in interactive mode requires a TTY"))
}
} else {
Ok(Some(vec![]))
}
};
// Switch back to the original screen
drop(terminal);
// Flush stdout so the interactive terminal screen is cleared immediately
std::io::stdout().flush().ok();
let additional_vars = additional_vars?;
// `prompt_for_variables` returning None means the user aborted with Ctrl-C and we
// should not send the request
if let Some(additional_vars) = additional_vars {
// Do the final substitution with the user-provided variables
templating::substitute_all(&mut def, &additional_vars);
let mut sp: Option<Spinner> = None;
if is_tty {
sp = Some(Spinner::new(Spinners::Dots, "Sending request...".into()));
}
let res = http::send_request(def, &config).context("Failed sending request")?;
if let Some(s) = sp {
s.stop();
writeln!(writer, "\n")?;
}
let headers = res.headers();
if !(&args.only_body) {
writeln!(writer, "{}\n", res.status())?;
for (name, value) in headers {
let value = value.to_str()?;
writeln!(writer, "{}: {}", name.as_str(), value)?;
}
writeln!(writer)?;
}
let is_json = headers
.get("content-type")
.map(|h| {
let value = h.to_str().unwrap_or("");
value.contains("application/json")
|| value.contains("text/json")
|| value.contains("application/javascript")
})
.unwrap_or(false);
if is_json && is_tty && output_file.is_none() {
// If the content-type header on the response suggests that the response is JSON,
// try to parse it as a generic Value, then pretty-print it with highlighting via
// syntect. If the parsing fails, give up on the pretty-printing and just print the
// raw text response (still with JSON highlighting, if possible)
let body: Value = res.json()?;
let body = to_string_pretty(&body).unwrap_or_else(|_| body.to_string());
let ps = SyntaxSet::load_defaults_newlines();
let syntax = ps.find_syntax_by_extension("json").unwrap();
let ts = ThemeSet::load_defaults();
// If the user has specified no theme in their config file, fall back to a default
// included in syntect. If they specify a name of a default syntect theme, use
// that. Otherwise, treat their provided value as a file path and try to load a
// theme.
let theme: Result<Cow<Theme>, LoadingError> = match config.theme.as_ref() {
None => Ok(Cow::Borrowed(&ts.themes["base16-eighties.dark"])),
Some(theme_file) => ts
.themes
.get(theme_file)
.map(|t| Ok(Cow::Borrowed(t)))
.unwrap_or_else(|| {
let expanded: Cow<str> = shellexpand::tilde(theme_file);
let path: &Path = Path::new(expanded.as_ref());
ThemeSet::get_theme(path).map(Cow::Owned)
}),
};
match theme {
Ok(theme) => {
let mut h = HighlightLines::new(syntax, theme.as_ref());
for line in LinesWithEndings::from(&body) {
let ranges: Vec<(Style, &str)> = h.highlight(line, &ps);
let escaped = as_24_bit_terminal_escaped(&ranges[..], false);
write!(writer, "{}", escaped)?;
}
writeln!(writer)?;
}
Err(e) => {
eprintln!(
"Error: Could not load theme at {}: {}, continuing with no theme",
&config.theme.unwrap(),
e
);
writeln!(writer, "{}", body)?;
}
}
} else {
let body = res.text()?;
writeln!(writer, "{}", body)?;
}
}
}
Ok(())
}
| run | identifier_name |
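A sketch of the insert-or-replace pattern that `run` (the masked identifier above) applies to command-line bindings: the vector stays sorted by name, and a later binding overwrites an earlier one. The standalone helper is an assumption; `KeyValue` is the type from the surrounding code:

fn apply_binding(vars: &mut Vec<KeyValue>, binding: KeyValue) {
    match vars.binary_search_by(|item| item.name.cmp(&binding.name)) {
        // Same name already present: the command-line binding takes priority.
        Ok(index) => vars[index] = binding,
        Err(index) => vars.insert(index, binding),
    }
}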
main.rs | use anyhow::{anyhow, Context};
use atty::Stream;
use rhc::args::Args;
use rhc::config::Config;
use rhc::environment::Environment;
use rhc::files::{get_all_toml_files, load_file};
use rhc::http;
use rhc::interactive;
use rhc::interactive::SelectedValues;
use rhc::keyvalue::KeyValue;
use rhc::request_definition::RequestDefinition;
use rhc::templating;
use serde_json::{to_string_pretty, Value};
use spinners::{Spinner, Spinners};
use std::borrow::Cow;
use std::env;
use std::fs::OpenOptions;
use std::io::{Stdout, Write};
use std::path::Path;
use std::path::PathBuf;
use structopt::StructOpt;
use syntect::easy::HighlightLines;
use syntect::highlighting::{Style, Theme, ThemeSet};
use syntect::parsing::SyntaxSet;
use syntect::util::{as_24_bit_terminal_escaped, LinesWithEndings};
use syntect::LoadingError;
use termion::input::{Keys, TermRead};
use termion::raw::{IntoRawMode, RawTerminal};
use termion::screen::AlternateScreen;
use termion::AsyncReader;
use tui::backend::TermionBackend;
use tui::Terminal;
// use simplelog::{CombinedLogger, WriteLogger, LevelFilter, Config as LogConfig};
// use std::fs::File;
use std::io::{stdout, BufWriter};
fn main() {
if let Err(e) = run() {
// If an error was raised during an interactive mode call while the alternate screen is in
// use, we have to flush stdout here or the user will not see the error message.
std::io::stdout().flush().unwrap();
// Seems like this initial newline is necessary or the error will be printed with an offset
eprintln!("\nError: {:#}", e);
std::process::exit(1);
}
}
type OurTerminal = Terminal<TermionBackend<AlternateScreen<RawTerminal<Stdout>>>>;
/// Set up/create the terminal for use in interactive mode.
fn get_terminal() -> anyhow::Result<OurTerminal> {
let stdout = std::io::stdout().into_raw_mode()?;
let stdout = AlternateScreen::from(stdout);
let backend = TermionBackend::new(stdout);
let t = Terminal::new(backend)?;
Ok(t)
}
fn run() -> anyhow::Result<()> {
// CombinedLogger::init(
// vec![
// WriteLogger::new(LevelFilter::Debug, LogConfig::default(), File::create("rhc.log").unwrap()),
// ]
// ).unwrap();
let args: Args = Args::from_args();
let output_file = args
.output_file
.map(|path_buf| {
OpenOptions::new()
.create_new(true)
.write(true)
.open(path_buf.as_path())
})
.transpose()?;
let writer: Box<dyn std::io::Write> = match &output_file {
Some(f) => Box::new(f),
None => Box::new(stdout()),
};
let mut writer = BufWriter::new(writer);
// If the user specifies a config location, make sure there's actually a file there
args.config.as_ref().map_or(Ok(()), |c| {
if c.is_file() {
Ok(())
} else {
Err(anyhow!("No config file found at `{}`", c.to_string_lossy()))
}
})?;
// Load the config file using this priority:
// 1. The file specified with the --config arg, if present
// 2. $XDG_CONFIG_HOME/rhc/config.toml, if XDG_CONFIG_HOME is defined
// 3. ~/.config/rhc/config.toml, if present
// If none of the above exist, use the default Config.
let raw_config_location: PathBuf = args.config.unwrap_or_else(|| {
match env::var_os("XDG_CONFIG_HOME") {
Some(xdg_config_home) => PathBuf::from(xdg_config_home),
None => PathBuf::from("~/.config"),
}
.join("rhc")
.join("config.toml")
});
let raw_config_location = raw_config_location.to_string_lossy();
let config_location: Cow<str> = shellexpand::tilde(raw_config_location.as_ref());
let config_path = Path::new(config_location.as_ref());
if args.verbose {
writeln!(
stdout(),
"Looking for config file at {}",
config_path.display()
)?;
}
let config = {
if config_path.is_file() {
Config::new(config_path).context(format!(
"Could not load config file at {}",
config_path.to_string_lossy()
))?
} else {
writeln!(
stdout(),
"No config file found at {}, falling back to default config",
config_path.display()
)?;
Config::default()
}
};
let is_tty = atty::is(Stream::Stdout);
// These two are necessary for use in interactive mode; but conversely, when not at an
// interactive shell, trying to create this `Terminal` will cause an error. So they start as
// None, and will be created on-demand if necessary (no request definition file provided, or
// unbound variables exist).
let mut keys: Option<Keys<AsyncReader>> = None;
let mut terminal: Option<OurTerminal> = None;
// If the user specified a request definition file, just use that; otherwise, enter interactive
// mode to allow them to choose a request definition. In either case, we need to keep track of | match &args.file {
Some(path) => {
let def: RequestDefinition =
load_file(&path, RequestDefinition::new, "request definition")?;
let env_path: Option<PathBuf> = args.environment;
let env: Option<Environment> = env_path
.as_deref()
.map(|path| load_file(&path, Environment::new, "environment"))
.transpose()?;
Ok(Some(SelectedValues { def, env }))
}
None => {
if is_tty {
// If we have to enter interactive mode, check if there is at least one request
// definition file available. If not, there's nothing that can be done, so
// print a warning and exit.
if get_all_toml_files(&config.request_definition_directory).is_empty() {
Err(anyhow!("No TOML files found under {}. Running rhc in interactive mode requres at least one request definition file.", &config.request_definition_directory))
} else {
// `terminal` and `keys` must be None at this point, so just create them
terminal = Some(get_terminal()?);
keys = Some(termion::async_stdin().keys());
let interactive_result = interactive::interactive_mode(
&config,
args.environment.as_deref(),
&mut keys.as_mut().unwrap(),
&mut terminal.as_mut().unwrap(),
)?;
Ok(interactive_result)
}
} else {
Err(anyhow!("Running in interactive mode requires a TTY"))
}
}
}
};
let result = result?;
// `interactive_mode` will return None if they Ctrl-C out without selecting anything.
// if let Some((mut request_definition, mut vars)) = result {
if let Some(SelectedValues { mut def, env }) = result {
// Split up the variables and environment name immediately to avoid difficulties with borrowing
// `env` later on
let (mut vars, env_name): (Vec<KeyValue>, String) =
env.map_or((vec![], "<none>".to_string()), |e| (e.variables, e.name));
vars.sort();
if let Some(bindings) = args.binding {
for binding in bindings {
match vars.binary_search_by(|item| item.name.cmp(&binding.name)) {
Ok(index) => {
// If variable is already present, overwrite it with the one passed on the
// command line (these have the highest priority)
vars.remove(index);
vars.insert(index, binding);
}
Err(index) => vars.insert(index, binding),
};
}
}
// Substitute the variables that we have at this point into all the places of the
// RequestDefinitions that they can be used (URL, headers, body, query string)
templating::substitute_all(&mut def, &vars);
// If any unbound variables remain, prompt the user to enter them interactively
let unbound_variables = templating::list_unbound_variables(&def);
let additional_vars: anyhow::Result<Option<Vec<KeyValue>>> = {
if !unbound_variables.is_empty() {
if is_tty {
// `terminal` and `keys` could have been initialized above, so only initialize them
// here if necessary.
if keys.is_none() {
terminal = Some(get_terminal()?);
keys = Some(termion::async_stdin().keys());
}
interactive::prompt_for_variables(
&config,
unbound_variables,
&env_name,
&mut keys.as_mut().unwrap(),
&mut terminal.as_mut().unwrap(),
)
} else {
Err(anyhow!("Running in interactive mode requires a TTY"))
}
} else {
Ok(Some(vec![]))
}
};
// Switch back to the original screen
drop(terminal);
// Flush stdout so the interactive terminal screen is cleared immediately
std::io::stdout().flush().ok();
let additional_vars = additional_vars?;
// `prompt_for_variables` returning None means the user aborted with Ctrl-C and we
// should not send the request
if let Some(additional_vars) = additional_vars {
// Do the final substitution with the user-provided variables
templating::substitute_all(&mut def, &additional_vars);
let mut sp: Option<Spinner> = None;
if is_tty {
sp = Some(Spinner::new(Spinners::Dots, "Sending request...".into()));
}
let res = http::send_request(def, &config).context("Failed sending request")?;
if let Some(s) = sp {
s.stop();
writeln!(writer, "\n")?;
}
let headers = res.headers();
if !(&args.only_body) {
writeln!(writer, "{}\n", res.status())?;
for (name, value) in headers {
let value = value.to_str()?;
writeln!(writer, "{}: {}", name.as_str(), value)?;
}
writeln!(writer)?;
}
let is_json = headers
.get("content-type")
.map(|h| {
let value = h.to_str().unwrap_or("");
value.contains("application/json")
|| value.contains("text/json")
|| value.contains("application/javascript")
})
.unwrap_or(false);
if is_json && is_tty && output_file.is_none() {
// If the content-type header on the response suggests that the response is JSON,
// try to parse it as a generic Value, then pretty-print it with highlighting via
// syntect. If the parsing fails, give up on the pretty-printing and just print the
// raw text response (still with JSON highlighting, if possible)
let body: Value = res.json()?;
let body = to_string_pretty(&body).unwrap_or_else(|_| body.to_string());
let ps = SyntaxSet::load_defaults_newlines();
let syntax = ps.find_syntax_by_extension("json").unwrap();
let ts = ThemeSet::load_defaults();
// If the user has specified no theme in their config file, fall back to a default
// included in syntect. If they specify a name of a default syntect theme, use
// that. Otherwise, treat their provided value as a file path and try to load a
// theme.
let theme: Result<Cow<Theme>, LoadingError> = match config.theme.as_ref() {
None => Ok(Cow::Borrowed(&ts.themes["base16-eighties.dark"])),
Some(theme_file) => ts
.themes
.get(theme_file)
.map(|t| Ok(Cow::Borrowed(t)))
.unwrap_or_else(|| {
let expanded: Cow<str> = shellexpand::tilde(theme_file);
let path: &Path = Path::new(expanded.as_ref());
ThemeSet::get_theme(path).map(Cow::Owned)
}),
};
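// Resolution examples (assumed config values, for illustration):
//   theme == None                     -> bundled "base16-eighties.dark"
//   theme == Some("Solarized (dark)") -> a default syntect theme, by name
//   theme == Some("~/t/my.tmTheme")   -> loaded from the expanded file path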
match theme {
Ok(theme) => {
let mut h = HighlightLines::new(syntax, theme.as_ref());
for line in LinesWithEndings::from(&body) {
let ranges: Vec<(Style, &str)> = h.highlight(line, &ps);
let escaped = as_24_bit_terminal_escaped(&ranges[..], false);
write!(writer, "{}", escaped)?;
}
writeln!(writer)?;
}
Err(e) => {
eprintln!(
"Error: Could not load theme at {}: {}, continuing with no theme",
&config.theme.unwrap(),
e
);
writeln!(writer, "{}", body)?;
}
}
} else {
let body = res.text()?;
writeln!(writer, "{}", body)?;
}
}
}
Ok(())
} | // the file names for the request definition that's either provided or selected, as well as the
// environment being used (if any), as these are required for the prompt_for_variables
// function.
let result: anyhow::Result<Option<SelectedValues>> = { | random_line_split |
main.rs | use anyhow::{anyhow, Context};
use atty::Stream;
use rhc::args::Args;
use rhc::config::Config;
use rhc::environment::Environment;
use rhc::files::{get_all_toml_files, load_file};
use rhc::http;
use rhc::interactive;
use rhc::interactive::SelectedValues;
use rhc::keyvalue::KeyValue;
use rhc::request_definition::RequestDefinition;
use rhc::templating;
use serde_json::{to_string_pretty, Value};
use spinners::{Spinner, Spinners};
use std::borrow::Cow;
use std::env;
use std::fs::OpenOptions;
use std::io::{Stdout, Write};
use std::path::Path;
use std::path::PathBuf;
use structopt::StructOpt;
use syntect::easy::HighlightLines;
use syntect::highlighting::{Style, Theme, ThemeSet};
use syntect::parsing::SyntaxSet;
use syntect::util::{as_24_bit_terminal_escaped, LinesWithEndings};
use syntect::LoadingError;
use termion::input::{Keys, TermRead};
use termion::raw::{IntoRawMode, RawTerminal};
use termion::screen::AlternateScreen;
use termion::AsyncReader;
use tui::backend::TermionBackend;
use tui::Terminal;
// use simplelog::{CombinedLogger, WriteLogger, LevelFilter, Config as LogConfig};
// use std::fs::File;
use std::io::{stdout, BufWriter};
fn main() {
if let Err(e) = run() {
// If an error was raised during an interactive mode call while the alternate screen is in
// use, we have to flush stdout here or the user will not see the error message.
std::io::stdout().flush().unwrap();
// Seems like this initial newline is necessary or the error will be printed with an offset
eprintln!("\nError: {:#}", e);
std::process::exit(1);
}
}
type OurTerminal = Terminal<TermionBackend<AlternateScreen<RawTerminal<Stdout>>>>;
/// Set up/create the terminal for use in interactive mode.
fn get_terminal() -> anyhow::Result<OurTerminal> {
let stdout = std::io::stdout().into_raw_mode()?;
let stdout = AlternateScreen::from(stdout);
let backend = TermionBackend::new(stdout);
let t = Terminal::new(backend)?;
Ok(t)
}
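// Illustrative sketch, not part of the original program: the on-demand
// `terminal`/`keys` initialization that `run` performs inline could be
// factored into a helper like this. It builds only on items already defined
// or imported above.
#[allow(dead_code)]
fn ensure_terminal(
    terminal: &mut Option<OurTerminal>,
    keys: &mut Option<Keys<AsyncReader>>,
) -> anyhow::Result<()> {
    // Both values are created together, so checking one of them suffices.
    if keys.is_none() {
        *terminal = Some(get_terminal()?);
        *keys = Some(termion::async_stdin().keys());
    }
    Ok(())
}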
fn run() -> anyhow::Result<()> {
// CombinedLogger::init(
// vec![
// WriteLogger::new(LevelFilter::Debug, LogConfig::default(), File::create("rhc.log").unwrap()),
// ]
// ).unwrap();
let args: Args = Args::from_args();
let output_file = args
.output_file
.map(|path_buf| {
OpenOptions::new()
.create_new(true)
.write(true)
.open(path_buf.as_path())
})
.transpose()?;
let writer: Box<dyn std::io::Write> = match &output_file {
Some(f) => Box::new(f),
None => Box::new(stdout()),
};
let mut writer = BufWriter::new(writer);
// If the user specifies a config location, make sure there's actually a file there
args.config.as_ref().map_or(Ok(()), |c| {
if c.is_file() {
Ok(())
} else |
})?;
// Load the config file using this priority:
// 1. The file specified with the --config arg, if present
// 2. $XDG_CONFIG_HOME/rhc/config.toml, if XDG_CONFIG_HOME is defined
// 3. ~/.config/rhc/config.toml, if present
// If none of the above exist, use the default Config.
let raw_config_location: PathBuf = args.config.unwrap_or_else(|| {
match env::var_os("XDG_CONFIG_HOME") {
Some(xdg_config_home) => PathBuf::from(xdg_config_home),
None => PathBuf::from("~/.config"),
}
.join("rhc")
.join("config.toml")
});
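// E.g. with XDG_CONFIG_HOME=/home/alice/.config (an assumed value), this
// resolves to /home/alice/.config/rhc/config.toml; without it, the tilde in
// ~/.config/rhc/config.toml is expanded by shellexpand just below.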
let raw_config_location = raw_config_location.to_string_lossy();
let config_location: Cow<str> = shellexpand::tilde(raw_config_location.as_ref());
let config_path = Path::new(config_location.as_ref());
if args.verbose {
writeln!(
stdout(),
"Looking for config file at {}",
config_path.display()
)?;
}
let config = {
if config_path.is_file() {
Config::new(config_path).context(format!(
"Could not load config file at {}",
config_path.to_string_lossy()
))?
} else {
writeln!(
stdout(),
"No config file found at {}, falling back to default config",
config_path.display()
)?;
Config::default()
}
};
let is_tty = atty::is(Stream::Stdout);
// These two are necessary for use in interactive mode; but conversely, when not at an
// interactive shell, trying to create this `Terminal` will cause an error. So they start as
// None, and will be created on-demand if necessary (no request definition file provided, or
// unbound variables exist).
let mut keys: Option<Keys<AsyncReader>> = None;
let mut terminal: Option<OurTerminal> = None;
// If the user specified a request definition file, just use that; otherwise, enter interactive
// mode to allow them to choose a request definition. In either case, we need to keep track of
// the file names for the request definition that's either provided or selected, as well as the
// environment being used (if any), as these are required for the prompt_for_variables
// function.
let result: anyhow::Result<Option<SelectedValues>> = {
match &args.file {
Some(path) => {
let def: RequestDefinition =
load_file(&path, RequestDefinition::new, "request definition")?;
let env_path: Option<PathBuf> = args.environment;
let env: Option<Environment> = env_path
.as_deref()
.map(|path| load_file(&path, Environment::new, "environment"))
.transpose()?;
Ok(Some(SelectedValues { def, env }))
}
None => {
if is_tty {
// If we have to enter interactive mode, check if there is at least one request
// definition file available. If not, there's nothing that can be done, so
// print a warning and exit.
if get_all_toml_files(&config.request_definition_directory).is_empty() {
Err(anyhow!("No TOML files found under {}. Running rhc in interactive mode requres at least one request definition file.", &config.request_definition_directory))
} else {
// `terminal` and `keys` must be None at this point, so just create them
terminal = Some(get_terminal()?);
keys = Some(termion::async_stdin().keys());
let interactive_result = interactive::interactive_mode(
&config,
args.environment.as_deref(),
&mut keys.as_mut().unwrap(),
&mut terminal.as_mut().unwrap(),
)?;
Ok(interactive_result)
}
} else {
Err(anyhow!("Running in interactive mode requires a TTY"))
}
}
}
};
let result = result?;
// `interactive_mode` will return None if they Ctrl-C out without selecting anything.
// if let Some((mut request_definition, mut vars)) = result {
if let Some(SelectedValues { mut def, env }) = result {
// Split up the variables and environment name immediately to avoid difficulties with borrowing
// `env` later on
let (mut vars, env_name): (Vec<KeyValue>, String) =
env.map_or((vec![], "<none>".to_string()), |e| (e.variables, e.name));
vars.sort();
if let Some(bindings) = args.binding {
for binding in bindings {
match vars.binary_search_by(|item| item.name.cmp(&binding.name)) {
Ok(index) => {
// If variable is already present, overwrite it with the one passed on the
// command line (these have the highest priority)
vars.remove(index);
vars.insert(index, binding);
}
Err(index) => vars.insert(index, binding),
};
}
}
// Substitute the variables that we have at this point into all the places in the
// RequestDefinition where they can be used (URL, headers, body, query string)
templating::substitute_all(&mut def, &vars);
// If any unbound variables remain, prompt the user to enter them interactively
let unbound_variables = templating::list_unbound_variables(&def);
let additional_vars: anyhow::Result<Option<Vec<KeyValue>>> = {
if !unbound_variables.is_empty() {
if is_tty {
// `terminal` and `keys` could have been initialized above, so only initialize them
// here if necessary.
if keys.is_none() {
terminal = Some(get_terminal()?);
keys = Some(termion::async_stdin().keys());
}
interactive::prompt_for_variables(
&config,
unbound_variables,
&env_name,
&mut keys.as_mut().unwrap(),
&mut terminal.as_mut().unwrap(),
)
} else {
Err(anyhow!("Running in interactive mode requires a TTY"))
}
} else {
Ok(Some(vec![]))
}
};
// Switch back to the original screen
drop(terminal);
// Flush stdout so the interactive terminal screen is cleared immediately
std::io::stdout().flush().ok();
let additional_vars = additional_vars?;
// `prompt_for_variables` returning None means the user aborted with Ctrl-C and we
// should not send the request
if let Some(additional_vars) = additional_vars {
// Do the final substitution with the user-provided variables
templating::substitute_all(&mut def, &additional_vars);
let mut sp: Option<Spinner> = None;
if is_tty {
sp = Some(Spinner::new(Spinners::Dots, "Sending request...".into()));
}
let res = http::send_request(def, &config).context("Failed sending request")?;
if let Some(s) = sp {
s.stop();
writeln!(writer, "\n")?;
}
let headers = res.headers();
if !args.only_body {
writeln!(writer, "{}\n", res.status())?;
for (name, value) in headers {
let value = value.to_str()?;
writeln!(writer, "{}: {}", name.as_str(), value)?;
}
writeln!(writer)?;
}
let is_json = headers
.get("content-type")
.map(|h| {
let value = h.to_str().unwrap_or("");
value.contains("application/json")
|| value.contains("text/json")
|| value.contains("application/javascript")
})
.unwrap_or(false);
if is_json && is_tty && output_file.is_none() {
// If the content-type header on the response suggests that the response is JSON,
// try to parse it as a generic Value, then pretty-print it with highlighting via
// syntect. If the parsing fails, give up on the pretty-printing and just print the
// raw text response (still with JSON highlighting, if possible)
let body: Value = res.json()?;
let body = to_string_pretty(&body).unwrap_or_else(|_| body.to_string());
let ps = SyntaxSet::load_defaults_newlines();
let syntax = ps.find_syntax_by_extension("json").unwrap();
let ts = ThemeSet::load_defaults();
// If the user has specified no theme in their config file, fall back to a default
// included in syntect. If they specify a name of a default syntect theme, use
// that. Otherwise, treat their provided value as a file path and try to load a
// theme.
let theme: Result<Cow<Theme>, LoadingError> = match config.theme.as_ref() {
None => Ok(Cow::Borrowed(&ts.themes["base16-eighties.dark"])),
Some(theme_file) => ts
.themes
.get(theme_file)
.map(|t| Ok(Cow::Borrowed(t)))
.unwrap_or_else(|| {
let expanded: Cow<str> = shellexpand::tilde(theme_file);
let path: &Path = Path::new(expanded.as_ref());
ThemeSet::get_theme(path).map(Cow::Owned)
}),
};
match theme {
Ok(theme) => {
let mut h = HighlightLines::new(syntax, theme.as_ref());
for line in LinesWithEndings::from(&body) {
let ranges: Vec<(Style, &str)> = h.highlight(line, &ps);
let escaped = as_24_bit_terminal_escaped(&ranges[..], false);
write!(writer, "{}", escaped)?;
}
writeln!(writer)?;
}
Err(e) => {
eprintln!(
"Error: Could not load theme at {}: {}, continuing with no theme",
&config.theme.unwrap(),
e
);
writeln!(writer, "{}", body)?;
}
}
} else {
let body = res.text()?;
writeln!(writer, "{}", body)?;
}
}
}
Ok(())
}
| {
Err(anyhow!("No config file found at `{}`", c.to_string_lossy()))
} | conditional_block |
main.rs | use anyhow::{anyhow, Context};
use atty::Stream;
use rhc::args::Args;
use rhc::config::Config;
use rhc::environment::Environment;
use rhc::files::{get_all_toml_files, load_file};
use rhc::http;
use rhc::interactive;
use rhc::interactive::SelectedValues;
use rhc::keyvalue::KeyValue;
use rhc::request_definition::RequestDefinition;
use rhc::templating;
use serde_json::{to_string_pretty, Value};
use spinners::{Spinner, Spinners};
use std::borrow::Cow;
use std::env;
use std::fs::OpenOptions;
use std::io::{Stdout, Write};
use std::path::Path;
use std::path::PathBuf;
use structopt::StructOpt;
use syntect::easy::HighlightLines;
use syntect::highlighting::{Style, Theme, ThemeSet};
use syntect::parsing::SyntaxSet;
use syntect::util::{as_24_bit_terminal_escaped, LinesWithEndings};
use syntect::LoadingError;
use termion::input::{Keys, TermRead};
use termion::raw::{IntoRawMode, RawTerminal};
use termion::screen::AlternateScreen;
use termion::AsyncReader;
use tui::backend::TermionBackend;
use tui::Terminal;
// use simplelog::{CombinedLogger, WriteLogger, LevelFilter, Config as LogConfig};
// use std::fs::File;
use std::io::{stdout, BufWriter};
fn main() |
type OurTerminal = Terminal<TermionBackend<AlternateScreen<RawTerminal<Stdout>>>>;
/// Set up/create the terminal for use in interactive mode.
fn get_terminal() -> anyhow::Result<OurTerminal> {
let stdout = std::io::stdout().into_raw_mode()?;
let stdout = AlternateScreen::from(stdout);
let backend = TermionBackend::new(stdout);
let t = Terminal::new(backend)?;
Ok(t)
}
fn run() -> anyhow::Result<()> {
// CombinedLogger::init(
// vec![
// WriteLogger::new(LevelFilter::Debug, LogConfig::default(), File::create("rhc.log").unwrap()),
// ]
// ).unwrap();
let args: Args = Args::from_args();
let output_file = args
.output_file
.map(|path_buf| {
OpenOptions::new()
.create_new(true)
.write(true)
.open(path_buf.as_path())
})
.transpose()?;
let writer: Box<dyn std::io::Write> = match &output_file {
Some(f) => Box::new(f),
None => Box::new(stdout()),
};
let mut writer = BufWriter::new(writer);
// If the user specifies a config location, make sure there's actually a file there
args.config.as_ref().map_or(Ok(()), |c| {
if c.is_file() {
Ok(())
} else {
Err(anyhow!("No config file found at `{}`", c.to_string_lossy()))
}
})?;
// Load the config file using this priority:
// 1. The file specified with the --config arg, if present
// 2. $XDG_CONFIG_HOME/rhc/config.toml, if XDG_CONFIG_HOME is defined
// 3. ~/.config/rhc/config.toml, if present
// If none of the above exist, use the default Config.
let raw_config_location: PathBuf = args.config.unwrap_or_else(|| {
match env::var_os("XDG_CONFIG_HOME") {
Some(xdg_config_home) => PathBuf::from(xdg_config_home),
None => PathBuf::from("~/.config"),
}
.join("rhc")
.join("config.toml")
});
let raw_config_location = raw_config_location.to_string_lossy();
let config_location: Cow<str> = shellexpand::tilde(raw_config_location.as_ref());
let config_path = Path::new(config_location.as_ref());
if args.verbose {
writeln!(
stdout(),
"Looking for config file at {}",
config_path.display()
)?;
}
let config = {
if config_path.is_file() {
Config::new(config_path).context(format!(
"Could not load config file at {}",
config_path.to_string_lossy()
))?
} else {
writeln!(
stdout(),
"No config file found at {}, falling back to default config",
config_path.display()
)?;
Config::default()
}
};
let is_tty = atty::is(Stream::Stdout);
// These two are necessary for use in interactive mode; but conversely, when not at an
// interactive shell, trying to create this `Terminal` will cause an error. So they start as
// None, and will be created on-demand if necessary (no request definition file provided, or
// unbound variables exist).
let mut keys: Option<Keys<AsyncReader>> = None;
let mut terminal: Option<OurTerminal> = None;
// If the user specified a request definition file, just use that; otherwise, enter interactive
// mode to allow them to choose a request definition. In either case, we need to keep track of
// the file names for the request definition that's either provided or selected, as well as the
// environment being used (if any), as these are required for the prompt_for_variables
// function.
let result: anyhow::Result<Option<SelectedValues>> = {
match &args.file {
Some(path) => {
let def: RequestDefinition =
load_file(&path, RequestDefinition::new, "request definition")?;
let env_path: Option<PathBuf> = args.environment;
let env: Option<Environment> = env_path
.as_deref()
.map(|path| load_file(&path, Environment::new, "environment"))
.transpose()?;
Ok(Some(SelectedValues { def, env }))
}
None => {
if is_tty {
// If we have to enter interactive mode, check if there is at least one request
// definition file available. If not, there's nothing that can be done, so
// print a warning and exit.
if get_all_toml_files(&config.request_definition_directory).is_empty() {
Err(anyhow!("No TOML files found under {}. Running rhc in interactive mode requres at least one request definition file.", &config.request_definition_directory))
} else {
// `terminal` and `keys` must be None at this point, so just create them
terminal = Some(get_terminal()?);
keys = Some(termion::async_stdin().keys());
let interactive_result = interactive::interactive_mode(
&config,
args.environment.as_deref(),
&mut keys.as_mut().unwrap(),
&mut terminal.as_mut().unwrap(),
)?;
Ok(interactive_result)
}
} else {
Err(anyhow!("Running in interactive mode requires a TTY"))
}
}
}
};
let result = result?;
// `interactive_mode` will return None if they Ctrl-C out without selecting anything.
// if let Some((mut request_definition, mut vars)) = result {
if let Some(SelectedValues { mut def, env }) = result {
// Split up the variables and environment name immediately to avoid difficulties with borrowing
// `env` later on
let (mut vars, env_name): (Vec<KeyValue>, String) =
env.map_or((vec![], "<none>".to_string()), |e| (e.variables, e.name));
vars.sort();
if let Some(bindings) = args.binding {
for binding in bindings {
match vars.binary_search_by(|item| item.name.cmp(&binding.name)) {
Ok(index) => {
// If variable is already present, overwrite it with the one passed on the
// command line (these have the highest priority)
vars.remove(index);
vars.insert(index, binding);
}
Err(index) => vars.insert(index, binding),
};
}
}
// Substitute the variables that we have at this point into all the places in the
// RequestDefinition where they can be used (URL, headers, body, query string)
templating::substitute_all(&mut def, &vars);
// If any unbound variables remain, prompt the user to enter them interactively
let unbound_variables = templating::list_unbound_variables(&def);
let additional_vars: anyhow::Result<Option<Vec<KeyValue>>> = {
if !unbound_variables.is_empty() {
if is_tty {
// `terminal` and `keys` could have been initialized above, so only initialize them
// here if necessary.
if keys.is_none() {
terminal = Some(get_terminal()?);
keys = Some(termion::async_stdin().keys());
}
interactive::prompt_for_variables(
&config,
unbound_variables,
&env_name,
&mut keys.as_mut().unwrap(),
&mut terminal.as_mut().unwrap(),
)
} else {
Err(anyhow!("Running in interactive mode requires a TTY"))
}
} else {
Ok(Some(vec![]))
}
};
// Switch back to the original screen
drop(terminal);
// Flush stdout so the interactive terminal screen is cleared immediately
std::io::stdout().flush().ok();
let additional_vars = additional_vars?;
// `prompt_for_variables` returning None means the user aborted with Ctrl-C and we
// should not send the request
if let Some(additional_vars) = additional_vars {
// Do the final substitution with the user-provided variables
templating::substitute_all(&mut def, &additional_vars);
let mut sp: Option<Spinner> = None;
if is_tty {
sp = Some(Spinner::new(Spinners::Dots, "Sending request...".into()));
}
let res = http::send_request(def, &config).context("Failed sending request")?;
if let Some(s) = sp {
s.stop();
writeln!(writer, "\n")?;
}
let headers = res.headers();
if !args.only_body {
writeln!(writer, "{}\n", res.status())?;
for (name, value) in headers {
let value = value.to_str()?;
writeln!(writer, "{}: {}", name.as_str(), value)?;
}
writeln!(writer)?;
}
let is_json = headers
.get("content-type")
.map(|h| {
let value = h.to_str().unwrap_or("");
value.contains("application/json")
|| value.contains("text/json")
|| value.contains("application/javascript")
})
.unwrap_or(false);
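// E.g. "application/json; charset=utf-8" matches the first `contains` check,
// so parameters on the header do not defeat the detection; a missing or
// non-UTF-8 header value falls through to `false`.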
if is_json && is_tty && output_file.is_none() {
// If the content-type header on the response suggests that the response is JSON,
// try to parse it as a generic Value, then pretty-print it with highlighting via
// syntect. If the parsing fails, give up on the pretty-printing and just print the
// raw text response (still with JSON highlighting, if possible)
let body: Value = res.json()?;
let body = to_string_pretty(&body).unwrap_or_else(|_| body.to_string());
let ps = SyntaxSet::load_defaults_newlines();
let syntax = ps.find_syntax_by_extension("json").unwrap();
let ts = ThemeSet::load_defaults();
// If the user has specified no theme in their config file, fall back to a default
// included in syntect. If they specify a name of a default syntect theme, use
// that. Otherwise, treat their provided value as a file path and try to load a
// theme.
let theme: Result<Cow<Theme>, LoadingError> = match config.theme.as_ref() {
None => Ok(Cow::Borrowed(&ts.themes["base16-eighties.dark"])),
Some(theme_file) => ts
.themes
.get(theme_file)
.map(|t| Ok(Cow::Borrowed(t)))
.unwrap_or_else(|| {
let expanded: Cow<str> = shellexpand::tilde(theme_file);
let path: &Path = Path::new(expanded.as_ref());
ThemeSet::get_theme(path).map(Cow::Owned)
}),
};
match theme {
Ok(theme) => {
let mut h = HighlightLines::new(syntax, theme.as_ref());
for line in LinesWithEndings::from(&body) {
let ranges: Vec<(Style, &str)> = h.highlight(line, &ps);
let escaped = as_24_bit_terminal_escaped(&ranges[..], false);
write!(writer, "{}", escaped)?;
}
writeln!(writer)?;
}
Err(e) => {
eprintln!(
"Error: Could not load theme at {}: {}, continuing with no theme",
&config.theme.unwrap(),
e
);
writeln!(writer, "{}", body)?;
}
}
} else {
let body = res.text()?;
writeln!(writer, "{}", body)?;
}
}
}
Ok(())
}
| {
if let Err(e) = run() {
// If an error was raised during an interactive mode call while the alternate screen is in
// use, we have to flush stdout here or the user will not see the error message.
std::io::stdout().flush().unwrap();
// Seems like this initial newline is necessary or the error will be printed with an offset
eprintln!("\nError: {:#}", e);
std::process::exit(1);
}
} | identifier_body |
jwt.rs | , Permission, Result, User};
use crate::application::Config;
const SUPPORTED_ALGORITHMS: [Algorithm; 3] = [Algorithm::RS256, Algorithm::RS384, Algorithm::RS512];
/// JSON Web Key. A cryptographic key used to validate JSON Web Tokens.
#[derive(Debug, PartialEq, Deserialize, Serialize)]
struct Key {
alg: Algorithm,
kty: String,
r#use: String,
n: String,
e: String,
kid: String,
x5t: String,
x5c: Vec<String>,
}
/// JSON Web Key Set. A collection of JSON Web Keys used by a given service.
#[derive(Debug, PartialEq, Deserialize, Serialize)]
struct Jwks {
keys: Vec<Key>,
}
impl Jwks {
/// Looks up the key by id, or returns None if it's not present in the key
/// set.
pub fn find(&self, key_id: &str) -> Option<&Key> {
self.keys.iter().find(|key| key.kid == key_id)
}
}
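// Lookup sketch (the `kid` values are assumptions mirroring the unit tests
// below):
//
//     jwks.find("test-key")    // -> Some(&Key { .. })
//     jwks.find("unknown-kid") // -> None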
/// A set of values encoded in a JWT that the issuer claims to be true.
#[derive(Debug, Deserialize)]
struct Claims {
/// Audience (who or what the token is intended for). E.g. "https://api.xsnippet.org".
#[allow(unused)]
aud: String,
/// Issuer (who created and signed this token). E.g. "https://xsnippet.eu.auth0.com/".
#[allow(unused)]
iss: String,
/// Subject (whom the token refers to). E.g. "[email protected]".
sub: String,
/// Expiration time (seconds since Unix epoch).
#[allow(unused)]
exp: usize,
/// Subject permissions (e.g. vec!["import"])
permissions: Vec<Permission>,
}
impl Jwks {
/// Returns a Jwks retrieved from the location identified by the given URI.
pub fn from_uri(uri: &str) -> Result<Self> {
let load_err = Error::Configuration(format!("Can't load Jwks state from {}", uri));
let json = match uri.split_once("://") {
Some(("https", _)) => reqwest::blocking::get(uri)
.and_then(|response| response.text())
.map_err(|_| load_err)?,
Some(("file", path)) => std::fs::read_to_string(path).map_err(|_| load_err)?,
_ => {
return Err(Error::Configuration(
"URI scheme is not supported or URI is invalid".to_string(),
))
}
};
let jwks = serde_json::from_slice::<Jwks>(json.as_bytes())
.map_err(|_| Error::Configuration("Can't parse Jwks state as JSON".to_string()))?;
if !jwks.keys.is_empty() {
Ok(jwks)
} else {
Err(Error::Configuration("Jwks is empty".to_string()))
}
}
}
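// Usage sketch (the URIs are assumed example values):
//
//     let remote = Jwks::from_uri("https://xsnippet.eu.auth0.com/.well-known/jwks.json")?;
//     let local = Jwks::from_uri("file:///etc/rhc/jwks.json")?;
//
// Any other scheme, or a URI without "://", yields Error::Configuration, as
// does an empty key set or unparseable JSON.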
/// A facade for validation of JWT values.
pub struct JwtValidator {
jwks: Jwks,
validation: Validation,
}
impl JwtValidator {
/// Returns a new JwtValidator constructed from the given parameters.
///
/// # Arguments
///
/// * `audience` - The intended recipient of the tokens (e.g. "https://api.xsnippet.org")
/// * `issuer` - The principal that issues the tokens (e.g. "https://xsnippet.eu.auth0.com/")
/// * `jwks_uri` - The location of JWT Key Set with keys used to validate the tokens (e.g. "https://xsnippet.eu.auth0.com/.well-known/jwks.json")
pub fn new(audience: String, issuer: String, jwks_uri: &str) -> Result<Self> {
let jwks = Jwks::from_uri(jwks_uri)?;
// The following token properties are going to be verified:
// * the expiration time
// * the issuer
// * the intended audience
let validation = Validation {
algorithms: SUPPORTED_ALGORITHMS.to_vec(),
aud: Some(std::iter::once(audience).collect()),
iss: Some(issuer),
..Validation::default()
};
Ok(JwtValidator { jwks, validation })
}
/// Returns a new JwtValidator constructed from the application config.
pub fn from_config(config: &Config) -> Result<Self> {
JwtValidator::new(
config.jwt_audience.to_owned(),
config.jwt_issuer.to_owned(),
&config.jwt_jwks_uri,
)
}
}
impl AuthValidator for JwtValidator {
fn validate(&self, token: &str) -> Result<User> {
let header = | .kid
.as_ref()
.ok_or_else(|| Error::Input("Token does not specify the key id".to_string()))?;
let key = self
.jwks
.find(key_id)
.filter(|key| key.alg == header.alg && key.r#use == "sig")
.ok_or_else(|| {
Error::Configuration(format!("Signing key {:?} can't be found", key_id))
})?;
jsonwebtoken::DecodingKey::from_rsa_components(&key.n, &key.e)
}
alg => return Err(Error::Input(format!("Unsupported algorithm: {:?}", alg))),
};
match jsonwebtoken::decode::<Claims>(token, &key, &self.validation) {
Ok(data) => Ok(User::Authenticated {
name: data.claims.sub,
permissions: data.claims.permissions,
}),
Err(err) => Err(Error::Validation(err.to_string())),
}
}
}
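// End-to-end sketch, not part of the original source; the audience, issuer
// and JWKS URI are assumed example values matching the doc comments above.
#[allow(dead_code)]
fn example_validate(token: &str) -> Result<User> {
    let validator = JwtValidator::new(
        "https://api.xsnippet.org".to_string(),
        "https://xsnippet.eu.auth0.com/".to_string(),
        "https://xsnippet.eu.auth0.com/.well-known/jwks.json",
    )?;
    // On success this yields User::Authenticated { name, permissions } taken
    // from the token's `sub` and `permissions` claims.
    validator.validate(token)
}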
#[cfg(test)]
mod tests {
use std::io::BufWriter;
use std::{io::Write, path::Path};
use super::*;
const KID: &str = "test-key";
const AUDIENCE: &str = "xsnippet-api-tests-aud";
const ISSUER: &str = "xsnippet-api-tests-iss";
const N: &str = "qN5dCh1M2RA3aF6ZH4IRXIQYKvaWRG59F7lpQIUyUtkDiURmVem7x86EuGmmTmQPSRhx6fL4jF0GBkduAhYnFn_A8T6WfQXzXyI2cyqXsKaTkDMKvl7nHnGttQIuG8W2m7H74pklsPKxp0rPJ0zjV1N_lr_nZG_dayJhtEnChHpScaoTwqMitcOJfVacNxcRbTSDy1IZY5AW0TLdATUmc-ddJLQXxSV7bMur_4S1MP45tHrttChgtnmPpL3q4MHZjHR8aNRYPurkJkPwY0t6nrERTPk9DE4Mk5NtNzqZRBY7eT94pmodVUBGTVYhh7bFDGB26oyTk8_5aedO6syB6w==";
const E: &str = "AQAB";
const USER_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6InRlc3Qta2V5In0.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const EXPIRED_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6InRlc3Qta2V5In0.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjE2MTY3NzIwMDYsInBlcm1pc3Npb25zIjpbXX0.AkC-xzeJ7OXi5fN-DXs43vKAjgsep5Cwx2e1c3hbv1jPpJVnwTD2M_A8Bphd8-mzMsuO017a_rZQIj30dzt3I5Z730Z4zHA_xPV4nl_6zsGzCYTwecT1qmOhTuiyP1PhdgveVQz-ImNDbAzD80PTUwW8Bv-r4R1wyrc5lRtj2ofF1h2_rqxWtRbQwvqmm_J4K8oklYWOrBPNFXJVOGVcji97LelBY6llWbfVUO2unNZBA7MbJLDMtuQHMIRSHn1PXSLA4MJbxOzT-kZC01OlpQWtGstxnITHc34ZDVe5M0v092PSe5J0o3_OBVCR405-rPK_EjLD8saPE3SK7X0Cfw";
const INVALID_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6InRlc3Qta2V5In0.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p3QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const INVALID_HEADER_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJraWQiOiJ0ZXN0LWtleSJ9.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const NO_KID_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const UNKNOWN_KID_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6ImVnZ3MifQ.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const UNSUPPORTED_ALG_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiIsImtpZCI6InRlc3Qta2V5In0.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
fn with_jwks<F>(test: F)
where
F: FnOnce(&Path),
{
let jwks = Jwks {
keys: vec![Key {
alg: Algorithm::RS256,
kty: String::from("RSA"),
r#use: String::from("sig"),
n: String::from(N),
e: String::from(E),
kid: String::from(KID),
x5t: String::default(),
x5c: Vec::default(),
}],
};
let mut file = tempfile::NamedTempFile::new().unwrap();
{
let mut writer = BufWriter::new(&mut file);
serde_json::to_writer(&mut writer, &jwks).unwrap();
}
test(file.path());
}
#[test]
fn jwks_from_uri() {
with_jwks(|path| {
let jwks = Jwks::from_uri(&(String::from("file://") + path.to_str().unwrap())).unwrap();
let expected_key = Key {
alg: Algorithm::RS256,
kty: String::from("RSA"),
r#use: String::from("sig"),
n: String::from(N),
e: String::from(E),
kid: String::from(KID),
x5t: String::default(),
x5c: Vec::default(),
};
assert_eq!(jwks.find(KID), Some(&expected_key));
assert_eq!(jwks.find("spam"), None);
});
}
#[test]
fn jwks_from_uri_empty() {
let mut file = tempfile::NamedTempFile::new().unwrap();
{
let mut writer = BufWriter::new(&mut file);
serde_json::to_writer(&mut writer, &Jwks { keys: Vec::new() }).unwrap();
}
match Jwks::from_uri(&(String::from("file://") + file.path().to_str().unwrap())) {
Err(Error::Configuration(msg)) => assert!(msg.contains("Jwks is empty")),
_ => panic!("unexpected result"),
};
}
#[test]
fn jwks_from_uri_invalid_json() {
let mut file = tempfile::NamedTempFile::new().unwrap();
{
let mut writer = BufWriter::new(&mut file);
writer.write_all(b"[]").unwrap();
}
match Jwks::from_uri(&(String::from("file://") + file.path().to_str().unwrap())) {
Err(Error::Configuration(msg)) => {
assert!(msg.contains("Can't parse Jwks state as JSON"))
}
_ => panic!("unexpected result"),
};
}
#[test]
fn jwks_from_uri_invalid_uri() {
match Jwks::from_uri("ftp://foo/bar") {
Err(Error::Configuration(msg)) => assert!(msg.contains("URI is invalid")),
_ => panic!("unexpected result"),
};
match Jwks::from_uri("http:/foo/bar") {
Err(Error::Configuration(msg)) => assert!(msg.contains("URI is invalid")),
_ => panic!("unexpected result"),
};
}
#[test]
fn jwks_from_uri_failed_load() {
match Jwks::from_uri(&String::from("file://spam")) {
Err(Error::Configuration(msg)) => assert!(msg.contains("Can't load Jwks state")),
_ => panic!("unexpected result"),
};
}
#[test]
fn validate() {
with_jwks(|path| {
let validator = JwtValidator::new(
AUDIENCE.to_string(),
ISSUER.to_string(),
&(String::from("file://") + path.to_str().unwrap()),
)
.unwrap();
assert_eq!(
validator.validate(USER_TOKEN).unwrap(),
User::Authenticated {
name: String::from("user"),
permissions: Vec::new()
}
);
match validator.validate(EXPIRED_TOKEN) {
Err(Error::Validation(msg)) => assert!(msg.contains("ExpiredSignature")),
_ => panic!("unexpected result"),
};
match validator.validate(INVALID_TOKEN) {
Err(Error::Validation(msg)) => assert!(msg.contains("InvalidSignature")),
_ => panic!("unexpected result"),
};
match validator.validate(INVALID_HEADER_TOKEN) {
Err(Error::Input(msg)) => assert!(msg.contains("missing field `alg`")),
_ => panic!("unexpected result"),
};
match validator.validate(NO_KID_TOKEN) {
Err(Error::Input(msg)) => {
assert!(msg.contains("Token does not specify the key id"))
}
_ => panic!("unexpected result"),
};
match validator.validate(UNKNOWN_KID_TOKEN) {
Err(Error::Configuration(msg)) => {
assert!(msg.contains("Signing key \"eggs\" can't be found"))
}
_ => panic!("unexpected result"),
};
match validator.validate(UNSUPPORTED_ALG_TOKEN) {
Err(Error::Input(msg)) => {
assert!(msg.contains("Unsupported algorithm: HS256"))
}
_ => panic!("unexpected result"),
};
});
}
#[test]
fn validate_invalid_params() {
with_jwks(|path| {
let validator = JwtValidator::new(
"spam".to_string(),
ISSUER.to_string(),
&(String::from("file://") + path.to_str().unwrap()),
)
.unwrap();
match validator.validate(USER_TOKEN) {
Err(Error::Validation(msg)) => assert!(msg.contains("InvalidAudience")),
_ => panic!("unexpected result"),
};
let validator = JwtValidator::new(
AUDIENCE.to_string(),
"eggs".to_string(),
&(String::from("file://") + path.to_str().unwrap()),
)
.unwrap();
match validator.validate(USER_TOKEN) {
Err(Error::Validation(msg)) => assert!( | jsonwebtoken::decode_header(token).map_err(|err| Error::Input(err.to_string()))?;
let key = match header.alg {
Algorithm::RS256 | Algorithm::RS384 | Algorithm::RS512 => {
let key_id = header | random_line_split |
jwt.rs | Permission, Result, User};
use crate::application::Config;
const SUPPORTED_ALGORITHMS: [Algorithm; 3] = [Algorithm::RS256, Algorithm::RS384, Algorithm::RS512];
/// JSON Web Key. A cryptographic key used to validate JSON Web Tokens.
#[derive(Debug, PartialEq, Deserialize, Serialize)]
struct Key {
alg: Algorithm,
kty: String,
r#use: String,
n: String,
e: String,
kid: String,
x5t: String,
x5c: Vec<String>,
}
/// JSON Web Key Set. A collection of JSON Web Keys used by a given service.
#[derive(Debug, PartialEq, Deserialize, Serialize)]
struct Jwks {
keys: Vec<Key>,
}
impl Jwks {
/// Looks up the key by id, or returns None if it's not present in the key
/// set.
pub fn | (&self, key_id: &str) -> Option<&Key> {
self.keys.iter().find(|key| key.kid == key_id)
}
}
/// A set of values encoded in a JWT that the issuer claims to be true.
#[derive(Debug, Deserialize)]
struct Claims {
/// Audience (who or what the token is intended for). E.g. "https://api.xsnippet.org".
#[allow(unused)]
aud: String,
/// Issuer (who created and signed this token). E.g. "https://xsnippet.eu.auth0.com/".
#[allow(unused)]
iss: String,
/// Subject (whom the token refers to). E.g. "[email protected]".
sub: String,
/// Expiration time (seconds since Unix epoch).
#[allow(unused)]
exp: usize,
/// Subject permissions (e.g. vec!["import"])
permissions: Vec<Permission>,
}
impl Jwks {
/// Returns a Jwks retrieved from the location identified by the given URI.
pub fn from_uri(uri: &str) -> Result<Self> {
let load_err = Error::Configuration(format!("Can't load Jwks state from {}", uri));
let json = match uri.split_once("://") {
Some(("https", _)) => reqwest::blocking::get(uri)
.and_then(|response| response.text())
.map_err(|_| load_err)?,
Some(("file", path)) => std::fs::read_to_string(path).map_err(|_| load_err)?,
_ => {
return Err(Error::Configuration(
"URI scheme is not supported or URI is invalid".to_string(),
))
}
};
let jwks = serde_json::from_slice::<Jwks>(json.as_bytes())
.map_err(|_| Error::Configuration("Can't parse Jwks state as JSON".to_string()))?;
if !jwks.keys.is_empty() {
Ok(jwks)
} else {
Err(Error::Configuration("Jwks is empty".to_string()))
}
}
}
/// A facade for validation of JWT values.
pub struct JwtValidator {
jwks: Jwks,
validation: Validation,
}
impl JwtValidator {
/// Returns a new JwtValidator constructed from the given parameters.
///
/// # Arguments
///
/// * `audience` - The intended recipient of the tokens (e.g. "https://api.xsnippet.org")
/// * `issuer` - The principal that issues the tokens (e.g. "https://xsnippet.eu.auth0.com/")
/// * `jwks_uri` - The location of JWT Key Set with keys used to validate the tokens (e.g. "https://xsnippet.eu.auth0.com/.well-known/jwks.json")
pub fn new(audience: String, issuer: String, jwks_uri: &str) -> Result<Self> {
let jwks = Jwks::from_uri(jwks_uri)?;
// The following token properties are going to be verified:
// * the expiration time
// * the issuer
// * the intended audience
let validation = Validation {
algorithms: SUPPORTED_ALGORITHMS.to_vec(),
aud: Some(std::iter::once(audience).collect()),
iss: Some(issuer),
..Validation::default()
};
Ok(JwtValidator { jwks, validation })
}
/// Returns a new JwtValidator constructed from the application config.
pub fn from_config(config: &Config) -> Result<Self> {
JwtValidator::new(
config.jwt_audience.to_owned(),
config.jwt_issuer.to_owned(),
&config.jwt_jwks_uri,
)
}
}
impl AuthValidator for JwtValidator {
fn validate(&self, token: &str) -> Result<User> {
let header =
jsonwebtoken::decode_header(token).map_err(|err| Error::Input(err.to_string()))?;
let key = match header.alg {
Algorithm::RS256 | Algorithm::RS384 | Algorithm::RS512 => {
let key_id = header
.kid
.as_ref()
.ok_or_else(|| Error::Input("Token does not specify the key id".to_string()))?;
let key = self
.jwks
.find(key_id)
.filter(|key| key.alg == header.alg && key.r#use == "sig")
.ok_or_else(|| {
Error::Configuration(format!("Signing key {:?} can't be found", key_id))
})?;
jsonwebtoken::DecodingKey::from_rsa_components(&key.n, &key.e)
}
alg => return Err(Error::Input(format!("Unsupported algorithm: {:?}", alg))),
};
match jsonwebtoken::decode::<Claims>(token, &key, &self.validation) {
Ok(data) => Ok(User::Authenticated {
name: data.claims.sub,
permissions: data.claims.permissions,
}),
Err(err) => Err(Error::Validation(err.to_string())),
}
}
}
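// Flow recap (sketch): `validate` decodes the header, selects an RSA signing
// key from the JWKS by `kid`, then runs jsonwebtoken::decode::<Claims>()
// against the Validation built in `new` (exp, iss, aud). Malformed input maps
// to Error::Input, an unknown key to Error::Configuration, and a failed check
// to Error::Validation.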
#[cfg(test)]
mod tests {
use std::io::BufWriter;
use std::{io::Write, path::Path};
use super::*;
const KID: &str = "test-key";
const AUDIENCE: &str = "xsnippet-api-tests-aud";
const ISSUER: &str = "xsnippet-api-tests-iss";
const N: &str = "qN5dCh1M2RA3aF6ZH4IRXIQYKvaWRG59F7lpQIUyUtkDiURmVem7x86EuGmmTmQPSRhx6fL4jF0GBkduAhYnFn_A8T6WfQXzXyI2cyqXsKaTkDMKvl7nHnGttQIuG8W2m7H74pklsPKxp0rPJ0zjV1N_lr_nZG_dayJhtEnChHpScaoTwqMitcOJfVacNxcRbTSDy1IZY5AW0TLdATUmc-ddJLQXxSV7bMur_4S1MP45tHrttChgtnmPpL3q4MHZjHR8aNRYPurkJkPwY0t6nrERTPk9DE4Mk5NtNzqZRBY7eT94pmodVUBGTVYhh7bFDGB26oyTk8_5aedO6syB6w==";
const E: &str = "AQAB";
const USER_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6InRlc3Qta2V5In0.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const EXPIRED_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6InRlc3Qta2V5In0.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjE2MTY3NzIwMDYsInBlcm1pc3Npb25zIjpbXX0.AkC-xzeJ7OXi5fN-DXs43vKAjgsep5Cwx2e1c3hbv1jPpJVnwTD2M_A8Bphd8-mzMsuO017a_rZQIj30dzt3I5Z730Z4zHA_xPV4nl_6zsGzCYTwecT1qmOhTuiyP1PhdgveVQz-ImNDbAzD80PTUwW8Bv-r4R1wyrc5lRtj2ofF1h2_rqxWtRbQwvqmm_J4K8oklYWOrBPNFXJVOGVcji97LelBY6llWbfVUO2unNZBA7MbJLDMtuQHMIRSHn1PXSLA4MJbxOzT-kZC01OlpQWtGstxnITHc34ZDVe5M0v092PSe5J0o3_OBVCR405-rPK_EjLD8saPE3SK7X0Cfw";
const INVALID_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6InRlc3Qta2V5In0.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p3QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const INVALID_HEADER_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJraWQiOiJ0ZXN0LWtleSJ9.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const NO_KID_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const UNKNOWN_KID_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6ImVnZ3MifQ.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const UNSUPPORTED_ALG_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiIsImtpZCI6InRlc3Qta2V5In0.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
fn with_jwks<F>(test: F)
where
F: FnOnce(&Path),
{
let jwks = Jwks {
keys: vec![Key {
alg: Algorithm::RS256,
kty: String::from("RSA"),
r#use: String::from("sig"),
n: String::from(N),
e: String::from(E),
kid: String::from(KID),
x5t: String::default(),
x5c: Vec::default(),
}],
};
let mut file = tempfile::NamedTempFile::new().unwrap();
{
let mut writer = BufWriter::new(&mut file);
serde_json::to_writer(&mut writer, &jwks).unwrap();
}
test(file.path());
}
#[test]
fn jwks_from_uri() {
with_jwks(|path| {
let jwks = Jwks::from_uri(&(String::from("file://") + path.to_str().unwrap())).unwrap();
let expected_key = Key {
alg: Algorithm::RS256,
kty: String::from("RSA"),
r#use: String::from("sig"),
n: String::from(N),
e: String::from(E),
kid: String::from(KID),
x5t: String::default(),
x5c: Vec::default(),
};
assert_eq!(jwks.find(KID), Some(&expected_key));
assert_eq!(jwks.find("spam"), None);
});
}
#[test]
fn jwks_from_uri_empty() {
let mut file = tempfile::NamedTempFile::new().unwrap();
{
let mut writer = BufWriter::new(&mut file);
serde_json::to_writer(&mut writer, &Jwks { keys: Vec::new() }).unwrap();
}
match Jwks::from_uri(&(String::from("file://") + file.path().to_str().unwrap())) {
Err(Error::Configuration(msg)) => assert!(msg.contains("Jwks is empty")),
_ => panic!("unexpected result"),
};
}
#[test]
fn jwks_from_uri_invalid_json() {
let mut file = tempfile::NamedTempFile::new().unwrap();
{
let mut writer = BufWriter::new(&mut file);
writer.write_all(b"[]").unwrap();
}
match Jwks::from_uri(&(String::from("file://") + file.path().to_str().unwrap())) {
Err(Error::Configuration(msg)) => {
assert!(msg.contains("Can't parse Jwks state as JSON"))
}
_ => panic!("unexpected result"),
};
}
#[test]
fn jwks_from_uri_invalid_uri() {
match Jwks::from_uri("ftp://foo/bar") {
Err(Error::Configuration(msg)) => assert!(msg.contains("URI is invalid")),
_ => panic!("unexpected result"),
};
match Jwks::from_uri("http:/foo/bar") {
Err(Error::Configuration(msg)) => assert!(msg.contains("URI is invalid")),
_ => panic!("unexpected result"),
};
}
#[test]
fn jwks_from_uri_failed_load() {
match Jwks::from_uri(&String::from("file://spam")) {
Err(Error::Configuration(msg)) => assert!(msg.contains("Can't load Jwks state")),
_ => panic!("unexpected result"),
};
}
#[test]
fn validate() {
with_jwks(|path| {
let validator = JwtValidator::new(
AUDIENCE.to_string(),
ISSUER.to_string(),
&(String::from("file://") + path.to_str().unwrap()),
)
.unwrap();
assert_eq!(
validator.validate(USER_TOKEN).unwrap(),
User::Authenticated {
name: String::from("user"),
permissions: Vec::new()
}
);
match validator.validate(EXPIRED_TOKEN) {
Err(Error::Validation(msg)) => assert!(msg.contains("ExpiredSignature")),
_ => panic!("unexpected result"),
};
match validator.validate(INVALID_TOKEN) {
Err(Error::Validation(msg)) => assert!(msg.contains("InvalidSignature")),
_ => panic!("unexpected result"),
};
match validator.validate(INVALID_HEADER_TOKEN) {
Err(Error::Input(msg)) => assert!(msg.contains("missing field `alg`")),
_ => panic!("unexpected result"),
};
match validator.validate(NO_KID_TOKEN) {
Err(Error::Input(msg)) => {
assert!(msg.contains("Token does not specify the key id"))
}
_ => panic!("unexpected result"),
};
match validator.validate(UNKNOWN_KID_TOKEN) {
Err(Error::Configuration(msg)) => {
assert!(msg.contains("Signing key \"eggs\" can't be found"))
}
_ => panic!("unexpected result"),
};
match validator.validate(UNSUPPORTED_ALG_TOKEN) {
Err(Error::Input(msg)) => {
assert!(msg.contains("Unsupported algorithm: HS256"))
}
_ => panic!("unexpected result"),
};
});
}
#[test]
fn validate_invalid_params() {
with_jwks(|path| {
let validator = JwtValidator::new(
"spam".to_string(),
ISSUER.to_string(),
&(String::from("file://") + path.to_str().unwrap()),
)
.unwrap();
match validator.validate(USER_TOKEN) {
Err(Error::Validation(msg)) => assert!(msg.contains("InvalidAudience")),
_ => panic!("unexpected result"),
};
let validator = JwtValidator::new(
AUDIENCE.to_string(),
"eggs".to_string(),
&(String::from("file://") + path.to_str().unwrap()),
)
.unwrap();
match validator.validate(USER_TOKEN) {
Err(Error::Validation(msg)) => | find | identifier_name |
jwt.rs | Permission, Result, User};
use crate::application::Config;
const SUPPORTED_ALGORITHMS: [Algorithm; 3] = [Algorithm::RS256, Algorithm::RS384, Algorithm::RS512];
/// JSON Web Key. A cryptographic key used to validate JSON Web Tokens.
#[derive(Debug, PartialEq, Deserialize, Serialize)]
struct Key {
alg: Algorithm,
kty: String,
r#use: String,
n: String,
e: String,
kid: String,
x5t: String,
x5c: Vec<String>,
}
/// JSON Web Key Set. A collection of JSON Web Keys used by a given service.
#[derive(Debug, PartialEq, Deserialize, Serialize)]
struct Jwks {
keys: Vec<Key>,
}
impl Jwks {
/// Looks up the key by id, or returns None if it's not present in the key
/// set.
pub fn find(&self, key_id: &str) -> Option<&Key> {
self.keys.iter().find(|key| key.kid == key_id)
}
}
/// A set of values encoded in a JWT that the issuer claims to be true.
#[derive(Debug, Deserialize)]
struct Claims {
/// Audience (who or what the token is intended for). E.g. "https://api.xsnippet.org".
#[allow(unused)]
aud: String,
/// Issuer (who created and signed this token). E.g. "https://xsnippet.eu.auth0.com/".
#[allow(unused)]
iss: String,
/// Subject (whom the token refers to). E.g. "[email protected]".
sub: String,
/// Expiration time (seconds since Unix epoch).
#[allow(unused)]
exp: usize,
/// Subject permissions (e.g. vec!["import"])
permissions: Vec<Permission>,
}
impl Jwks {
/// Returns a Jwks retrieved from the location identified by the given URI.
pub fn from_uri(uri: &str) -> Result<Self> {
let load_err = Error::Configuration(format!("Can't load Jwks state from {}", uri));
let json = match uri.split_once("://") {
Some(("https", _)) => reqwest::blocking::get(uri)
.and_then(|response| response.text())
.map_err(|_| load_err)?,
Some(("file", path)) => std::fs::read_to_string(path).map_err(|_| load_err)?,
_ => {
return Err(Error::Configuration(
"URI scheme is not supported or URI is invalid".to_string(),
))
}
};
let jwks = serde_json::from_slice::<Jwks>(json.as_bytes())
.map_err(|_| Error::Configuration("Can't parse Jwks state as JSON".to_string()))?;
if !jwks.keys.is_empty() {
Ok(jwks)
} else {
Err(Error::Configuration("Jwks is empty".to_string()))
}
}
}
/// A facade for validation of JWT values.
pub struct JwtValidator {
jwks: Jwks,
validation: Validation,
}
impl JwtValidator {
/// Returns a new JwtValidator constructed from the given parameters.
///
/// # Arguments
///
/// * `audience` - The intended recipient of the tokens (e.g. "https://api.xsnippet.org")
/// * `issuer` - The principal that issues the tokens (e.g. "https://xsnippet.eu.auth0.com/")
/// * `jwks_uri` - The location of JWT Key Set with keys used to validate the tokens (e.g. "https://xsnippet.eu.auth0.com/.well-known/jwks.json")
pub fn new(audience: String, issuer: String, jwks_uri: &str) -> Result<Self> {
let jwks = Jwks::from_uri(jwks_uri)?;
// The following token properties are going to be verified:
// * the expiration time
// * the issuer
// * the intended audience
let validation = Validation {
algorithms: SUPPORTED_ALGORITHMS.to_vec(),
aud: Some(std::iter::once(audience).collect()),
iss: Some(issuer),
..Validation::default()
};
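// Note (based on jsonwebtoken v7 defaults, stated here as an assumption):
// `..Validation::default()` keeps `validate_exp` enabled, which is what makes
// the expiration check listed above effective.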
Ok(JwtValidator { jwks, validation })
}
/// Returns a new JwtValidator constructed from the application config.
pub fn from_config(config: &Config) -> Result<Self> {
JwtValidator::new(
config.jwt_audience.to_owned(),
config.jwt_issuer.to_owned(),
&config.jwt_jwks_uri,
)
}
}
impl AuthValidator for JwtValidator {
fn validate(&self, token: &str) -> Result<User> | }
alg => return Err(Error::Input(format!("Unsupported algorithm: {:?}", alg))),
};
match jsonwebtoken::decode::<Claims>(token, &key, &self.validation) {
Ok(data) => Ok(User::Authenticated {
name: data.claims.sub,
permissions: data.claims.permissions,
}),
Err(err) => Err(Error::Validation(err.to_string())),
}
}
}
#[cfg(test)]
mod tests {
use std::io::BufWriter;
use std::{io::Write, path::Path};
use super::*;
const KID: &str = "test-key";
const AUDIENCE: &str = "xsnippet-api-tests-aud";
const ISSUER: &str = "xsnippet-api-tests-iss";
const N: &str = "qN5dCh1M2RA3aF6ZH4IRXIQYKvaWRG59F7lpQIUyUtkDiURmVem7x86EuGmmTmQPSRhx6fL4jF0GBkduAhYnFn_A8T6WfQXzXyI2cyqXsKaTkDMKvl7nHnGttQIuG8W2m7H74pklsPKxp0rPJ0zjV1N_lr_nZG_dayJhtEnChHpScaoTwqMitcOJfVacNxcRbTSDy1IZY5AW0TLdATUmc-ddJLQXxSV7bMur_4S1MP45tHrttChgtnmPpL3q4MHZjHR8aNRYPurkJkPwY0t6nrERTPk9DE4Mk5NtNzqZRBY7eT94pmodVUBGTVYhh7bFDGB26oyTk8_5aedO6syB6w==";
const E: &str = "AQAB";
const USER_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6InRlc3Qta2V5In0.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const EXPIRED_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6InRlc3Qta2V5In0.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjE2MTY3NzIwMDYsInBlcm1pc3Npb25zIjpbXX0.AkC-xzeJ7OXi5fN-DXs43vKAjgsep5Cwx2e1c3hbv1jPpJVnwTD2M_A8Bphd8-mzMsuO017a_rZQIj30dzt3I5Z730Z4zHA_xPV4nl_6zsGzCYTwecT1qmOhTuiyP1PhdgveVQz-ImNDbAzD80PTUwW8Bv-r4R1wyrc5lRtj2ofF1h2_rqxWtRbQwvqmm_J4K8oklYWOrBPNFXJVOGVcji97LelBY6llWbfVUO2unNZBA7MbJLDMtuQHMIRSHn1PXSLA4MJbxOzT-kZC01OlpQWtGstxnITHc34ZDVe5M0v092PSe5J0o3_OBVCR405-rPK_EjLD8saPE3SK7X0Cfw";
const INVALID_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6InRlc3Qta2V5In0.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p3QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const INVALID_HEADER_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJraWQiOiJ0ZXN0LWtleSJ9.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const NO_KID_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const UNKNOWN_KID_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6ImVnZ3MifQ.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
const UNSUPPORTED_ALG_TOKEN: &str = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiIsImtpZCI6InRlc3Qta2V5In0.eyJzdWIiOiJ1c2VyIiwiYXVkIjoieHNuaXBwZXQtYXBpLXRlc3RzLWF1ZCIsImlzcyI6InhzbmlwcGV0LWFwaS10ZXN0cy1pc3MiLCJleHAiOjQ3NzAzNzU1NDQsInBlcm1pc3Npb25zIjpbXX0.doA6EeVLnp-MLNRTRUzg03rw9oUn5vDGv59zNysrcFfvkEiiYAtZMu-YW_N3YtE0qv2FTaGAXHryMqsEk8rsFv4uepDuOpzutnRoB4JDFTpvJkKYE4HZjsd8eHSAjFEuCvDjm7wnxoW0zDXH_zj1FITht-c3ua6KbgeevvDjpUgaR52Zou9HRyNa6ns5OKO7yJofA32IZaO7QH69iQiZ4o9WA8PfFNyuVqyQVkvZwpr68JLgl4qTTX4NIWV4wU4OWbIGN6-p4QSkS_Ljkau9sRKjnx4NYPbICMGWVThn_MKOfg26DjGZlI_0HFYDBLogJkTmmyT-5IIIWUqBgUKWYA";
fn with_jwks<F>(test: F)
where
F: FnOnce(&Path),
{
let jwks = Jwks {
keys: vec![Key {
alg: Algorithm::RS256,
kty: String::from("RSA"),
r#use: String::from("sig"),
n: String::from(N),
e: String::from(E),
kid: String::from(KID),
x5t: String::default(),
x5c: Vec::default(),
}],
};
let mut file = tempfile::NamedTempFile::new().unwrap();
{
let mut writer = BufWriter::new(&mut file);
serde_json::to_writer(&mut writer, &jwks).unwrap();
}
test(file.path());
}
#[test]
fn jwks_from_uri() {
with_jwks(|path| {
let jwks = Jwks::from_uri(&(String::from("file://") + path.to_str().unwrap())).unwrap();
let expected_key = Key {
alg: Algorithm::RS256,
kty: String::from("RSA"),
r#use: String::from("sig"),
n: String::from(N),
e: String::from(E),
kid: String::from(KID),
x5t: String::default(),
x5c: Vec::default(),
};
assert_eq!(jwks.find(KID), Some(&expected_key));
assert_eq!(jwks.find("spam"), None);
});
}
#[test]
fn jwks_from_uri_empty() {
let mut file = tempfile::NamedTempFile::new().unwrap();
{
let mut writer = BufWriter::new(&mut file);
serde_json::to_writer(&mut writer, &Jwks { keys: Vec::new() }).unwrap();
}
match Jwks::from_uri(&(String::from("file://") + file.path().to_str().unwrap())) {
Err(Error::Configuration(msg)) => assert!(msg.contains("Jwks is empty")),
_ => panic!("unexpected result"),
};
}
#[test]
fn jwks_from_uri_invalid_json() {
let mut file = tempfile::NamedTempFile::new().unwrap();
{
let mut writer = BufWriter::new(&mut file);
writer.write_all(b"[]").unwrap();
}
match Jwks::from_uri(&(String::from("file://") + file.path().to_str().unwrap())) {
Err(Error::Configuration(msg)) => {
assert!(msg.contains("Can't parse Jwks state as JSON"))
}
_ => panic!("unexpected result"),
};
}
#[test]
fn jwks_from_uri_invalid_uri() {
match Jwks::from_uri("ftp://foo/bar") {
Err(Error::Configuration(msg)) => assert!(msg.contains("URI is invalid")),
_ => panic!("unexpected result"),
};
match Jwks::from_uri("http:/foo/bar") {
Err(Error::Configuration(msg)) => assert!(msg.contains("URI is invalid")),
_ => panic!("unexpected result"),
};
}
#[test]
fn jwks_from_uri_failed_load() {
match Jwks::from_uri(&String::from("file://spam")) {
Err(Error::Configuration(msg)) => assert!(msg.contains("Can't load Jwks state")),
_ => panic!("unexpected result"),
};
}
#[test]
fn validate() {
with_jwks(|path| {
let validator = JwtValidator::new(
AUDIENCE.to_string(),
ISSUER.to_string(),
&(String::from("file://") + path.to_str().unwrap()),
)
.unwrap();
assert_eq!(
validator.validate(USER_TOKEN).unwrap(),
User::Authenticated {
name: String::from("user"),
permissions: Vec::new()
}
);
match validator.validate(EXPIRED_TOKEN) {
Err(Error::Validation(msg)) => assert!(msg.contains("ExpiredSignature")),
_ => panic!("unexpected result"),
};
match validator.validate(INVALID_TOKEN) {
Err(Error::Validation(msg)) => assert!(msg.contains("InvalidSignature")),
_ => panic!("unexpected result"),
};
match validator.validate(INVALID_HEADER_TOKEN) {
Err(Error::Input(msg)) => assert!(msg.contains("missing field `alg`")),
_ => panic!("unexpected result"),
};
match validator.validate(NO_KID_TOKEN) {
Err(Error::Input(msg)) => {
assert!(msg.contains("Token does not specify the key id"))
}
_ => panic!("unexpected result"),
};
match validator.validate(UNKNOWN_KID_TOKEN) {
Err(Error::Configuration(msg)) => {
assert!(msg.contains("Signing key \"eggs\" can't be found"))
}
_ => panic!("unexpected result"),
};
match validator.validate(UNSUPPORTED_ALG_TOKEN) {
Err(Error::Input(msg)) => {
assert!(msg.contains("Unsupported algorithm: HS256"))
}
_ => panic!("unexpected result"),
};
});
}
#[test]
fn validate_invalid_params() {
with_jwks(|path| {
let validator = JwtValidator::new(
"spam".to_string(),
ISSUER.to_string(),
&(String::from("file://") + path.to_str().unwrap()),
)
.unwrap();
match validator.validate(USER_TOKEN) {
Err(Error::Validation(msg)) => assert!(msg.contains("InvalidAudience")),
_ => panic!("unexpected result"),
};
let validator = JwtValidator::new(
AUDIENCE.to_string(),
"eggs".to_string(),
&(String::from("file://") + path.to_str().unwrap()),
)
.unwrap();
match validator.validate(USER_TOKEN) {
Err(Error::Validation(msg)) => | {
let header =
jsonwebtoken::decode_header(token).map_err(|err| Error::Input(err.to_string()))?;
let key = match header.alg {
Algorithm::RS256 | Algorithm::RS384 | Algorithm::RS512 => {
let key_id = header
.kid
.as_ref()
.ok_or_else(|| Error::Input("Token does not specify the key id".to_string()))?;
let key = self
.jwks
.find(key_id)
.filter(|key| key.alg == header.alg && key.r#use == "sig")
.ok_or_else(|| {
Error::Configuration(format!("Signing key {:?} can't be found", key_id))
})?;
jsonwebtoken::DecodingKey::from_rsa_components(&key.n, &key.e) | identifier_body |
patterns.rs | //! This module contains the data pertaining to Noise handshake patterns. For more information on
//! these patterns, consult the
//! [Noise specification](https://noiseprotocol.org/noise.html#handshake-patterns).
use failure::Fail;
use std::str::FromStr;
use HandshakePattern::*;
/// Role in the handshake process.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Role {
/// Initiates the handshake.
Initiator,
/// Responds to the handshake.
Responder,
}
/// The tokens which describe message patterns.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Token {
E,
S,
ES,
SE,
SS,
EE,
Psk,
}
pub type MessagePattern = &'static [Token];
/// Handshake protocol specification.
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub struct HandshakeTokens {
pub(crate) name: &'static str,
pub(crate) initiator: MessagePattern,
pub(crate) responder: MessagePattern,
pub(crate) handshake: &'static [MessagePattern],
}
/// Pattern error.
#[derive(Debug, Fail)]
pub enum PatternError {
#[fail(display = "Unsupported handshake type")]
UnsupportedHandshakeType,
#[fail(display = "Unsupported modifier")]
UnsupportedModifier,
#[fail(display = "Invalid psk")]
InvalidPsk,
}
/// The basic handshake patterns.
#[allow(missing_docs)]
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum HandshakePattern {
// 7.4 One-way handshake patterns
N,
K,
X,
// 7.5. Interactive handshake patterns (fundamental)
NN,
NK,
NX,
KN,
KK,
KX,
XN,
XK,
XX,
IN,
IK,
IX,
// 7.6. Interactive handshake patterns (deferred)
// TODO
}
impl HandshakePattern {
/// If the protocol is one-way only.
pub fn is_oneway(&self) -> bool {
match self {
N | X | K => true,
_ => false,
}
}
/// Whether this pattern requires a long-term static key.
pub fn needs_local_static_key(&self, role: Role) -> bool {
match role {
Role::Initiator => match self {
N | NN | NK | NX => false,
_ => true,
},
Role::Responder => match self {
NN | XN | KN | IN => false,
_ => true,
},
}
}
/// Whether this pattern demands a remote public key pre-message.
pub fn needs_known_remote_pubkey(&self, role: Role) -> bool {
match role {
Role::Initiator => match self {
N | K | X | NK | XK | KK | IK => true,
_ => false,
},
Role::Responder => match self {
K | KN | KK | KX => true,
_ => false,
},
}
}
/// Returns the tokens of a handshake.
pub fn tokens(&self) -> HandshakeTokens {
match self {
N => tokens::N,
K => tokens::K,
X => tokens::X,
NN => tokens::NN,
NK => tokens::NK,
NX => tokens::NX,
XN => tokens::XN,
XK => tokens::XK,
XX => tokens::XX,
KN => tokens::KN,
KK => tokens::KK,
KX => tokens::KX,
IN => tokens::IN,
IK => tokens::IK,
IX => tokens::IX,
}
}
}
impl FromStr for HandshakePattern {
type Err = PatternError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"N" => Ok(N),
"K" => Ok(K),
"X" => Ok(X),
"NN" => Ok(NN),
"NK" => Ok(NK),
"NX" => Ok(NX),
"XN" => Ok(XN),
"XK" => Ok(XK),
"XX" => Ok(XX),
"KN" => Ok(KN),
"KK" => Ok(KK),
"KX" => Ok(KX),
"IN" => Ok(IN),
"IK" => Ok(IK),
"IX" => Ok(IX),
_ => Err(PatternError::UnsupportedHandshakeType),
}
}
}
/// A modifier applied to the base pattern as defined in the Noise spec.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
enum HandshakeModifier {
/// Insert a PSK to mix at the associated position.
Psk(u8),
/// Modify the base pattern to its "fallback" form.
Fallback,
}
impl FromStr for HandshakeModifier {
type Err = PatternError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s.starts_with("psk") {
let n: u8 = s[3..].parse().map_err(|_| PatternError::InvalidPsk)?;
Ok(Self::Psk(n))
} else if s == "fallback" {
Ok(Self::Fallback)
} else {
Err(PatternError::UnsupportedModifier)
}
}
}
#[derive(Clone, Debug, PartialEq, Eq)]
struct HandshakeModifierList(Vec<HandshakeModifier>);
impl FromStr for HandshakeModifierList {
type Err = PatternError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s.is_empty() {
Ok(Self(vec![]))
} else {
let modifier_names = s.split('+');
let mut modifiers = vec![];
for modifier_name in modifier_names {
modifiers.push(modifier_name.parse()?);
}
Ok(Self(modifiers))
}
}
}
/// The pattern/modifier combination choice (no primitives specified) for a
/// full noise protocol definition.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Handshake {
name: String,
pattern: HandshakePattern,
modifiers: HandshakeModifierList,
}
impl Handshake {
/// Returns the name of the handshake.
pub fn name(&self) -> &str {
&self.name
}
/// Parse and split a base HandshakePattern from its optional modifiers.
fn parse_pattern_and_modifier(s: &str) -> Result<(HandshakePattern, &str), PatternError> {
for i in (1..=4).rev() {
if s.len() > i - 1 && s.is_char_boundary(i) {
if let Ok(p) = s[..i].parse() {
return Ok((p, &s[i..]));
}
}
}
Err(PatternError::UnsupportedHandshakeType)
}
/// Returns the base pattern of the handshake.
pub fn pattern(&self) -> &HandshakePattern {
&self.pattern
}
/// Returns the number of psks used in the handshake.
pub fn number_of_psks(&self) -> usize {
self.modifiers
.0
.iter()
.filter(|modifier| {
if let HandshakeModifier::Psk(_) = modifier {
return true;
}
false
})
.count()
}
/// Whether the pattern has a fallback modifier.
#[allow(unused)]
pub fn is_fallback(&self) -> bool {
self.modifiers
.0
.iter()
.any(|modifier| *modifier == HandshakeModifier::Fallback)
}
/// Returns the tokens of a handshake pattern.
pub fn tokens(&self) -> (&'static [Token], &'static [Token], Vec<Vec<Token>>) {
let base = self.pattern.tokens();
let mut handshake: Vec<Vec<Token>> = base.handshake.iter().map(|p| p.to_vec()).collect();
for modifier in self.modifiers.0.iter() {
if let HandshakeModifier::Psk(n) = modifier {
if *n == 0 {
handshake[0].insert(0, Token::Psk);
} else {
handshake[*n as usize - 1].push(Token::Psk);
}
}
}
(base.initiator, base.responder, handshake)
}
}
impl FromStr for Handshake {
type Err = PatternError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let (pattern, remainder) = Self::parse_pattern_and_modifier(s)?;
let modifiers = remainder.parse()?;
let name = s.to_string();
Ok(Self {
name,
pattern,
modifiers,
})
}
}
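// Hypothetical usage sketch (test added for illustration; exercises only the
// API defined above): a psk0 modifier parses together with the base pattern
// and puts the Psk token at the front of the first handshake message.
#[cfg(test)]
mod pattern_tests {
use super::*;
#[test]
fn psk0_prepends_to_first_message() {
let hs: Handshake = "XXpsk0".parse().unwrap();
assert_eq!(hs.name(), "XXpsk0");
assert_eq!(hs.number_of_psks(), 1);
let (_, _, msgs) = hs.tokens();
assert_eq!(msgs[0][0], Token::Psk);
}
}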
macro_rules! pattern {
($name:ident {
$initiator:expr,
$responder:expr,
...
$($handshake:expr,)*
}) => {
pattern!($name, stringify!($name), $initiator, $responder, $($handshake,)*);
};
($const_name:ident, $name:expr, $initiator:expr, $responder:expr, $($handshake:expr,)*) => {
pub const $const_name: HandshakeTokens = HandshakeTokens {
name: $name,
initiator: &$initiator,
responder: &$responder,
handshake: &[$(&$handshake,)*],
};
};
}
mod tokens {
use super::{HandshakeTokens, Token::*};
// 7.2 - One-way Patterns
pattern!(N {
[],
[S],
...
[E, ES],
});
pattern!(K {
[S],
[S],
...
[E, ES, SS],
});
pattern!(X {
[],
[S],
...
[E, ES, S, SS],
});
// 7.3 - Interactive patterns (fundamental)
pattern!(NN {
[],
[],
...
[E],
[E, EE],
});
pattern!(NK {
[],
[S],
...
[E, ES],
[E, EE],
});
pattern!(NX {
[],
[],
...
[E],
[E, EE, S, ES],
});
pattern!(KN {
[S],
[],
...
[E],
[E, EE, SE],
});
pattern!(KK {
[S],
[S],
...
[E, ES, SS],
[E, EE, SE],
});
pattern!(KX {
[S],
[],
...
[E],
[E, EE, SE, S, ES],
});
pattern!(XN {
[],
[],
...
[E],
[E, EE],
[S, SE],
});
pattern!(XK {
[],
[S],
...
[E, ES],
[E, EE],
[S, SE],
});
pattern!(XX {
[],
[],
...
[E],
[E, EE, S, ES],
[S, SE],
});
pattern!(IN {
[],
[],
...
[E, S],
[E, EE, SE],
});
pattern!(IK {
[],
[S],
...
[E, ES, S, SS],
[E, EE, SE],
});
pattern!(IX {
[],
[],
...
[E, S],
[E, EE, SE, S, ES],
});
} | }); | random_line_split |
patterns.rs | //! This module contains the data pertaining to Noise handshake patterns. For more information on
//! these patterns, consult the
//! [Noise specification](https://noiseprotocol.org/noise.html#handshake-patterns).
use failure::Fail;
use std::str::FromStr;
use HandshakePattern::*;
/// Role in the handshake process.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Role {
/// Initiates the handshake.
Initiator,
/// Responds to the handshake.
Responder,
}
/// The tokens which describe message patterns.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Token {
E,
S,
ES,
SE,
SS,
EE,
Psk,
}
pub type MessagePattern = &'static [Token];
/// Handshake protocol specification.
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub struct HandshakeTokens {
pub(crate) name: &'static str,
pub(crate) initiator: MessagePattern,
pub(crate) responder: MessagePattern,
pub(crate) handshake: &'static [MessagePattern],
}
/// Pattern error.
#[derive(Debug, Fail)]
pub enum PatternError {
#[fail(display = "Unsupported handshake type")]
UnsupportedHandshakeType,
#[fail(display = "Unsupported modifier")]
UnsupportedModifier,
#[fail(display = "Invalid psk")]
InvalidPsk,
}
/// The basic handshake patterns.
#[allow(missing_docs)]
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum HandshakePattern {
// 7.4 One-way handshake patterns
N,
K,
X,
// 7.5. Interactive handshake patterns (fundamental)
NN,
NK,
NX,
KN,
KK,
KX,
XN,
XK,
XX,
IN,
IK,
IX,
// 7.6. Interactive handshake patterns (deferred)
// TODO
}
impl HandshakePattern {
/// If the protocol is one-way only.
pub fn is_oneway(&self) -> bool {
match self {
N | X | K => true,
_ => false,
}
}
/// Whether this pattern requires a long-term static key.
pub fn needs_local_static_key(&self, role: Role) -> bool {
match role {
Role::Initiator => match self {
N | NN | NK | NX => false,
_ => true,
},
Role::Responder => match self {
NN | XN | KN | IN => false,
_ => true,
},
}
}
/// Whether this pattern demands a remote public key pre-message.
pub fn needs_known_remote_pubkey(&self, role: Role) -> bool {
match role {
Role::Initiator => match self {
N | K | X | NK | XK | KK | IK => true,
_ => false,
},
Role::Responder => match self {
K | KN | KK | KX => true,
_ => false,
},
}
}
/// Returns the tokens of a handshake.
pub fn tokens(&self) -> HandshakeTokens {
match self {
N => tokens::N,
K => tokens::K,
X => tokens::X,
NN => tokens::NN,
NK => tokens::NK,
NX => tokens::NX,
XN => tokens::XN,
XK => tokens::XK,
XX => tokens::XX,
KN => tokens::KN,
KK => tokens::KK,
KX => tokens::KX,
IN => tokens::IN,
IK => tokens::IK,
IX => tokens::IX,
}
}
}
impl FromStr for HandshakePattern {
type Err = PatternError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"N" => Ok(N),
"K" => Ok(K),
"X" => Ok(X),
"NN" => Ok(NN),
"NK" => Ok(NK),
"NX" => Ok(NX),
"XN" => Ok(XN),
"XK" => Ok(XK),
"XX" => Ok(XX),
"KN" => Ok(KN),
"KK" => Ok(KK),
"KX" => Ok(KX),
"IN" => Ok(IN),
"IK" => Ok(IK),
"IX" => Ok(IX),
_ => Err(PatternError::UnsupportedHandshakeType),
}
}
}
/// A modifier applied to the base pattern as defined in the Noise spec.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
enum HandshakeModifier {
/// Insert a PSK to mix at the associated position.
Psk(u8),
/// Modify the base pattern to its "fallback" form.
Fallback,
}
impl FromStr for HandshakeModifier {
type Err = PatternError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s.starts_with("psk") {
let n: u8 = s[3..].parse().map_err(|_| PatternError::InvalidPsk)?;
Ok(Self::Psk(n))
} else if s == "fallback" {
Ok(Self::Fallback)
} else {
Err(PatternError::UnsupportedModifier)
}
}
}
#[derive(Clone, Debug, PartialEq, Eq)]
struct HandshakeModifierList(Vec<HandshakeModifier>);
impl FromStr for HandshakeModifierList {
type Err = PatternError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s.is_empty() {
Ok(Self(vec![]))
} else {
let modifier_names = s.split('+');
let mut modifiers = vec![];
for modifier_name in modifier_names {
modifiers.push(modifier_name.parse()?);
}
Ok(Self(modifiers))
}
}
}
/// The pattern/modifier combination choice (no primitives specified) for a
/// full noise protocol definition.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Handshake {
name: String,
pattern: HandshakePattern,
modifiers: HandshakeModifierList,
}
impl Handshake {
/// Returns the name of the handshake.
pub fn name(&self) -> &str {
&self.name
}
/// Parse and split a base HandshakePattern from its optional modifiers.
fn parse_pattern_and_modifier(s: &str) -> Result<(HandshakePattern, &str), PatternError> {
for i in (1..=4).rev() {
if s.len() > i - 1 && s.is_char_boundary(i) {
if let Ok(p) = s[..i].parse() {
return Ok((p, &s[i..]));
}
}
}
Err(PatternError::UnsupportedHandshakeType)
}
/// Returns the base pattern of the handshake.
pub fn pattern(&self) -> &HandshakePattern {
&self.pattern
}
/// Returns the number of psks used in the handshake.
pub fn number_of_psks(&self) -> usize {
self.modifiers
.0
.iter()
.filter(|modifier| {
if let HandshakeModifier::Psk(_) = modifier {
return true;
}
false
})
.count()
}
/// Whether the pattern has a fallback modifier.
#[allow(unused)]
pub fn is_fallback(&self) -> bool {
self.modifiers
.0
.iter()
.any(|modifier| *modifier == HandshakeModifier::Fallback)
}
/// Returns the tokens of a handshake pattern.
pub fn tokens(&self) -> (&'static [Token], &'static [Token], Vec<Vec<Token>>) {
let base = self.pattern.tokens();
let mut handshake: Vec<Vec<Token>> = base.handshake.iter().map(|p| p.to_vec()).collect();
for modifier in self.modifiers.0.iter() {
if let HandshakeModifier::Psk(n) = modifier {
if *n == 0 {
handshake[0].insert(0, Token::Psk);
} else {
handshake[*n as usize - 1].push(Token::Psk);
}
}
}
(base.initiator, base.responder, handshake)
}
}
impl FromStr for Handshake {
type Err = PatternError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let (pattern, remainder) = Self::parse_pattern_and_modifier(s)?;
let modifiers = remainder.parse()?;
let name = s.to_string();
Ok(Self {
name,
pattern,
modifiers,
})
}
}
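// Hypothetical sketch (test added for illustration; exercises only the API
// above): NN carries no static keys, so neither role needs one, while XX
// requires one from both sides; one-way patterns are flagged by is_oneway.
#[cfg(test)]
mod key_requirement_tests {
use super::{HandshakePattern::*, Role};
#[test]
fn static_key_requirements() {
assert!(!NN.needs_local_static_key(Role::Initiator));
assert!(!NN.needs_local_static_key(Role::Responder));
assert!(XX.needs_local_static_key(Role::Initiator));
assert!(N.is_oneway());
}
}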
macro_rules! pattern {
($name:ident {
$initiator:expr,
$responder:expr,
...
$($handshake:expr,)*
}) => {
pattern!($name, stringify!($name), $initiator, $responder, $($handshake,)*);
};
($const_name:ident, $name:expr, $initiator:expr, $responder:expr, $($handshake:expr,)*) => {
pub const $const_name: HandshakeTokens = HandshakeTokens {
name: $name,
initiator: &$initiator,
responder: &$responder,
handshake: &[$(&$handshake,)*],
};
};
}
mod tokens {
use super::{HandshakeTokens, Token::*};
// 7.2 - One-way Patterns
pattern!(N {
[],
[S],
...
[E, ES],
});
pattern!(K {
[S],
[S],
...
[E, ES, SS],
});
pattern!(X {
[],
[S],
...
[E, ES, S, SS],
});
// 7.3 - Interactive patterns (fundamental)
pattern!(NN {
[],
[],
...
[E],
[E, EE],
});
pattern!(NK {
[],
[S],
...
[E, ES],
[E, EE],
});
pattern!(NX {
[],
[],
...
[E],
[E, EE, S, ES],
});
pattern!(KN {
[S],
[],
...
[E],
[E, EE, SE],
});
pattern!(KK {
[S],
[S],
...
[E, ES, SS],
[E, EE, SE],
});
pattern!(KX {
[S],
[],
...
[E],
[E, EE, SE, S, ES],
});
pattern!(XN {
[],
[],
...
[E],
[E, EE],
[S, SE],
});
pattern!(XK {
[],
[S],
...
[E, ES],
[E, EE],
[S, SE],
});
pattern!(XX {
[],
[],
...
[E],
[E, EE, S, ES],
[S, SE],
});
pattern!(IN {
[],
[],
...
[E, S],
[E, EE, SE],
});
pattern!(IK {
[],
[S],
...
[E, ES, S, SS],
[E, EE, SE],
});
pattern!(IX {
[],
[],
...
[E, S],
[E, EE, SE, S, ES],
});
}
| from_str | identifier_name |
patterns.rs | //! This module contains the data pertaining to Noise handshake patterns. For more information on
//! these patterns, consult the
//! [Noise specification](https://noiseprotocol.org/noise.html#handshake-patterns).
use failure::Fail;
use std::str::FromStr;
use HandshakePattern::*;
/// Role in the handshake process.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Role {
/// Initiates the handshake.
Initiator,
/// Responds to the handshake.
Responder,
}
/// The tokens which describe message patterns.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Token {
E,
S,
ES,
SE,
SS,
EE,
Psk,
}
pub type MessagePattern = &'static [Token];
/// Handshake protocol specification.
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub struct HandshakeTokens {
pub(crate) name: &'static str,
pub(crate) initiator: MessagePattern,
pub(crate) responder: MessagePattern,
pub(crate) handshake: &'static [MessagePattern],
}
/// Pattern error.
#[derive(Debug, Fail)]
pub enum PatternError {
#[fail(display = "Unsupported handshake type")]
UnsupportedHandshakeType,
#[fail(display = "Unsupported modifier")]
UnsupportedModifier,
#[fail(display = "Invalid psk")]
InvalidPsk,
}
/// The basic handshake patterns.
#[allow(missing_docs)]
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum HandshakePattern {
// 7.4 One-way handshake patterns
N,
K,
X,
// 7.5. Interactive handshake patterns (fundamental)
NN,
NK,
NX,
KN,
KK,
KX,
XN,
XK,
XX,
IN,
IK,
IX,
// 7.6. Interactive handshake patterns (deferred)
// TODO
}
impl HandshakePattern {
/// If the protocol is one-way only.
pub fn is_oneway(&self) -> bool {
match self {
N | X | K => true,
_ => false,
}
}
/// Whether this pattern requires a long-term static key.
pub fn needs_local_static_key(&self, role: Role) -> bool {
match role {
Role::Initiator => match self {
N | NN | NK | NX => false,
_ => true,
},
Role::Responder => match self {
NN | XN | KN | IN => false,
_ => true,
},
}
}
/// Whether this pattern demands a remote public key pre-message.
pub fn needs_known_remote_pubkey(&self, role: Role) -> bool {
match role {
Role::Initiator => match self {
N | K | X | NK | XK | KK | IK => true,
_ => false,
},
Role::Responder => match self {
K | KN | KK | KX => true,
_ => false,
},
}
}
/// Returns the tokens of a handshake.
pub fn tokens(&self) -> HandshakeTokens {
match self {
N => tokens::N,
K => tokens::K,
X => tokens::X,
NN => tokens::NN,
NK => tokens::NK,
NX => tokens::NX,
XN => tokens::XN,
XK => tokens::XK,
XX => tokens::XX,
KN => tokens::KN,
KK => tokens::KK,
KX => tokens::KX,
IN => tokens::IN,
IK => tokens::IK,
IX => tokens::IX,
}
}
}
impl FromStr for HandshakePattern {
type Err = PatternError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"N" => Ok(N),
"K" => Ok(K),
"X" => Ok(X),
"NN" => Ok(NN),
"NK" => Ok(NK),
"NX" => Ok(NX),
"XN" => Ok(XN),
"XK" => Ok(XK),
"XX" => Ok(XX),
"KN" => Ok(KN),
"KK" => Ok(KK),
"KX" => Ok(KX),
"IN" => Ok(IN),
"IK" => Ok(IK),
"IX" => Ok(IX),
_ => Err(PatternError::UnsupportedHandshakeType),
}
}
}
/// A modifier applied to the base pattern as defined in the Noise spec.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
enum HandshakeModifier {
/// Insert a PSK to mix at the associated position.
Psk(u8),
/// Modify the base pattern to its "fallback" form.
Fallback,
}
impl FromStr for HandshakeModifier {
type Err = PatternError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s.starts_with("psk") {
let n: u8 = s[3..].parse().map_err(|_| PatternError::InvalidPsk)?;
Ok(Self::Psk(n))
} else if s == "fallback" {
Ok(Self::Fallback)
} else {
Err(PatternError::UnsupportedModifier)
}
}
}
#[derive(Clone, Debug, PartialEq, Eq)]
struct HandshakeModifierList(Vec<HandshakeModifier>);
impl FromStr for HandshakeModifierList {
type Err = PatternError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s.is_empty() {
Ok(Self(vec![]))
} else {
let modifier_names = s.split('+');
let mut modifiers = vec![];
for modifier_name in modifier_names {
modifiers.push(modifier_name.parse()?);
}
Ok(Self(modifiers))
}
}
}
/// The pattern/modifier combination choice (no primitives specified) for a
/// full noise protocol definition.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Handshake {
name: String,
pattern: HandshakePattern,
modifiers: HandshakeModifierList,
}
impl Handshake {
/// Returns the name of the handshake.
pub fn name(&self) -> &str {
&self.name
}
/// Parse and split a base HandshakePattern from its optional modifiers.
fn parse_pattern_and_modifier(s: &str) -> Result<(HandshakePattern, &str), PatternError> {
for i in (1..=4).rev() {
if s.len() > i - 1 && s.is_char_boundary(i) {
if let Ok(p) = s[..i].parse() {
return Ok((p, &s[i..]));
}
}
}
Err(PatternError::UnsupportedHandshakeType)
}
/// Returns the base pattern of the handshake.
pub fn pattern(&self) -> &HandshakePattern {
&self.pattern
}
/// Returns the number of psks used in the handshake.
pub fn number_of_psks(&self) -> usize {
self.modifiers
.0
.iter()
.filter(|modifier| {
if let HandshakeModifier::Psk(_) = modifier {
return true;
}
false
})
.count()
}
/// Whether the pattern has a fallback modifier.
#[allow(unused)]
pub fn is_fallback(&self) -> bool {
self.modifiers
.0
.iter()
.any(|modifier| *modifier == HandshakeModifier::Fallback)
}
/// Returns the tokens of a handshake pattern.
pub fn tokens(&self) -> (&'static [Token], &'static [Token], Vec<Vec<Token>>) {
let base = self.pattern.tokens();
let mut handshake: Vec<Vec<Token>> = base.handshake.iter().map(|p| p.to_vec()).collect();
for modifier in self.modifiers.0.iter() {
if let HandshakeModifier::Psk(n) = modifier {
if *n == 0 {
handshake[0].insert(0, Token::Psk);
} else {
handshake[*n as usize - 1].push(Token::Psk);
}
}
}
(base.initiator, base.responder, handshake)
}
}
impl FromStr for Handshake {
type Err = PatternError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let (pattern, remainder) = Self::parse_pattern_and_modifier(s)?;
let modifiers = remainder.parse()?;
let name = s.to_string();
Ok(Self {
name,
pattern,
modifiers,
})
}
}
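// Hypothetical sketch (test added for illustration; exercises only the API
// above): a psk2 modifier appends the Psk token to the end of the second
// handshake message, per the Noise psk placement rules.
#[cfg(test)]
mod modifier_tests {
use super::*;
#[test]
fn psk2_appends_to_second_message() {
let hs: Handshake = "IKpsk2".parse().unwrap();
assert_eq!(hs.number_of_psks(), 1);
let (_, _, msgs) = hs.tokens();
assert_eq!(*msgs[1].last().unwrap(), Token::Psk);
}
}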
macro_rules! pattern {
($name:ident {
$initiator:expr,
$responder:expr,
...
$($handshake:expr,)*
}) => {
pattern!($name, stringify!($name), $initiator, $responder, $($handshake,)*);
};
($const_name:ident, $name:expr, $initiator:expr, $responder:expr, $($handshake:expr,)*) => {
pub const $const_name: HandshakeTokens = HandshakeTokens {
name: $name,
initiator: &$initiator,
responder: &$responder,
handshake: &[$(&$handshake,)*],
};
};
}
mod tokens {
use super::{HandshakeTokens, Token::*};
// 7.2 - One-way Patterns
pattern!(N {
[],
[S],
...
[E, ES],
});
pattern!(K {
[S],
[S],
...
[E, ES, SS],
});
pattern!(X {
[],
[S],
...
[E, ES, S, SS],
});
// 7.3 - Interactive patterns (fundamental)
pattern!(NN {
[],
[],
...
[E],
[E, EE],
});
pattern!(NK {
[],
[S],
...
[E, ES],
[E, EE],
});
pattern!(NX {
[],
[],
...
[E],
[E, EE, S, ES],
});
pattern!(KN {
[S],
[],
...
[E],
[E, EE, SE],
});
pattern!(KK {
[S],
[S],
...
[E, ES, SS],
[E, EE, SE],
});
pattern!(KX {
[S],
[],
...
[E],
[E, EE, SE, S, ES],
});
pattern!(XN {
[],
[],
...
[E],
[E, EE],
[S, SE],
});
pattern!(XK {
[],
[S],
...
[E, ES],
[E, EE],
[S, SE],
});
pattern!(XX {
[],
[],
...
[E],
[E, EE, S, ES],
[S, SE],
});
pattern!(IN {
[],
[],
...
[E, S],
[E, EE, SE],
});
pattern!(IK {
[],
[S],
...
[E, ES, S, SS],
[E, EE, SE],
});
pattern!(IX {
[],
[],
...
[E, S],
[E, EE, SE, S, ES],
});
}
| {
&self.pattern
} | identifier_body |
lib.rs | //! ZStandard compression format encoder and decoder implemented in pure Rust
//! without unsafe.
// Reference: https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md#frame_header
#![doc(
html_logo_url = "https://raw.githubusercontent.com/facebook/zstd/dev/doc/images/zstd_logo86.png",
html_favicon_url = "https://raw.githubusercontent.com/facebook/zstd/dev/doc/images/zstd_logo86.png",
html_root_url = "https://docs.rs/zstandard"
)]
// FIXME
#![allow(unreachable_code)]
use std::convert::TryInto;
use std::error::Error;
use std::io::{Read, Write, Error as IoErr, ErrorKind as Kind};
use std::fmt::{Display, Formatter, Error as FmtError};
use std::cmp::Ord;
mod encoder;
mod decoder;
mod parser;
pub use decoder::Decoder;
pub use encoder::Encoder;
use parser::LeDecoder;
/*
*
*
*/
// A tree of prefix codes.
struct HuffmanTreeBuilder {
literal: u8,
// List of weight, value.
list: Vec<(u8, u8)>,
}
impl HuffmanTreeBuilder {
/// Create an empty huffman tree builder.
pub fn new() -> Self {
Self {
literal: 0,
list: Vec::new(),
}
}
/// Add a weight for the next literal.
pub fn weight(&mut self, weight: u8) {
if weight != 0 {
self.list.push((weight, self.literal));
}
self.literal += 1;
}
// FIXME https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md#representation
/*/// Finish building the Huffman tree.
pub fn finish(self) -> HuffmanTree {
// Stable sort by weight, secondary sort by natural seq. order stays.
self.list.sort_by(|a, b| a.0.cmp(&b.0));
// Initialize empty huffman tree.
let mut tree = HuffmanTree {
list: Vec::new(),
};
//
let mut old_weight = 0;
let mut counter = 0;
for (weight, value) in self.list {
number_of_bits(max_bits, weight);
}
// Return the created tree
tree
}*/
}
struct HuffmanTree {
// List of bits, bit sequence, value.
list: Vec<(u8, u8, u8)>,
}
// Get the number of bits for a weight.
fn number_of_bits(max_bits: u8, weight: u8) -> u8 {
if weight > 0 {
max_bits + 1 - weight
} else {
0
}
}
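// Illustrative check (test added; not part of the original file): with
// max_bits = 4 a weight of 1 maps to a 4-bit code and a weight of 4 to a
// 1-bit code, per the max_bits + 1 - weight rule; weight 0 means the symbol
// is absent.
#[cfg(test)]
mod weight_tests {
use super::number_of_bits;
#[test]
fn weight_to_bits() {
assert_eq!(number_of_bits(4, 1), 4);
assert_eq!(number_of_bits(4, 4), 1);
assert_eq!(number_of_bits(4, 0), 0);
}
}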
fn huffman_stream(stream: &[u8]) {
let mut stream = stream.iter();
while let Some(byte) = stream.next_back() {
todo!()
}
}
// ZStd magic number.
const MAGIC_NUMBER: u32 = 0xFD2FB528;
#[derive(PartialEq)]
enum BlockType {
RawBlock,
RleBlock,
ZstdBlock,
}
#[derive(PartialEq)]
enum LiteralType {
Raw,
Rle,
HuffmanTree,
HuffmanTreeless,
}
/// Decoder Error.
#[derive(Debug)]
enum DecError {
MagicNumber,
FrameHeaderDesc,
WindowSize,
NoBlocks,
InvalidBlockType,
}
impl Display for DecError {
fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), FmtError> {
use DecError::*;
let message = match self {
MagicNumber => "Magic number does not match",
FrameHeaderDesc => "Invalid values in the frame header descriptor.",
WindowSize => "Window size is too large or too small.",
NoBlocks => "There were no blocks in the frame.",
InvalidBlockType => "Block type is invalid (reserved value used).",
};
write!(f, "{}", message)
}
}
impl Error for DecError {
}
impl From<DecError> for IoErr {
fn from(dec_error: DecError) -> IoErr {
IoErr::new(Kind::InvalidInput, dec_error)
}
}
#[derive(Default)]
struct Frame {
data: Vec<u8>,
}
impl Frame {
fn encode<W: Write>(&mut self, writer: &mut W) -> Result<(), IoErr> {
///////////////////// Magic_Number ////////////////////
let data = &self.data[..];
writer.write_all(&MAGIC_NUMBER.to_le_bytes())?;
///////////////////// Frame_Header ////////////////////
// Encode frame header descriptor.
let frame_head_desc = 0b0110_0000;
// 16 bit Frame Content Size
// Single segment
// No Checksum
// No Dictionary
writer.write_all(&[frame_head_desc])?;
///////////////////// Data_Block(s) ////////////////////
todo!();
///////////////////// Content_Checksum ////////////////////
todo!();
self.data.clear();
Ok(())
}
fn decode<R: Read>(&mut self, reader: &mut R) -> Result<(), IoErr> {
let mut dec = LeDecoder::new(reader);
///////////////////// Magic_Number ////////////////////
if dec.u32()? != MAGIC_NUMBER {
Err(DecError::MagicNumber)?
}
///////////////////// Frame_Header ////////////////////
// Decode the frame header descriptor.
let frame_head_desc = dec.u8()?;
let frame_content_size_flag = frame_head_desc & 0b1100_0000;
let single_segment_flag = frame_head_desc & 0b0010_0000;
let unused_reserved_bits = frame_head_desc & 0b0001_1000;
let content_checksum_flag = frame_head_desc & 0b0000_0100;
let dictionary_id_flag = frame_head_desc & 0b0000_0011;
// Interpret frame header descriptor.
let fcs_field_size = match frame_content_size_flag {
0b0000_0000 => single_segment_flag >> 5,
0b0100_0000 => 2,
0b1000_0000 => 4,
0b1100_0000 => 8,
_ => unreachable!(),
};
if unused_reserved_bits != 0 {
Err(DecError::FrameHeaderDesc)?
}
let content_checksum = content_checksum_flag != 0;
// Check for window descriptor if it exists.
let window_size: Option<u64> = if single_segment_flag == 0 {
let window_descriptor: u64 = dec.u8()?.into();
let exponent = (window_descriptor & 0b1111_1000) >> 3;
let mantissa = window_descriptor & 0b0000_0111;
let window_log = 10 + exponent;
let window_base = 1 << window_log;
let window_add = (window_base / 8) * mantissa;
Some(window_base + window_add)
} else {
None
};
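// Worked example (comment added for clarity): a window descriptor byte of
// 0x08 gives exponent = 1 and mantissa = 0, so window_log = 11 and the
// window is 1 << 11 = 2048 bytes; each further mantissa step would add
// 2048 / 8 = 256 bytes.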
// Check dictionary ID field.
let dictionary_id: Option<u32> = match dictionary_id_flag {
0 => None,
1 => {
let did = dec.u8()?.into();
Some(did)
},
2 => {
let did = dec.u16()?.into();
Some(did)
},
let did = dec.u32()?;
Some(did)
},
_ => unreachable!(),
};
// Check frame content size.
let window_size: u64 = if let Some(window_size) = window_size {
window_size
} else {
let window_size: u64 = match fcs_field_size {
1 => dec.u8()?.into(),
2 => dec.u16()?.into(),
4 => dec.u32()?.into(),
8 => dec.u64()?,
_ => unreachable!(),
};
window_size
};
// Support From 1KB to 8MB
if window_size > 8_000_000 || window_size < 1_000 {
Err(DecError::WindowSize)?
}
// Resize buffer (to be overwritten)
self.data.resize(window_size.try_into().unwrap(), 0);
///////////////////// Data_Block(s) ////////////////////
// FIXME:
let block_header = dec.u24()?;
let mut last_block = (block_header & 1) != 0;
let mut block_type = match block_header & 0b0110 {
0b000 => BlockType::RawBlock,
0b010 => BlockType::RleBlock,
0b100 => BlockType::ZstdBlock,
_ => Err(DecError::InvalidBlockType)?,
};
if last_block {
Err(DecError::NoBlocks)?
}
let mut block_size = ((block_header >> 3) as usize).min(128_000);
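// Worked example (comment added for clarity): a 24-bit block header of
// 0x000801 has its low bit set (last block), type bits 00 (raw block), and
// a declared size of 0x801 >> 3 = 256 bytes.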
let mut buf = &mut self.data[..];
loop {
// Decode this block.
match block_type {
BlockType::RawBlock => {
// No decompression necessary
dec.bytes(&mut buf[..block_size])?;
buf = &mut buf[block_size..];
}
BlockType::RleBlock => {
// Run length decompression of a single byte
let single_byte = dec.u8()?;
for i in &mut buf[..block_size] {
*i = single_byte;
}
buf = &mut buf[block_size..];
}
BlockType::ZstdBlock => {
// ZStandard decompression
//////////// Literals section //////////
// Literals Section header
let first_nibble = dec.u(4, 0)?;
let literal_type = match first_nibble & 0b0011 {
0b00 => LiteralType::Raw,
0b01 => LiteralType::Rle,
0b10 => LiteralType::HuffmanTree,
0b11 => LiteralType::HuffmanTreeless,
_ => unreachable!(),
};
use LiteralType::*;
let (regenerated_size, compressed_size, four_huffman_streams) = match literal_type {
Rle | Raw => {
// Size format uses 1 or 2 bits.
let rs = match first_nibble & 0b1100 {
// 1 Bit (Regenerated Size: u5)
0b0000 | 0b1000 => dec.u(5, 5)?,
// 2 Bit (Regenerated Size: u12)
0b0100 => dec.u(12, 4)?,
// 2 Bit (Regenerated Size: u20)
0b1100 => dec.u(20, 4)?,
_ => unreachable!(),
};
(rs, None, false)
}
HuffmanTree | HuffmanTreeless => {
// Size format always uses 2 bits.
let (rs, cs, fh) = match first_nibble & 0b1100 {
// 3 Byte Header
// Single Stream: Regenerated Size: u10
0b0000 => (dec.u(10, 4)?, dec.u(10, 2)?, false),
// 4 Streams: Regenerated Size: u10
0b0100 => (dec.u(10, 4)?, dec.u(10, 2)?, true),
// 4 Byte Header
// 4 Streams: Regenerated Size: u14
0b1000 => (dec.u(14, 4)?, dec.u(14, 6)?, true),
// 5 Byte Header
// 4 Streams: Regenerated Size: u18
0b1100 => (dec.u(18, 4)?, dec.u(18, 2)?, true),
_ => unreachable!(),
};
(rs, Some(cs), fh)
}
};
// Huffman tree description (compressed size include this).
let tree = if literal_type == LiteralType::HuffmanTree {
// FIXME: Use HuffmanTreeBuilder (Next thing to do).
let mut tree = Vec::new();
let header_byte = dec.u8()?;
if header_byte >= 128 { // List of 4 bit weights
// 0 up to 128 are possible literal values.
let num_weights = header_byte - 127;
tree.reserve_exact(num_weights.into());
for _ in 0..num_weights >> 1 {
let byte = dec.u8()?;
tree.push(byte >> 4);
tree.push(byte & 0xf);
}
if num_weights & 1 != 0 {
let byte = dec.u8()?;
tree.push(byte >> 4);
}
} else { // List of FSE-compressed weights
let compressed_size = header_byte;
tree.reserve_exact(255);
// decompress using finite state entropy bitstream
// FSE HEADER
let first_byte = dec.u8()?;
let accuracy_log = (first_byte & 0xF) + 5;
todo!()
}
Some(tree)
} else {
None
};
// Jump Table: compressed sizes of first three streams
// (compressed size include this as well).
let jump_table = if four_huffman_streams {
Some([dec.u16()?, dec.u16()?, dec.u16()?])
} else {
None
};
// Decode Streams
if let Some(compressed_size) = compressed_size {
let mut streams = vec![0; compressed_size as usize];
dec.bytes(&mut streams)?;
if let Some(jump_table) = jump_table {
huffman_stream(&streams[..jump_table[0].into()]);
huffman_stream(&streams[..jump_table[1].into()]);
huffman_stream(&streams[..jump_table[2].into()]);
huffman_stream(&streams[..]);
} else {
huffman_stream(&streams[..]);
}
}
//////////// Sequences section //////////
todo!();
//////////// **Sequence Execution** ////////////
// Now that we've calculated the literals and sequences, we
// can turn it into data.
todo!();
}
}
// Check if there are more blocks
if last_block {
break;
}
let block_header = dec.u24()?;
last_block = (block_header & 1) != 0;
block_type = match block_header & 0b0110 {
0b000 => BlockType::RawBlock,
0b010 => BlockType::RleBlock,
0b100 => BlockType::ZstdBlock,
_ => Err(DecError::InvalidBlockType)?,
};
block_size = ((block_header >> 3) as usize).min(128_000);
}
///////////////////// Content_Checksum ////////////////////
// FIXME:
Ok(())
}
}
#[cfg(test)]
mod tests {
#[test]
fn it_works() {
assert_eq!(2 + 2, 4);
}
} | Some(did)
}, | random_line_split |
lib.rs | //! ZStandard compression format encoder and decoder implemented in pure Rust
//! without unsafe.
// Reference: https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md#frame_header
#![doc(
html_logo_url = "https://raw.githubusercontent.com/facebook/zstd/dev/doc/images/zstd_logo86.png",
html_favicon_url = "https://raw.githubusercontent.com/facebook/zstd/dev/doc/images/zstd_logo86.png",
html_root_url = "https://docs.rs/zstandard"
)]
// FIXME
#![allow(unreachable_code)]
use std::convert::TryInto;
use std::error::Error;
use std::io::{Read, Write, Error as IoErr, ErrorKind as Kind};
use std::fmt::{Display, Formatter, Error as FmtError};
use std::cmp::Ord;
mod encoder;
mod decoder;
mod parser;
pub use decoder::Decoder;
pub use encoder::Encoder;
use parser::LeDecoder;
/*
*
*
*/
// A tree of prefix codes.
struct HuffmanTreeBuilder {
literal: u8,
// List of weight, value.
list: Vec<(u8, u8)>,
}
impl HuffmanTreeBuilder {
/// Create an empty huffman tree builder.
pub fn new() -> Self {
Self {
literal: 0,
list: Vec::new(),
}
}
/// Add a weight for the next literal.
pub fn weight(&mut self, weight: u8) {
if weight != 0 {
self.list.push((weight, self.literal));
}
self.literal += 1;
}
// FIXME https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md#representation
/*/// Finish building the Huffman tree.
pub fn finish(self) -> HuffmanTree {
// Stable sort by weight, secondary sort by natural seq. order stays.
self.list.sort_by(|a, b| a.0.cmp(&b.0));
// Initialize empty huffman tree.
let mut tree = HuffmanTree {
list: Vec::new(),
};
//
let mut old_weight = 0;
let mut counter = 0;
for (weight, value) in self.list {
number_of_bits(max_bits, weight);
}
// Return the created tree
tree
}*/
}
struct HuffmanTree {
// List of bits, bit sequence, value.
list: Vec<(u8, u8, u8)>,
}
// Get the number of bits for a weight.
fn number_of_bits(max_bits: u8, weight: u8) -> u8 {
if weight > 0 {
max_bits + 1 - weight
} else {
0
}
}
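// Illustrative check (test added; not part of the original file): with
// max_bits = 6 the most probable symbol (weight 6) gets a 1-bit code and
// weight 1 a 6-bit code, per the max_bits + 1 - weight rule; weight 0 marks
// an absent symbol.
#[cfg(test)]
mod weight_bits_tests {
use super::number_of_bits;
#[test]
fn weight_to_bit_lengths() {
assert_eq!(number_of_bits(6, 6), 1);
assert_eq!(number_of_bits(6, 1), 6);
assert_eq!(number_of_bits(6, 0), 0);
}
}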
fn huffman_stream(stream: &[u8]) {
let mut stream = stream.iter();
while let Some(byte) = stream.next_back() {
todo!()
}
}
// ZStd magic number.
const MAGIC_NUMBER: u32 = 0xFD2FB528;
#[derive(PartialEq)]
enum BlockType {
RawBlock,
RleBlock,
ZstdBlock,
}
#[derive(PartialEq)]
enum LiteralType {
Raw,
Rle,
HuffmanTree,
HuffmanTreeless,
}
/// Decoder Error.
#[derive(Debug)]
enum DecError {
MagicNumber,
FrameHeaderDesc,
WindowSize,
NoBlocks,
InvalidBlockType,
}
impl Display for DecError {
fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), FmtError> {
use DecError::*;
let message = match self {
MagicNumber => "Magic number does not match",
FrameHeaderDesc => "Invalid values in the frame header descriptor.",
WindowSize => "Window size is too large or too small.",
NoBlocks => "There were no blocks in the frame.",
InvalidBlockType => "Block type is invalid (reserved value used).",
};
write!(f, "{}", message)
}
}
impl Error for DecError {
}
impl From<DecError> for IoErr {
fn from(dec_error: DecError) -> IoErr {
IoErr::new(Kind::InvalidInput, dec_error)
}
}
#[derive(Default)]
struct Frame {
data: Vec<u8>,
}
impl Frame {
fn encode<W: Write>(&mut self, writer: &mut W) -> Result<(), IoErr> {
///////////////////// Magic_Number ////////////////////
let data = &self.data[..];
writer.write_all(&MAGIC_NUMBER.to_le_bytes())?;
///////////////////// Frame_Header ////////////////////
// Encode frame header descriptor.
let frame_head_desc = 0b0110_0000;
// 16 bit Frame Content Size
// Single segment
// No Checksum
// No Dictionary
writer.write_all(&[frame_head_desc])?;
///////////////////// Data_Block(s) ////////////////////
todo!();
///////////////////// Content_Checksum ////////////////////
todo!();
self.data.clear();
Ok(())
}
fn decode<R: Read>(&mut self, reader: &mut R) -> Result<(), IoErr> {
let mut dec = LeDecoder::new(reader);
///////////////////// Magic_Number ////////////////////
if dec.u32()? != MAGIC_NUMBER {
Err(DecError::MagicNumber)?
}
///////////////////// Frame_Header ////////////////////
// Decode the frame header descriptor.
let frame_head_desc = dec.u8()?;
let frame_content_size_flag = frame_head_desc & 0b1100_0000;
let single_segment_flag = frame_head_desc & 0b0010_0000;
let unused_reserved_bits = frame_head_desc & 0b0001_1000;
let content_checksum_flag = frame_head_desc & 0b0000_0100;
let dictionary_id_flag = frame_head_desc & 0b0000_0011;
// Interpret frame header descriptor.
let fcs_field_size = match frame_content_size_flag {
0b0000_0000 => single_segment_flag >> 5,
0b0100_0000 => 2,
0b1000_0000 => 4,
0b1100_0000 => 8,
_ => unreachable!(),
};
if unused_reserved_bits != 0 {
Err(DecError::FrameHeaderDesc)?
}
let content_checksum = content_checksum_flag != 0;
// Check for window descriptor if it exists.
let window_size: Option<u64> = if single_segment_flag == 0 {
let window_descriptor: u64 = dec.u8()?.into();
let exponent = (window_descriptor & 0b1111_1000) >> 3;
let mantissa = window_descriptor & 0b0000_0111;
let window_log = 10 + exponent;
let window_base = 1 << window_log;
let window_add = (window_base / 8) * mantissa;
Some(window_base + window_add)
} else {
None
};
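// Worked example (comment added for clarity): a descriptor byte of 0x00
// gives exponent = 0 and mantissa = 0, so window_log = 10 and the window is
// 1 << 10 = 1024 bytes; each mantissa step would add 1024 / 8 = 128 bytes.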
// Check dictionary ID field.
let dictionary_id: Option<u32> = match dictionary_id_flag {
0 => None,
1 => {
let did = dec.u8()?.into();
Some(did)
},
2 => {
let did = dec.u16()?.into();
Some(did)
},
3 => {
let did = dec.u32()?;
Some(did)
},
_ => unreachable!(),
};
// Check frame content size.
let window_size: u64 = if let Some(window_size) = window_size {
window_size
} else {
let window_size: u64 = match fcs_field_size {
1 => dec.u8()?.into(),
2 => dec.u16()?.into(),
4 => dec.u32()?.into(),
8 => dec.u64()?,
_ => unreachable!(),
};
window_size
};
// Support From 1KB to 8MB
if window_size > 8_000_000 || window_size < 1_000 {
Err(DecError::WindowSize)?
}
// Resize buffer (to be overwritten)
self.data.resize(window_size.try_into().unwrap(), 0);
///////////////////// Data_Block(s) ////////////////////
// FIXME:
let block_header = dec.u24()?;
let mut last_block = (block_header & 1) != 0;
let mut block_type = match block_header & 0b0110 {
0b000 => BlockType::RawBlock,
0b010 => BlockType::RleBlock,
0b100 => BlockType::ZstdBlock,
_ => Err(DecError::InvalidBlockType)?,
};
if last_block {
Err(DecError::NoBlocks)?
}
let mut block_size = ((block_header >> 3) as usize).min(128_000);
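// Worked example (comment added for clarity): a block header of 0x000011
// has the last-block bit set, type bits 00 (raw block), and a size of
// 0x11 >> 3 = 2 bytes.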
let mut buf = &mut self.data[..];
loop {
// Decode this block.
match block_type {
BlockType::RawBlock => {
// No decompression necessary
dec.bytes(&mut buf[..block_size])?;
buf = &mut buf[block_size..];
}
BlockType::RleBlock => {
// Run length decompression of a single byte
let single_byte = dec.u8()?;
for i in &mut buf[..block_size] {
*i = single_byte;
}
buf = &mut buf[block_size..];
}
BlockType::ZstdBlock => {
// ZStandard decompression
//////////// Literals section //////////
// Literals Section header
let first_nibble = dec.u(4, 0)?;
let literal_type = match first_nibble & 0b0011 {
0b00 => LiteralType::Raw,
0b01 => LiteralType::Rle,
0b10 => LiteralType::HuffmanTree,
0b11 => LiteralType::HuffmanTreeless,
_ => unreachable!(),
};
use LiteralType::*;
let (regenerated_size, compressed_size, four_huffman_streams) = match literal_type {
Rle | Raw => {
// Size format uses 1 or 2 bits.
let rs = match first_nibble & 0b1100 {
// 1 Bit (Regenerated Size: u5)
0b0000 | 0b1000 => dec.u(5, 5)?,
// 2 Bit (Regenerated Size: u12)
0b0100 => dec.u(12, 4)?,
// 2 Bit (Regenerated Size: u20)
0b1100 => dec.u(20, 4)?,
_ => unreachable!(),
};
(rs, None, false)
}
HuffmanTree | HuffmanTreeless => {
// Size format always uses 2 bits.
let (rs, cs, fh) = match first_nibble & 0b1100 {
// 3 Byte Header
// Single Stream: Regenerated Size: u10
0b0000 => (dec.u(10, 4)?, dec.u(10, 2)?, false),
// 4 Streams: Regenerated Size: u10
0b0100 => (dec.u(10, 4)?, dec.u(10, 2)?, true),
// 4 Byte Header
// 4 Streams: Regenerated Size: u14
0b1000 => (dec.u(14, 4)?, dec.u(14, 6)?, true),
// 5 Byte Header
// 4 Streams: Regenerated Size: u18
0b1100 => (dec.u(18, 4)?, dec.u(18, 2)?, true),
_ => unreachable!(),
};
(rs, Some(cs), fh)
}
};
// Huffman tree description (compressed size include this).
let tree = if literal_type == LiteralType::HuffmanTree {
// FIXME: Use HuffmanTreeBuilder (Next thing to do).
let mut tree = Vec::new();
let header_byte = dec.u8()?;
if header_byte >= 128 { // List of 4 bit weights
// 0 up to 128 are possible literal values.
let num_weights = header_byte - 127;
tree.reserve_exact(num_weights.into());
for _ in 0..num_weights >> 1 {
let byte = dec.u8()?;
tree.push(byte >> 4);
tree.push(byte & 0xf);
}
if num_weights & 1 != 0 {
let byte = dec.u8()?;
tree.push(byte >> 4);
}
} else { // List of FSE-compressed weights
let compressed_size = header_byte;
tree.reserve_exact(255);
// decompress using finite state entropy bitstream
// FSE HEADER
let first_byte = dec.u8()?;
let accuracy_log = (first_byte & 0xF) + 5;
todo!()
}
Some(tree)
} else {
None
};
// Jump Table: compressed sizes of first three streams
// (compressed size include this as well).
let jump_table = if four_huffman_streams {
Some([dec.u16()?, dec.u16()?, dec.u16()?])
} else {
None
};
// Decode Streams
if let Some(compressed_size) = compressed_size {
let mut streams = vec![0; compressed_size as usize];
dec.bytes(&mut streams)?;
if let Some(jump_table) = jump_table {
huffman_stream(&streams[..jump_table[0].into()]);
huffman_stream(&streams[..jump_table[1].into()]);
huffman_stream(&streams[..jump_table[2].into()]);
huffman_stream(&streams[..]);
} else {
huffman_stream(&streams[..]);
}
}
//////////// Sequences section //////////
todo!();
//////////// **Sequence Execution** ////////////
// Now that we've calculated the literals and sequences, we
// can turn it into data.
todo!();
}
}
// Check if there are more blocks
if last_block {
break;
}
let block_header = dec.u24()?;
last_block = (block_header & 1) != 0;
block_type = match block_header & 0b0110 {
0b000 => BlockType::RawBlock,
0b010 => BlockType::RleBlock,
0b100 => BlockType::ZstdBlock,
_ => Err(DecError::InvalidBlockType)?,
};
block_size = ((block_header >> 3) as usize).min(128_000);
}
///////////////////// Content_Checksum ////////////////////
// FIXME:
Ok(())
}
}
#[cfg(test)]
mod tests {
#[test]
fn it_works() {
assert_eq!(2 + 2, 4);
}
}
| new | identifier_name |
vr.rs | use nalgebra::{self as na, Similarity3, Transform3, Matrix4, Vector3, Point3, Vector2, Point2, Isometry3, Quaternion, Translation3, Unit};
use webvr::*;
use draw::EyeParams;
use fnv::FnvHashMap;
use gfx::{Rect};
use ::NativeRepr;
const VEL_SMOOTHING: f64 = 1e-90;
/// Provides access to VR hardware.
pub struct VrContext {
vrsm: VRServiceManager,
disp: VRDisplayPtr,
/// Set the eyes' near clipping plane
pub near: f64,
/// Set the eyes' far clipping plane
pub far: f64,
layer: VRLayer,
exit: bool,
paused: bool,
}
fn size_from_data(data: &VRDisplayData) -> (u32, u32) {
let w = data.left_eye_parameters.render_width + data.right_eye_parameters.render_width;
let h = data.left_eye_parameters.render_height.max(data.right_eye_parameters.render_height);
(w, h)
}
impl VrContext {
pub fn init(mut vrsm: VRServiceManager) -> Option<VrContext> {
let display = match vrsm.get_displays().get(0) {
Some(d) => d.clone(),
None => {
error!("No VR display present");
return None
},
};
info!("VR Device: {}", display.borrow().data().display_name);
Some(VrContext {
vrsm: vrsm,
disp: display,
near: 0.1,
far: 100.0,
layer: Default::default(),
exit: false,
paused: false,
})
}
/// Connect to default hardware devices.
pub fn new() -> Option<VrContext> {
let mut vrsm = VRServiceManager::new();
vrsm.register_defaults();
VrContext::init(vrsm)
}
/// Connect to a mock HMD.
pub fn mock() -> Option<VrContext> {
let mut vrsm = VRServiceManager::new();
vrsm.register_mock();
VrContext::init(vrsm)
}
/// Set the OpenGL texture id to display on the HMD.
pub fn set_texture(&mut self, texture_id: u32) {
info!("Attaching texture {} to HMD", texture_id);
self.layer.texture_id = texture_id;
}
/// Start drawing to the HMD.
pub fn start(&mut self) {
info!("Starting HMD presentation");
self.disp.borrow_mut().start_present(Some(VRFramebufferAttributes {
multiview: false,
depth: false,
multisampling: false,
}));
}
/// Stop drawing to the HMD.
pub fn stop(&mut self) {
info!("Stopping HMD presentation");
self.disp.borrow_mut().stop_present();
}
/// Retrieve the HMD device from the hardware API.
pub fn retrieve_size(&mut self) -> (u32, u32) {
size_from_data(&self.disp.borrow().data())
}
/// Synchronize with the hardware, returning transient details about the VR
/// system at the specific moment in time. This data can be used directly or
/// to update state variables.
pub fn sync(&mut self) -> VrMoment {
{
let mut disp = self.disp.borrow_mut();
disp.sync_poses();
}
let mut new_controllers = Vec::new();
for event in self.vrsm.poll_events() {
match event {
VREvent::Display(VRDisplayEvent::Pause(_)) => self.paused = true,
VREvent::Display(VRDisplayEvent::Resume(_)) => self.paused = false,
VREvent::Display(VRDisplayEvent::Exit(_)) => self.exit = true,
VREvent::Gamepad(VRGamepadEvent::Connect(_, state)) =>
new_controllers.push(ControllerRef::Indexed(state.gamepad_id)),
_ => (),
}
}
let mut moment = VrMoment {
cont: FnvHashMap::default(),
hmd: None,
primary: None,
secondary: None,
tertiary: None,
layer: self.layer.clone(),
stage: na::one(),
inverse_stage: na::one(),
exit: self.exit,
paused: self.paused,
new_controllers: new_controllers,
timestamp: 0.,
};
{
let disp = self.disp.borrow();
let data = disp.data();
let state = disp.synced_frame_data(self.near, self.far);
let (w, h) = size_from_data(&data);
moment.timestamp = state.timestamp / 1000.;
moment.inverse_stage = data.stage_parameters
.map(|stage| Matrix4::upgrade(stage.sitting_to_standing_transform))
.and_then(|stage| na::try_convert(stage))
.unwrap_or(Similarity3::identity());
moment.stage = moment.inverse_stage.inverse();
let left_view = Transform3::upgrade(state.left_view_matrix);
let right_view = Transform3::upgrade(state.right_view_matrix);
let left_projection = Transform3::upgrade(state.left_projection_matrix);
let right_projection = Transform3::upgrade(state.right_projection_matrix);
if let (Some(pose), true) = (pose_transform(&state.pose, &moment.inverse_stage), data.connected) {
moment.hmd = Some(HmdMoment {
name: data.display_name.clone(),
size: (w, h),
pose: pose,
left: EyeParams {
eye: moment.inverse_stage * left_view.try_inverse().unwrap() * Point3::origin(),
view: left_view * moment.stage,
proj: left_projection,
clip_offset: -0.5,
clip: Rect {
x: 0,
y: 0,
w: data.left_eye_parameters.render_width as u16,
h: h as u16,
},
},
right: EyeParams {
eye: moment.inverse_stage * right_view.try_inverse().unwrap() * Point3::origin(),
view: right_view * moment.stage,
proj: right_projection,
clip_offset: 0.5,
clip: Rect {
x: data.left_eye_parameters.render_width as u16,
y: 0,
w: data.right_eye_parameters.render_width as u16,
h: h as u16,
},
},
});
}
}
let gamepads = self.vrsm.get_gamepads();
{
let mut gpiter = gamepads.iter().filter_map(|gp| {
let gp = gp.borrow();
if gp.state().connected { Some(gp.id()) } else { None }
});
moment.primary = gpiter.next();
moment.secondary = gpiter.next();
moment.tertiary = gpiter.next();
}
for gp in gamepads {
let gp = gp.borrow();
let data = gp.data();
let state = gp.state();
if let (Some(pose), true) = (pose_transform(&state.pose, &moment.inverse_stage), state.connected) {
moment.cont.insert(state.gamepad_id, ControllerMoment {
id: state.gamepad_id,
name: data.name.clone(),
pose: pose,
axes: state.axes.clone(),
buttons: state.buttons.clone(),
});
}
}
moment
}
}
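// Hypothetical usage sketch (added for illustration; `example_loop` is not
// part of the original API): a minimal sync/submit cycle against the mock
// service. The texture id 0 is a placeholder for a real OpenGL texture.
#[allow(dead_code)]
fn example_loop() {
if let Some(mut ctx) = VrContext::mock() {
ctx.set_texture(0);
ctx.start();
let moment = ctx.sync();
let _hmd_present = moment.hmd().is_some();
moment.submit(&mut ctx);
ctx.stop();
}
}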
/// Instantaneous information about the VR system retrieved from `VrContext::sync()`.
/// This can be used directly or to update some persistent state.
pub struct VrMoment {
cont: FnvHashMap<u32, ControllerMoment>,
hmd: Option<HmdMoment>,
primary: Option<u32>,
secondary: Option<u32>,
tertiary: Option<u32>,
layer: VRLayer,
/// The stage transform (moves the origin to the center of the room)
pub stage: Similarity3<f32>,
/// The inverse stage transform (moves the center of the room to the origin)
pub inverse_stage: Similarity3<f32>,
/// Has the VR system requested the application to exit
pub exit: bool,
/// Has the VR system requested the application to pause movement (should still sync and submit frames)
pub paused: bool,
/// References to controllers that have connected since the last sync
pub new_controllers: Vec<ControllerRef>,
/// Relative time of this moment (seconds)
pub timestamp: f64,
}
impl VrMoment {
/// Get a controller by reference if such a controller is connected.
pub fn controller(&self, role: ControllerRef) -> Option<&ControllerMoment> {
if let Some(ref i) = role.index(self) { self.cont.get(i) } else { None }
}
/// Iterate over all connected controllers.
pub fn controllers<'a>(&'a self) -> ControllerIter<'a> {
self.cont.values()
}
/// Get instantaneous information about the HMD if it is connected.
pub fn hmd(&self) -> Option<&HmdMoment> {
self.hmd.as_ref()
}
/// Submit the rendered scene. This ends the applicability
/// of this information, since it only applies to the
/// state of the VR system at the last sync.
pub fn submit(self, ctx: &mut VrContext) {
let mut d = ctx.disp.borrow_mut();
d.render_layer(&self.layer);
d.submit_frame();
}
}
/// Iterator over momentary controller information.
pub type ControllerIter<'a> = ::std::collections::hash_map::Values<'a, u32, ControllerMoment>;
/// Used to persistently identify a controller, either by internal
/// id or by role. Note that roles can refer to different physical devices
/// at different times, while the internal id will remain locked
/// to a particular device.
#[derive(Copy, Clone, Debug)]
pub enum ControllerRef {
Primary,
Secondary,
Tertiary,
Indexed(u32),
}
impl ControllerRef {
/// Get the internal id of the controller at a particular moment.
fn index(&self, moment: &VrMoment) -> Option<u32> {
use self::ControllerRef::*;
match *self {
Primary => moment.primary,
Secondary => moment.secondary,
Tertiary => moment.tertiary,
Indexed(i) => Some(i),
}
}
/// Make this reference specific to a device (internal id)
/// rather than dynamically updating (role).
pub fn fixed(&self, moment: &VrMoment) -> ControllerRef {
match self.index(moment) {
Some(i) => ControllerRef::Indexed(i),
None => *self,
}
}
}
/// Create a reference to the primary controller.
pub fn primary() -> ControllerRef {
ControllerRef::Primary
}
/// Create a reference to the secondary controller.
pub fn secondary() -> ControllerRef {
ControllerRef::Secondary
}
/// Create a reference to the tertiary controller.
pub fn tertiary() -> ControllerRef {
ControllerRef::Tertiary
}
/// Instantaneous information about a button.
pub type ButtonMoment = VRGamepadButton;
/// A device that provides instantaneous position and orientation information.
pub trait Trackable {
/// Get the location and orientation of the device.
fn pose(&self) -> Isometry3<f32>;
/// Get the direction of the device's x axis.
fn x_dir(&self) -> Vector3<f32> { self.pose() * Vector3::x() }
/// Get the direction of the device's y axis.
fn y_dir(&self) -> Vector3<f32> { self.pose() * Vector3::y() }
/// Get the direction of the device's z axis.
fn z_dir(&self) -> Vector3<f32> { self.pose() * Vector3::z() }
/// Get the location of the device's origin.
fn origin(&self) -> Point3<f32> { self.pose() * Point3::origin() }
/// Get the direction the device is pointing.
fn pointing(&self) -> Vector3<f32> { -self.z_dir() }
}
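// Hypothetical test sketch (added for illustration): implementing `pose`
// yields the derived direction helpers. With an identity pose the device
// points down -Z, the convention encoded in `pointing()`.
#[cfg(test)]
mod trackable_tests {
use super::*;
struct Fixed;
impl Trackable for Fixed {
fn pose(&self) -> Isometry3<f32> {
Isometry3::identity()
}
}
#[test]
fn identity_pose_points_down_negative_z() {
assert_eq!(Fixed.pointing(), -Vector3::z());
}
}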
/// Instantaneous information about the HMD. This can be used directly
/// or to update some persistent state.
#[derive(Clone)]
pub struct HmdMoment {
/// The textual name of the HMD
pub name: String,
/// The resolution of the HMD
pub size: (u32, u32),
/// The location and orientation of the HMD
pub pose: Isometry3<f32>,
/// The drawing parameters for the left eye
pub left: EyeParams,
/// The drawing parameters for the right eye
pub right: EyeParams,
}
impl Trackable for HmdMoment {
fn | (&self) -> Isometry3<f32> {
self.pose
}
}
/// Instantaneous information about a controller. This can be used directly
/// or to update some persistent state.
#[derive(Clone, Debug)]
pub struct ControllerMoment {
id: u32,
/// The textual name of the controller
pub name: String,
/// The location and orientation of the controller
pub pose: Isometry3<f32>,
/// The state of the floating point inputs on the controller
pub axes: Vec<f64>,
/// The state of the button inputs on the controller
pub buttons: Vec<ButtonMoment>,
}
impl ControllerMoment {
/// Create a reference to this particular hardware device (not to its role).
pub fn reference(&self) -> ControllerRef {
ControllerRef::Indexed(self.id)
}
}
impl Trackable for ControllerMoment {
fn pose(&self) -> Isometry3<f32> {
self.pose
}
}
fn pose_transform(ctr: &VRPose, inverse_stage: &Similarity3<f32>) -> Option<Isometry3<f32>> {
let or = Unit::new_normalize(Quaternion::upgrade(
match ctr.orientation { Some(o) => o, None => return None }));
let pos = Translation3::upgrade(
match ctr.position { Some(o) => o, None => return None });
Some((inverse_stage * Isometry3::from_parts(pos, or)).isometry)
}
/// A structure for tracking the state of a mapped controller.
#[derive(Clone, Debug)]
pub struct MappedController {
/// The controller that updates this state object
pub is: ControllerRef,
/// Time of last update
pub last_timestamp: Option<f64>,
/// Time since last update in seconds
pub dt: f64,
/// The controller connection status.
pub connected: bool,
/// The pose of the controller
pub pose: Isometry3<f32>,
/// The linear velocity of the controller
pub lin_vel: Vector3<f32>,
/// The rotational axis of the controller multiplied by the rotation velocity (rad/s)
pub ang_vel: Vector3<f32>,
/// The transformation of the controller between the second most and most recent updates
pub pose_delta: Isometry3<f32>,
/// How far is the trigger pulled
pub trigger: f64,
/// The change in the trigger between the second most and most recent updates
pub trigger_delta: f64,
/// The last touched location on the circular pad
pub pad: Point2<f64>,
/// The change in touch location on the circular pad between the second most and most recent updates
pub pad_delta: Vector2<f64>,
/// Is the circular pad touched
pub pad_touched: bool,
/// Is the menu button pressed
pub menu: bool,
/// Are the grip buttons pressed
pub grip: bool,
/// Maximum time step for velocity calculations
pub max_step: f64,
}
impl Default for MappedController {
fn default() -> Self {
MappedController {
is: primary(),
last_timestamp: None,
dt: 0.,
connected: false,
pose: na::one(),
lin_vel: na::zero(),
ang_vel: na::zero(),
pose_delta: na::one(),
trigger: 0.,
trigger_delta: 0.,
pad: Point2::origin(),
pad_delta: na::zero(),
pad_touched: false,
menu: false,
grip: false,
max_step: ::std::f64::INFINITY,
}
}
}
impl MappedController {
/// Create a simple default state that will be updated with data from the given controller.
pub fn new(reference: ControllerRef) -> MappedController {
MappedController {
is: reference,
.. Default::default()
}
}
/// Update the controller state using the provided instantaneous information.
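///
/// A possible per-frame flow (a sketch; assumes a started `VrContext` named `ctx`):
///
/// ```ignore
/// let mut cont = MappedController::new(primary());
/// loop {
///     let moment = ctx.sync();
///     // Err(()) just means the controller reported too few axes/buttons.
///     let _ = cont.update(&moment);
///     if cont.trigger > 0.5 {
///         // react to a trigger pull
///     }
///     moment.submit(&mut ctx);
/// }
/// ```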
pub fn update(&mut self, mom: &VrMoment) -> Result<(), ()> {
if let Some(cont) = mom.controller(self.is) {
if cont.axes.len() < 3 || cont.buttons.len() < 2 { return Err(()) }
self.connected = true;
self.dt = self.last_timestamp.map(|t| mom.timestamp - t).unwrap_or(0.);
self.last_timestamp = Some(mom.timestamp);
if self.dt > ::std::f64::EPSILON {
// Translation change
let lin_delta = cont.pose.translation.vector - self.pose.translation.vector;
// Rotation change
let ang_delta = (cont.pose.rotation * self.pose.rotation.inverse()).scaled_axis();
// Time change (clamped below max_step)
let dt = self.dt.min(self.max_step);
// Smoothing param
let frac = VEL_SMOOTHING.powf(dt) as f32;
let invfrac = 1. - frac;
// Calculate velocity
self.lin_vel = frac * self.lin_vel + invfrac * lin_delta / dt as f32;
self.ang_vel = frac * self.ang_vel + invfrac * ang_delta / dt as f32;
}
self.pose_delta = cont.pose * self.pose.inverse();
self.pose = cont.pose;
let (x, y) = (cont.axes[0], cont.axes[1]);
if x != 0. || y != 0. {
let pad = Point2::new(x, y);
if self.pad_touched {
self.pad_delta = pad - self.pad;
} else {
self.pad_delta = na::zero();
}
self.pad = pad;
self.pad_touched = true;
} else {
self.pad_touched = false;
self.pad_delta = na::zero();
}
self.trigger_delta = cont.axes[2] - self.trigger;
self.trigger = cont.axes[2];
self.menu = cont.buttons[0].pressed;
self.grip = cont.buttons[1].pressed;
} else {
*self = MappedController {
is: self.is,
pose: self.pose,
.. Default::default()
};
}
Ok(())
}
/// Get the radial location of the last circular pad touch.
pub fn pad_theta(&self) -> f64 {
self.pad[1].atan2(self.pad[0])
}
}
impl Trackable for MappedController {
fn pose(&self) -> Isometry3<f32> {
self.pose
}
}
| pose | identifier_name |
vr.rs | use nalgebra::{self as na, Similarity3, Transform3, Matrix4, Vector3, Point3, Vector2, Point2, Isometry3, Quaternion, Translation3, Unit};
use webvr::*;
use draw::EyeParams;
use fnv::FnvHashMap;
use gfx::{Rect};
use ::NativeRepr;
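// Base of the exponential smoothing weight for controller velocities: the
// previous velocity estimate is weighted by VEL_SMOOTHING.powf(dt), so a
// larger time step between updates discounts the old estimate more strongly.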
const VEL_SMOOTHING: f64 = 1e-90;
/// Provides access to VR hardware.
pub struct VrContext {
vrsm: VRServiceManager,
disp: VRDisplayPtr,
/// Set the eyes' near clipping plane
pub near: f64,
/// Set the eyes' far clipping plane
pub far: f64,
layer: VRLayer,
exit: bool,
paused: bool,
}
fn size_from_data(data: &VRDisplayData) -> (u32, u32) {
let w = data.left_eye_parameters.render_width + data.right_eye_parameters.render_width;
let h = data.left_eye_parameters.render_height.max(data.right_eye_parameters.render_height);
(w, h)
}
impl VrContext {
pub fn init(mut vrsm: VRServiceManager) -> Option<VrContext> {
let display = match vrsm.get_displays().get(0) {
Some(d) => d.clone(),
None => {
error!("No VR display present");
return None
},
};
info!("VR Device: {}", display.borrow().data().display_name);
Some(VrContext {
vrsm: vrsm,
disp: display,
near: 0.1,
far: 100.0,
layer: Default::default(),
exit: false,
paused: false,
})
}
/// Connect to default hardware devices.
pub fn new() -> Option<VrContext> {
let mut vrsm = VRServiceManager::new();
vrsm.register_defaults();
VrContext::init(vrsm)
}
/// Connect to a mock HMD.
pub fn mock() -> Option<VrContext> {
let mut vrsm = VRServiceManager::new();
vrsm.register_mock();
VrContext::init(vrsm)
}
/// Set the OpenGL texture id to display on the HMD.
pub fn set_texture(&mut self, texture_id: u32) {
info!("Attaching texture {} to HMD", texture_id);
self.layer.texture_id = texture_id;
}
/// Start drawing to the HMD.
pub fn start(&mut self) {
info!("Starting HMD presentation");
self.disp.borrow_mut().start_present(Some(VRFramebufferAttributes {
multiview: false,
depth: false,
multisampling: false,
}));
}
/// Stop drawing to the HMD.
pub fn stop(&mut self) {
info!("Stopping HMD presentation");
self.disp.borrow_mut().stop_present();
}
/// Retrieve the HMD device from the hardware API.
pub fn retrieve_size(&mut self) -> (u32, u32) {
size_from_data(&self.disp.borrow().data())
}
/// Synchronize with the hardware, returning transient details about the VR
/// system at the specific moment in time. This data can be used directly or
/// to update state variables.
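///
/// A minimal frame-loop sketch (assuming construction succeeded and a render
/// texture has already been attached with `set_texture`):
///
/// ```ignore
/// let mut ctx = VrContext::new().expect("no VR hardware");
/// ctx.start();
/// loop {
///     let moment = ctx.sync();
///     if moment.exit { break; }
///     // ... render the scene here ...
///     moment.submit(&mut ctx);
/// }
/// ```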
pub fn sync(&mut self) -> VrMoment {
{
let mut disp = self.disp.borrow_mut();
disp.sync_poses();
}
let mut new_controllers = Vec::new();
for event in self.vrsm.poll_events() {
match event {
VREvent::Display(VRDisplayEvent::Pause(_)) => self.paused = true,
VREvent::Display(VRDisplayEvent::Resume(_)) => self.paused = false,
VREvent::Display(VRDisplayEvent::Exit(_)) => self.exit = true,
VREvent::Gamepad(VRGamepadEvent::Connect(_, state)) =>
new_controllers.push(ControllerRef::Indexed(state.gamepad_id)),
_ => (),
}
}
let mut moment = VrMoment {
cont: FnvHashMap::default(),
hmd: None,
primary: None,
secondary: None,
tertiary: None,
layer: self.layer.clone(),
stage: na::one(),
inverse_stage: na::one(),
exit: self.exit,
paused: self.paused,
new_controllers: new_controllers,
timestamp: 0.,
};
{
let disp = self.disp.borrow();
let data = disp.data();
let state = disp.synced_frame_data(self.near, self.far);
let (w, h) = size_from_data(&data);
moment.timestamp = state.timestamp / 1000.;
moment.inverse_stage = data.stage_parameters
.map(|stage| Matrix4::upgrade(stage.sitting_to_standing_transform))
.and_then(|stage| na::try_convert(stage))
.unwrap_or(Similarity3::identity());
moment.stage = moment.inverse_stage.inverse();
let left_view = Transform3::upgrade(state.left_view_matrix);
let right_view = Transform3::upgrade(state.right_view_matrix);
let left_projection = Transform3::upgrade(state.left_projection_matrix);
let right_projection = Transform3::upgrade(state.right_projection_matrix);
if let (Some(pose), true) = (pose_transform(&state.pose, &moment.inverse_stage), data.connected) {
moment.hmd = Some(HmdMoment {
name: data.display_name.clone(),
size: (w, h),
pose: pose,
left: EyeParams {
eye: moment.inverse_stage * left_view.try_inverse().unwrap() * Point3::origin(),
view: left_view * moment.stage,
proj: left_projection,
clip_offset: -0.5,
clip: Rect {
x: 0,
y: 0,
w: data.left_eye_parameters.render_width as u16,
h: h as u16,
},
}, | proj: right_projection,
clip_offset: 0.5,
clip: Rect {
x: data.left_eye_parameters.render_width as u16,
y: 0,
w: data.right_eye_parameters.render_width as u16,
h: h as u16,
},
},
});
}
}
let gamepads = self.vrsm.get_gamepads();
{
let mut gpiter = gamepads.iter().filter_map(|gp| {
let gp = gp.borrow();
if gp.state().connected { Some(gp.id()) } else { None }
});
moment.primary = gpiter.next();
moment.secondary = gpiter.next();
moment.tertiary = gpiter.next();
}
for gp in gamepads {
let gp = gp.borrow();
let data = gp.data();
let state = gp.state();
if let (Some(pose), true) = (pose_transform(&state.pose, &moment.inverse_stage), state.connected) {
moment.cont.insert(state.gamepad_id, ControllerMoment {
id: state.gamepad_id,
name: data.name.clone(),
pose: pose,
axes: state.axes.clone(),
buttons: state.buttons.clone(),
});
}
}
moment
}
}
/// Instantaneous information about the VR system retrieved from `VrContext::sync()`.
/// This can be used directly or to update some persistent state.
pub struct VrMoment {
cont: FnvHashMap<u32, ControllerMoment>,
hmd: Option<HmdMoment>,
primary: Option<u32>,
secondary: Option<u32>,
tertiary: Option<u32>,
layer: VRLayer,
/// The stage transform (moves the origin to the center of the room)
pub stage: Similarity3<f32>,
/// The inverse stage transform (moves the center of the room to the origin)
pub inverse_stage: Similarity3<f32>,
/// Has the VR system requested the application to exit
pub exit: bool,
/// Has the VR system requested the application to pause movement (should still sync and submit frames)
pub paused: bool,
/// References to controllers that have connected since the last sync
pub new_controllers: Vec<ControllerRef>,
/// Relative time of this moment (seconds)
pub timestamp: f64,
}
impl VrMoment {
/// Get a controller by reference if such a controller is connected.
pub fn controller(&self, role: ControllerRef) -> Option<&ControllerMoment> {
if let Some(ref i) = role.index(self) { self.cont.get(i) } else { None }
}
/// Iterate over all connected controllers.
pub fn controllers<'a>(&'a self) -> ControllerIter<'a> {
self.cont.values()
}
/// Get instantaneous information about the HMD if it is connected.
pub fn hmd(&self) -> Option<&HmdMoment> {
self.hmd.as_ref()
}
/// Submit the rendered scene. This ends the applicability
/// of this information, since it only applies to the
/// state of the VR system at the last sync.
pub fn submit(self, ctx: &mut VrContext) {
let mut d = ctx.disp.borrow_mut();
d.render_layer(&self.layer);
d.submit_frame();
}
}
/// Iterator over momentary controller information.
pub type ControllerIter<'a> = ::std::collections::hash_map::Values<'a, u32, ControllerMoment>;
/// Used to persistently identify a controller, either by internal
/// id or by role. Note that roles can refer to different physical devices
/// at different times, while the internal id will remain locked
/// to a particular device.
#[derive(Copy, Clone, Debug)]
pub enum ControllerRef {
Primary,
Secondary,
Tertiary,
Indexed(u32),
}
impl ControllerRef {
/// Get the internal id of the controller at a particular moment.
fn index(&self, moment: &VrMoment) -> Option<u32> {
use self::ControllerRef::*;
match *self {
Primary => moment.primary,
Secondary => moment.secondary,
Tertiary => moment.tertiary,
Indexed(i) => Some(i),
}
}
/// Make this reference specific to a device (internal id)
/// rather than dynamically updating (role).
pub fn fixed(&self, moment: &VrMoment) -> ControllerRef {
match self.index(moment) {
Some(i) => ControllerRef::Indexed(i),
None => *self,
}
}
}
/// Create a reference to the primary controller.
pub fn primary() -> ControllerRef {
ControllerRef::Primary
}
/// Create a reference to the secondary controller.
pub fn secondary() -> ControllerRef {
ControllerRef::Secondary
}
/// Create a reference to the tertiary controller.
pub fn tertiary() -> ControllerRef {
ControllerRef::Tertiary
}
/// Instantaneous information about a button.
pub type ButtonMoment = VRGamepadButton;
/// A device that provides instantaneous position and orientation information.
pub trait Trackable {
/// Get the location and orientation of the device.
fn pose(&self) -> Isometry3<f32>;
/// Get the direction of the device's x axis.
fn x_dir(&self) -> Vector3<f32> { self.pose() * Vector3::x() }
/// Get the direction of the device's y axis.
fn y_dir(&self) -> Vector3<f32> { self.pose() * Vector3::y() }
/// Get the direction of the device's z axis.
fn z_dir(&self) -> Vector3<f32> { self.pose() * Vector3::z() }
/// Get the location of the device's origin.
fn origin(&self) -> Point3<f32> { self.pose() * Point3::origin() }
/// Get the direction the device is pointing.
fn pointing(&self) -> Vector3<f32> { -self.z_dir() }
}
/// Instantaneous information about the HMD. This can be used directly
/// or to update some persistent state.
#[derive(Clone)]
pub struct HmdMoment {
/// The textual name of the HMD
pub name: String,
/// The resolution of the HMD
pub size: (u32, u32),
/// The location and orientation of the HMD
pub pose: Isometry3<f32>,
/// The drawing parameters for the left eye
pub left: EyeParams,
/// The drawing parameters for the right eye
pub right: EyeParams,
}
impl Trackable for HmdMoment {
fn pose(&self) -> Isometry3<f32> {
self.pose
}
}
/// Instantaneous information about a controller. This can be used directly
/// or to update some persistent state.
#[derive(Clone, Debug)]
pub struct ControllerMoment {
id: u32,
/// The textual name of the controller
pub name: String,
/// The location and orientation of the controller
pub pose: Isometry3<f32>,
/// The state of the floating point inputs on the controller
pub axes: Vec<f64>,
/// The state of the button inputs on the controller
pub buttons: Vec<ButtonMoment>,
}
impl ControllerMoment {
/// Create a reference to this particular hardware device (not to its role).
pub fn reference(&self) -> ControllerRef {
ControllerRef::Indexed(self.id)
}
}
impl Trackable for ControllerMoment {
fn pose(&self) -> Isometry3<f32> {
self.pose
}
}
fn pose_transform(ctr: &VRPose, inverse_stage: &Similarity3<f32>) -> Option<Isometry3<f32>> {
let or = Unit::new_normalize(Quaternion::upgrade(
match ctr.orientation { Some(o) => o, None => return None }));
let pos = Translation3::upgrade(
match ctr.position { Some(o) => o, None => return None });
Some((inverse_stage * Isometry3::from_parts(pos, or)).isometry)
}
/// A structure for tracking the state of a mapped controller.
#[derive(Clone, Debug)]
pub struct MappedController {
/// The controller that updates this state object
pub is: ControllerRef,
/// Time of last update
pub last_timestamp: Option<f64>,
/// Time since last update in seconds
pub dt: f64,
/// The controller connection status.
pub connected: bool,
/// The pose of the controller
pub pose: Isometry3<f32>,
/// The linear velocity of the controller
pub lin_vel: Vector3<f32>,
/// The rotational axis of the controller multiplied by the rotation velocity (rad/s)
pub ang_vel: Vector3<f32>,
/// The transformation of the controller between the second most and most recent updates
pub pose_delta: Isometry3<f32>,
/// How far is the trigger pulled
pub trigger: f64,
/// The change in the trigger between the second most and most recent updates
pub trigger_delta: f64,
/// The last touched location on the circular pad
pub pad: Point2<f64>,
/// The change in touch location on the circular pad between the second most and most recent updates
pub pad_delta: Vector2<f64>,
/// Is the circular pad touched
pub pad_touched: bool,
/// Is the menu button pressed
pub menu: bool,
/// Are the grip buttons pressed
pub grip: bool,
/// Maximum time step for velocity calculations
pub max_step: f64,
}
impl Default for MappedController {
fn default() -> Self {
MappedController {
is: primary(),
last_timestamp: None,
dt: 0.,
connected: false,
pose: na::one(),
lin_vel: na::zero(),
ang_vel: na::zero(),
pose_delta: na::one(),
trigger: 0.,
trigger_delta: 0.,
pad: Point2::origin(),
pad_delta: na::zero(),
pad_touched: false,
menu: false,
grip: false,
max_step: ::std::f64::INFINITY,
}
}
}
impl MappedController {
/// Create a simple default state that will be updated with data from the given controller.
pub fn new(reference: ControllerRef) -> MappedController {
MappedController {
is: reference,
.. Default::default()
}
}
/// Update the controller state using the provided instantaneous information.
pub fn update(&mut self, mom: &VrMoment) -> Result<(), ()> {
if let Some(cont) = mom.controller(self.is) {
if cont.axes.len() < 3 || cont.buttons.len() < 2 { return Err(()) }
self.connected = true;
self.dt = self.last_timestamp.map(|t| mom.timestamp - t).unwrap_or(0.);
self.last_timestamp = Some(mom.timestamp);
if self.dt > ::std::f64::EPSILON {
// Translation change
let lin_delta = cont.pose.translation.vector - self.pose.translation.vector;
// Rotation change
let ang_delta = (cont.pose.rotation * self.pose.rotation.inverse()).scaled_axis();
// Time change (clamped below max_step)
let dt = self.dt.min(self.max_step);
// Smoothing param
let frac = VEL_SMOOTHING.powf(dt) as f32;
let invfrac = 1. - frac;
// Calculate velocity
self.lin_vel = frac * self.lin_vel + invfrac * lin_delta / dt as f32;
self.ang_vel = frac * self.ang_vel + invfrac * ang_delta / dt as f32;
}
self.pose_delta = cont.pose * self.pose.inverse();
self.pose = cont.pose;
let (x, y) = (cont.axes[0], cont.axes[1]);
if x != 0. || y != 0. {
let pad = Point2::new(x, y);
if self.pad_touched {
self.pad_delta = pad - self.pad;
} else {
self.pad_delta = na::zero();
}
self.pad = pad;
self.pad_touched = true;
} else {
self.pad_touched = false;
self.pad_delta = na::zero();
}
self.trigger_delta = cont.axes[2] - self.trigger;
self.trigger = cont.axes[2];
self.menu = cont.buttons[0].pressed;
self.grip = cont.buttons[1].pressed;
} else {
*self = MappedController {
is: self.is,
pose: self.pose,
.. Default::default()
};
}
Ok(())
}
/// Get the radial location of the last circular pad touch.
pub fn pad_theta(&self) -> f64 {
self.pad[1].atan2(self.pad[0])
}
}
impl Trackable for MappedController {
fn pose(&self) -> Isometry3<f32> {
self.pose
}
} | right: EyeParams {
eye: moment.inverse_stage * right_view.try_inverse().unwrap() * Point3::origin(),
view: right_view * moment.stage, | random_line_split |
vr.rs | use nalgebra::{self as na, Similarity3, Transform3, Matrix4, Vector3, Point3, Vector2, Point2, Isometry3, Quaternion, Translation3, Unit};
use webvr::*;
use draw::EyeParams;
use fnv::FnvHashMap;
use gfx::{Rect};
use ::NativeRepr;
const VEL_SMOOTHING: f64 = 1e-90;
/// Provides access to VR hardware.
pub struct VrContext {
vrsm: VRServiceManager,
disp: VRDisplayPtr,
/// Set the eyes' near clipping plane
pub near: f64,
/// Set the eyes' far clipping plane
pub far: f64,
layer: VRLayer,
exit: bool,
paused: bool,
}
fn size_from_data(data: &VRDisplayData) -> (u32, u32) {
let w = data.left_eye_parameters.render_width + data.right_eye_parameters.render_width;
let h = data.left_eye_parameters.render_height.max(data.right_eye_parameters.render_height);
(w, h)
}
impl VrContext {
pub fn init(mut vrsm: VRServiceManager) -> Option<VrContext> {
let display = match vrsm.get_displays().get(0) {
Some(d) => d.clone(),
None => {
error!("No VR display present");
return None
},
};
info!("VR Device: {}", display.borrow().data().display_name);
Some(VrContext {
vrsm: vrsm,
disp: display,
near: 0.1,
far: 100.0,
layer: Default::default(),
exit: false,
paused: false,
})
}
/// Connect to default hardware devices.
pub fn new() -> Option<VrContext> {
let mut vrsm = VRServiceManager::new();
vrsm.register_defaults();
VrContext::init(vrsm)
}
/// Connect to a mock HMD.
pub fn mock() -> Option<VrContext> {
let mut vrsm = VRServiceManager::new();
vrsm.register_mock();
VrContext::init(vrsm)
}
/// Set the OpenGL texture id to display on the HMD.
pub fn set_texture(&mut self, texture_id: u32) {
info!("Attaching texture {} to HMD", texture_id);
self.layer.texture_id = texture_id;
}
/// Start drawing to the HMD.
pub fn start(&mut self) {
info!("Starting HMD presentation");
self.disp.borrow_mut().start_present(Some(VRFramebufferAttributes {
multiview: false,
depth: false,
multisampling: false,
}));
}
/// Stop drawing to the HMD.
pub fn stop(&mut self) {
info!("Stopping HMD presentation");
self.disp.borrow_mut().stop_present();
}
/// Retrieve the HMD device from the hardware API.
pub fn retrieve_size(&mut self) -> (u32, u32) {
size_from_data(&self.disp.borrow().data())
}
/// Synchronize with the hardware, returning transient details about the VR
/// system at the specific moment in time. This data can be used directly or
/// to update state variables.
pub fn sync(&mut self) -> VrMoment {
{
let mut disp = self.disp.borrow_mut();
disp.sync_poses();
}
let mut new_controllers = Vec::new();
for event in self.vrsm.poll_events() {
match event {
VREvent::Display(VRDisplayEvent::Pause(_)) => self.paused = true,
VREvent::Display(VRDisplayEvent::Resume(_)) => self.paused = false,
VREvent::Display(VRDisplayEvent::Exit(_)) => self.exit = true,
VREvent::Gamepad(VRGamepadEvent::Connect(_, state)) =>
new_controllers.push(ControllerRef::Indexed(state.gamepad_id)),
_ => (),
}
}
let mut moment = VrMoment {
cont: FnvHashMap::default(),
hmd: None,
primary: None,
secondary: None,
tertiary: None,
layer: self.layer.clone(),
stage: na::one(),
inverse_stage: na::one(),
exit: self.exit,
paused: self.paused,
new_controllers: new_controllers,
timestamp: 0.,
};
{
let disp = self.disp.borrow();
let data = disp.data();
let state = disp.synced_frame_data(self.near, self.far);
let (w, h) = size_from_data(&data);
moment.timestamp = state.timestamp / 1000.;
moment.inverse_stage = data.stage_parameters
.map(|stage| Matrix4::upgrade(stage.sitting_to_standing_transform))
.and_then(|stage| na::try_convert(stage))
.unwrap_or(Similarity3::identity());
moment.stage = moment.inverse_stage.inverse();
let left_view = Transform3::upgrade(state.left_view_matrix);
let right_view = Transform3::upgrade(state.right_view_matrix);
let left_projection = Transform3::upgrade(state.left_projection_matrix);
let right_projection = Transform3::upgrade(state.right_projection_matrix);
if let (Some(pose), true) = (pose_transform(&state.pose, &moment.inverse_stage), data.connected) {
moment.hmd = Some(HmdMoment {
name: data.display_name.clone(),
size: (w, h),
pose: pose,
left: EyeParams {
eye: moment.inverse_stage * left_view.try_inverse().unwrap() * Point3::origin(),
view: left_view * moment.stage,
proj: left_projection,
clip_offset: -0.5,
clip: Rect {
x: 0,
y: 0,
w: data.left_eye_parameters.render_width as u16,
h: h as u16,
},
},
right: EyeParams {
eye: moment.inverse_stage * right_view.try_inverse().unwrap() * Point3::origin(),
view: right_view * moment.stage,
proj: right_projection,
clip_offset: 0.5,
clip: Rect {
x: data.left_eye_parameters.render_width as u16,
y: 0,
w: data.right_eye_parameters.render_width as u16,
h: h as u16,
},
},
});
}
}
let gamepads = self.vrsm.get_gamepads();
{
let mut gpiter = gamepads.iter().filter_map(|gp| {
let gp = gp.borrow();
if gp.state().connected { Some(gp.id()) } else { None }
});
moment.primary = gpiter.next();
moment.secondary = gpiter.next();
moment.tertiary = gpiter.next();
}
for gp in gamepads {
let gp = gp.borrow();
let data = gp.data();
let state = gp.state();
if let (Some(pose), true) = (pose_transform(&state.pose, &moment.inverse_stage), state.connected) {
moment.cont.insert(state.gamepad_id, ControllerMoment {
id: state.gamepad_id,
name: data.name.clone(),
pose: pose,
axes: state.axes.clone(),
buttons: state.buttons.clone(),
});
}
}
moment
}
}
/// Instantaneous information about the VR system retrieved from `VrContext::sync()`.
/// This can be used directly or to update some persistent state.
pub struct VrMoment {
cont: FnvHashMap<u32, ControllerMoment>,
hmd: Option<HmdMoment>,
primary: Option<u32>,
secondary: Option<u32>,
tertiary: Option<u32>,
layer: VRLayer,
/// The stage transform (moves the origin to the center of the room)
pub stage: Similarity3<f32>,
/// The inverse stage transform (moves the center of the room to the origin)
pub inverse_stage: Similarity3<f32>,
/// Has the VR system requested the application to exit
pub exit: bool,
/// Has the VR system requested the application to pause movement (should still sync and submit frames)
pub paused: bool,
/// References to controllers that have connected since the last sync
pub new_controllers: Vec<ControllerRef>,
/// Relative time of this moment (seconds)
pub timestamp: f64,
}
impl VrMoment {
/// Get a controller by reference if such a controller is connected.
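///
/// A sketch (assuming `moment` came from `VrContext::sync()`):
///
/// ```ignore
/// if let Some(cont) = moment.controller(primary()) {
///     println!("primary controller pose: {:?}", cont.pose);
/// }
/// ```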
pub fn controller(&self, role: ControllerRef) -> Option<&ControllerMoment> {
if let Some(ref i) = role.index(self) { self.cont.get(i) } else { None }
}
/// Iterate over all connected controllers.
pub fn controllers<'a>(&'a self) -> ControllerIter<'a> {
self.cont.values()
}
/// Get instantaneous information about the HMD if it is connected.
pub fn hmd(&self) -> Option<&HmdMoment> {
self.hmd.as_ref()
}
/// Submit the rendered scene. This ends the applicability
/// of this information, since it only applies to the
/// state of the VR system at the last sync.
pub fn submit(self, ctx: &mut VrContext) {
let mut d = ctx.disp.borrow_mut();
d.render_layer(&self.layer);
d.submit_frame();
}
}
/// Iterator over momentary controller information.
pub type ControllerIter<'a> = ::std::collections::hash_map::Values<'a, u32, ControllerMoment>;
/// Used to persistently identify a controller, either by internal
/// id or by role. Note that roles can refer to different physical devices
/// at different times, while the internal id will remain locked
/// to a particular device.
#[derive(Copy, Clone, Debug)]
pub enum ControllerRef {
Primary,
Secondary,
Tertiary,
Indexed(u32),
}
impl ControllerRef {
/// Get the internal id of the controller at a particular moment.
fn index(&self, moment: &VrMoment) -> Option<u32> {
use self::ControllerRef::*;
match *self {
Primary => moment.primary,
Secondary => moment.secondary,
Tertiary => moment.tertiary,
Indexed(i) => Some(i),
}
}
/// Make this reference specific to a device (internal id)
/// rather than dynamically updating (role).
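///
/// For instance, to lock onto whichever physical device is currently primary
/// (a sketch; `moment` is a previously synced `VrMoment`):
///
/// ```ignore
/// let locked = primary().fixed(&moment);
/// // `locked` keeps referring to the same device even if roles change later.
/// ```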
pub fn fixed(&self, moment: &VrMoment) -> ControllerRef {
match self.index(moment) {
Some(i) => ControllerRef::Indexed(i),
None => *self,
}
}
}
/// Create a reference to the primary controller.
pub fn primary() -> ControllerRef {
ControllerRef::Primary
}
/// Create a reference to the secondary controller.
pub fn secondary() -> ControllerRef {
ControllerRef::Secondary
}
/// Create a reference to the tertiary controller.
pub fn tertiary() -> ControllerRef {
ControllerRef::Tertiary
}
/// Instantaneous information about a button.
pub type ButtonMoment = VRGamepadButton;
/// A device that provides instantaneous position and orientation information.
pub trait Trackable {
/// Get the location and orientation of the device.
fn pose(&self) -> Isometry3<f32>;
/// Get the direction of the device's x axis.
fn x_dir(&self) -> Vector3<f32> { self.pose() * Vector3::x() }
/// Get the direction of the device's y axis.
fn y_dir(&self) -> Vector3<f32> { self.pose() * Vector3::y() }
/// Get the direction of the device's z axis.
fn z_dir(&self) -> Vector3<f32> { self.pose() * Vector3::z() }
/// Get the location of the device's origin.
fn origin(&self) -> Point3<f32> { self.pose() * Point3::origin() }
/// Get the direction the device is pointing.
fn pointing(&self) -> Vector3<f32> { -self.z_dir() }
}
/// Instantaneous information about the HMD. This can be used directly
/// or to update some persistent state.
#[derive(Clone)]
pub struct HmdMoment {
/// The textual name of the HMD
pub name: String,
/// The resolution of the HMD
pub size: (u32, u32),
/// The location and orientation of the HMD
pub pose: Isometry3<f32>,
/// The drawing parameters for the left eye
pub left: EyeParams,
/// The drawing parameters for the right eye
pub right: EyeParams,
}
impl Trackable for HmdMoment {
fn pose(&self) -> Isometry3<f32> {
self.pose
}
}
/// Instantaneous information about a controller. This can be used directly
/// or to update some persistent state.
#[derive(Clone, Debug)]
pub struct ControllerMoment {
id: u32,
/// The textual name of the controller
pub name: String,
/// The location and orientation of the controller
pub pose: Isometry3<f32>,
/// The state of the floating point inputs on the controller
pub axes: Vec<f64>,
/// The state of the button inputs on the controller
pub buttons: Vec<ButtonMoment>,
}
impl ControllerMoment {
/// Create a reference to this particular hardware device (not to its role).
pub fn reference(&self) -> ControllerRef {
ControllerRef::Indexed(self.id)
}
}
impl Trackable for ControllerMoment {
fn pose(&self) -> Isometry3<f32> |
}
fn pose_transform(ctr: &VRPose, inverse_stage: &Similarity3<f32>) -> Option<Isometry3<f32>> {
let or = Unit::new_normalize(Quaternion::upgrade(
match ctr.orientation { Some(o) => o, None => return None }));
let pos = Translation3::upgrade(
match ctr.position { Some(o) => o, None => return None });
Some((inverse_stage * Isometry3::from_parts(pos, or)).isometry)
}
/// A structure for tracking the state of a mapped controller.
#[derive(Clone, Debug)]
pub struct MappedController {
/// The controller that updates this state object
pub is: ControllerRef,
/// Time of last update
pub last_timestamp: Option<f64>,
/// Time since last update in seconds
pub dt: f64,
/// The controller connection status.
pub connected: bool,
/// The pose of the controller
pub pose: Isometry3<f32>,
/// The linear velocity of the controller
pub lin_vel: Vector3<f32>,
/// The rotational axis of the controller multiplied by the rotation velocity (rad/s)
pub ang_vel: Vector3<f32>,
/// The transformation of the controller between the second most and most recent updates
pub pose_delta: Isometry3<f32>,
/// How far is the trigger pulled
pub trigger: f64,
/// The change in the trigger between the second most and most recent updates
pub trigger_delta: f64,
/// The last touched location on the circular pad
pub pad: Point2<f64>,
/// The change in touch location on the circular pad between the second most and most recent updates
pub pad_delta: Vector2<f64>,
/// Is the circular pad touched
pub pad_touched: bool,
/// Is the menu button pressed
pub menu: bool,
/// Are the grip buttons pressed
pub grip: bool,
/// Maximum time step for velocity calculations
pub max_step: f64,
}
impl Default for MappedController {
fn default() -> Self {
MappedController {
is: primary(),
last_timestamp: None,
dt: 0.,
connected: false,
pose: na::one(),
lin_vel: na::zero(),
ang_vel: na::zero(),
pose_delta: na::one(),
trigger: 0.,
trigger_delta: 0.,
pad: Point2::origin(),
pad_delta: na::zero(),
pad_touched: false,
menu: false,
grip: false,
max_step: ::std::f64::INFINITY,
}
}
}
impl MappedController {
/// Create a simple default state that will be updated with data from the given controller.
pub fn new(reference: ControllerRef) -> MappedController {
MappedController {
is: reference,
.. Default::default()
}
}
/// Update the controller state using the provided instantaneous information.
pub fn update(&mut self, mom: &VrMoment) -> Result<(), ()> {
if let Some(cont) = mom.controller(self.is) {
if cont.axes.len() < 3 || cont.buttons.len() < 2 { return Err(()) }
self.connected = true;
self.dt = self.last_timestamp.map(|t| mom.timestamp - t).unwrap_or(0.);
self.last_timestamp = Some(mom.timestamp);
if self.dt > ::std::f64::EPSILON {
// Translation change
let lin_delta = cont.pose.translation.vector - self.pose.translation.vector;
// Rotation change
let ang_delta = (cont.pose.rotation * self.pose.rotation.inverse()).scaled_axis();
// Time change (clamped below max_step)
let dt = self.dt.min(self.max_step);
// Smoothing param
let frac = VEL_SMOOTHING.powf(dt) as f32;
let invfrac = 1. - frac;
// Calculate velocity
self.lin_vel = frac * self.lin_vel + invfrac * lin_delta / dt as f32;
self.ang_vel = frac * self.ang_vel + invfrac * ang_delta / dt as f32;
}
self.pose_delta = cont.pose * self.pose.inverse();
self.pose = cont.pose;
let (x, y) = (cont.axes[0], cont.axes[1]);
if x != 0. || y != 0. {
let pad = Point2::new(x, y);
if self.pad_touched {
self.pad_delta = pad - self.pad;
} else {
self.pad_delta = na::zero();
}
self.pad = pad;
self.pad_touched = true;
} else {
self.pad_touched = false;
self.pad_delta = na::zero();
}
self.trigger_delta = cont.axes[2] - self.trigger;
self.trigger = cont.axes[2];
self.menu = cont.buttons[0].pressed;
self.grip = cont.buttons[1].pressed;
} else {
*self = MappedController {
is: self.is,
pose: self.pose,
.. Default::default()
};
}
Ok(())
}
/// Get the radial location of the last circular pad touch.
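///
/// The angle is `atan2(y, x)` in radians, so the right edge of the pad maps
/// to `0` and the top edge to `π/2`. A sketch:
///
/// ```ignore
/// let theta = cont.pad_theta();
/// let in_top_half = cont.pad_touched && theta > 0.0;
/// ```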
pub fn pad_theta(&self) -> f64 {
self.pad[1].atan2(self.pad[0])
}
}
impl Trackable for MappedController {
fn pose(&self) -> Isometry3<f32> {
self.pose
}
}
| {
self.pose
} | identifier_body |
mod.rs | //! An iterator over incoming signals.
//!
//! This provides a higher abstraction over the signals, providing
//! the [`SignalsInfo`] structure which is able to iterate over the
//! incoming signals. The structure is parametrized by an
//! [`Exfiltrator`][self::exfiltrator::Exfiltrator], which specifies what information is returned
//! for each delivered signal. Note that some exfiltrators are behind a feature flag.
//!
//! The [`Signals`] is a type alias for the common case when it is enough to get the signal number.
//!
//! This module (and everything in it) is turned on by the `iterator` feature. It is **on** by
//! default; turning it off is mostly useful for very special purposes (compiling on
//! `<rustc-1.36`, minimizing the amount of code compiled, …). In a sense, this is the highest
//! level abstraction of the crate and the API expected to be used by most of the people.
//!
//! # Examples
//!
//! ```rust
//! extern crate libc;
//! extern crate signal_hook;
//!
//! use std::io::Error;
//!
//! use signal_hook::consts::signal::*;
//! use signal_hook::iterator::Signals;
//!
//! fn main() -> Result<(), Error> {
//! let mut signals = Signals::new(&[
//! SIGHUP,
//! SIGTERM,
//! SIGINT,
//! SIGQUIT,
//! # SIGUSR1,
//! ])?;
//! # // A trick to terminate the example when run as doc-test. Not part of the real code.
//! # signal_hook::low_level::raise(SIGUSR1).unwrap();
//! 'outer: loop {
//! // Pick up signals that arrived since last time
//! for signal in signals.pending() {
//! match signal as libc::c_int {
//! SIGHUP => {
//! // Reload configuration
//! // Reopen the log file
//! }
//! SIGTERM | SIGINT | SIGQUIT => {
//! break 'outer;
//! },
//! # SIGUSR1 => return Ok(()),
//! _ => unreachable!(),
//! }
//! }
//! // Do some bit of work ‒ something with upper limit on waiting, so we don't block
//! // forever with a SIGTERM already waiting.
//! }
//! println!("Terminating. Bye bye");
//! Ok(())
//! }
//! ```
pub mod backend;
pub mod exfiltrator;
use std::borrow::Borrow;
use std::fmt::{Debug, Formatter, Result as FmtResult};
use std::io::{Error, ErrorKind, Read};
use std::os::unix::net::UnixStream;
use libc::{self, c_int};
pub use self::backend::{Handle, Pending};
use self::backend::{PollResult, RefSignalIterator, SignalDelivery};
use self::exfiltrator::{Exfiltrator, SignalOnly};
/// The main structure of the module, representing interest in some signals.
///
/// Unlike the helpers in other modules, this registers the signals when created and unregisters
/// them on drop. It provides the pending signals during its lifetime, either in batches or as an
/// infinite iterator.
///
/// Most users will want to use it through the [`Signals`] type alias for simplicity.
///
/// # Multiple threads
///
/// Instances of this struct can be [sent][std::marker::Send] to other threads. In a multithreaded
/// application this can be used to dedicate a separate thread for signal handling. In this case
/// you should get a [`Handle`] using the [`handle`][Signals::handle] method before sending the
/// `Signals` instance to a background thread. With the handle you will be able to shut down the
/// background thread later, or to operatively add more signals.
///
/// The controller handle can be shared between as many threads as you like using its
/// [`clone`][Handle::clone] method.
///
/// # Exfiltrators
///
/// The [`SignalOnly`] provides only the signal number. There are further exfiltrators available in
/// the [`exfiltrator`] module. Note that some of them are behind feature flags that need to be
/// enabled.
///
/// # Examples
///
/// ```rust
/// # extern crate signal_hook;
/// #
/// # use std::io::Error;
/// # use std::thread;
/// use signal_hook::consts::signal::*;
/// use signal_hook::iterator::Signals;
///
/// #
/// # fn main() -> Result<(), Error> {
/// let mut signals = Signals::new(&[SIGUSR1, SIGUSR2])?;
/// let handle = signals.handle();
/// let thread = thread::spawn(move || {
/// for signal in &mut signals {
/// match signal {
/// SIGUSR1 => {},
/// SIGUSR2 => {},
/// _ => unreachable!(),
/// }
/// }
/// });
///
/// // Some time later...
/// handle.close();
/// thread.join().unwrap();
/// # Ok(())
/// # }
/// ```
pub struct SignalsInfo<E: Exfiltrator = SignalOnly>(SignalDelivery<UnixStream, E>);
impl<E: Exfiltrator> SignalsInfo<E> {
/// Creates the `Signals` structure.
///
/// This registers all the signals listed. The same restrictions (panics, errors) apply as
/// for the [`Handle::add_signal`] method.
pub fn new<I, S>(signals: I) -> Result<Self, Error>
where
I: IntoIterator<Item = S>,
S: Borrow<c_int>,
E: Default,
{
Self::with_exfiltrator(signals, E::default())
}
/// An advanced constructor with explicit [`Exfiltrator`].
pub fn with_exfiltrator<I, S>(signals: I, exfiltrator: E) -> Result<Self, Error>
where
I: IntoIterator<Item = S>,
S: Borrow<c_int>,
{
let (read, write) = UnixStream::pair()?;
Ok(SignalsInfo(SignalDelivery::with_pipe(
read,
write,
exfiltrator,
signals,
)?))
}
/// Registers another signal to the set watched by this [`Signals`] instance.
///
/// The same restrictions (panics, errors) apply as for the [`Handle::add_signal`]
/// method.
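///
/// # Examples
///
/// A small sketch of watching one more signal after construction:
///
/// ```rust
/// # use std::io::Error;
/// use signal_hook::consts::signal::*;
/// use signal_hook::iterator::Signals;
///
/// # fn main() -> Result<(), Error> {
/// let signals = Signals::new(&[SIGUSR1])?;
/// // Later, register interest in SIGUSR2 as well.
/// signals.add_signal(SIGUSR2)?;
/// # Ok(())
/// # }
/// ```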
pub fn add_signal(&self, signal: c_int) -> Result<(), Error> {
self.handle().add_signal(signal)
}
/// Returns an iterator of already received signals.
///
/// This returns an iterator over all the signal numbers of the signals received since last
/// time they were read (out of the set registered by this `Signals` instance). Note that they
/// are returned in arbitrary order and a signal instance may be returned only once even if it was
/// received multiple times.
///
/// This method returns immediately (does not block) and may produce an empty iterator if there
/// are no signals ready.
pub fn pending(&mut self) -> Pending<E> {
self.0.pending()
}
/// Block until the stream contains some bytes.
///
/// Returns true if it was possible to read a byte and false otherwise.
fn has_signals(read: &mut UnixStream) -> Result<bool, Error> {
loop {
match read.read(&mut [0u8]) {
Ok(num_read) => break Ok(num_read > 0),
// If we get an EINTR error it is fine to retry reading from the stream.
// Otherwise we should pass on the error to the caller.
Err(error) => {
if error.kind() != ErrorKind::Interrupted {
break Err(error);
}
}
}
}
}
/// Waits for some signals to be available and returns an iterator.
///
/// This is similar to [`pending`][SignalsInfo::pending]. If there are no signals available, it
/// tries to wait for some to arrive. However, due to implementation details, this still can
/// produce an empty iterator.
///
/// This can block for an arbitrarily long time. If the [`Handle::close`] method is used in
/// another thread this method will return immediately.
///
/// Note that the blocking is done in this method, not in the iterator.
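///
/// # Examples
///
/// A blocking sketch (`no_run`, since it waits for a real signal):
///
/// ```rust,no_run
/// # use std::io::Error;
/// use signal_hook::consts::signal::*;
/// use signal_hook::iterator::Signals;
///
/// # fn main() -> Result<(), Error> {
/// let mut signals = Signals::new(&[SIGTERM])?;
/// for signal in signals.wait() {
///     println!("received {}", signal);
/// }
/// # Ok(())
/// # }
/// ```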
pub fn wait(&mut self) -> Pending<E> {
match self.0.poll_pending(&mut Self::has_signals) {
Ok(Some(pending)) => pending,
// Because of the blocking has_signals method the poll_pending method
// only returns None if the instance is closed. But we want to return
// a possibly empty pending object anyway.
Ok(None) => self.pending(),
// Users can't manipulate the internal file descriptors and the way we use them
// shouldn't produce any errors. So it is OK to panic.
Err(error) => panic!("Unexpected error: {}", error),
}
}
/// Is it closed?
///
/// See [`close`][Handle::close].
pub fn is_closed(&self) -> bool {
self.handle().is_closed()
}
/// Get an infinite iterator over arriving signals.
///
/// The iterator's `next()` blocks as necessary to wait for signals to arrive. This is adequate
/// if you want to designate a thread solely to handling signals. If multiple signals come at
/// the same time (between two values produced by the iterator), they will be returned in
/// arbitrary order. Multiple instances of the same signal may be collated.
///
/// This is also the iterator returned by `IntoIterator` implementation on `&mut Signals`.
///
/// This iterator terminates only if explicitly [closed][Handle::close].
///
/// # Examples
///
/// ```rust
/// # extern crate libc;
/// # extern crate signal_hook;
/// #
/// # use std::io::Error;
/// # use std::thread;
/// #
/// use signal_hook::consts::signal::*; | /// let handle = signals.handle();
/// thread::spawn(move || {
/// for signal in signals.forever() {
/// match signal {
/// SIGUSR1 => {},
/// SIGUSR2 => {},
/// _ => unreachable!(),
/// }
/// }
/// });
/// handle.close();
/// # Ok(())
/// # }
/// ```
pub fn forever(&mut self) -> Forever<E> {
Forever(RefSignalIterator::new(&mut self.0))
}
/// Get a shareable [`Handle`] for this instance.
///
/// This can be used to add further signals or close the [`Signals`] instance.
pub fn handle(&self) -> Handle {
self.0.handle()
}
}
impl<E> Debug for SignalsInfo<E>
where
E: Debug + Exfiltrator,
E::Storage: Debug,
{
fn fmt(&self, fmt: &mut Formatter) -> FmtResult {
fmt.debug_tuple("Signals").field(&self.0).finish()
}
}
impl<'a, E: Exfiltrator> IntoIterator for &'a mut SignalsInfo<E> {
type Item = E::Output;
type IntoIter = Forever<'a, E>;
fn into_iter(self) -> Self::IntoIter {
self.forever()
}
}
/// An infinite iterator of arriving signals.
pub struct Forever<'a, E: Exfiltrator>(RefSignalIterator<'a, UnixStream, E>);
impl<'a, E: Exfiltrator> Iterator for Forever<'a, E> {
type Item = E::Output;
fn next(&mut self) -> Option<E::Output> {
loop {
match self.0.poll_signal(&mut SignalsInfo::<E>::has_signals) {
PollResult::Signal(result) => break Some(result),
PollResult::Closed => break None,
// In theory, the poll_signal should not return PollResult::Pending. Nevertheless,
// there's a race condition - if the other side closes the pipe/socket after
// checking for it being closed, then the `read` there returns 0 as EOF. That
// appears as pending here. Next time we should get Closed.
PollResult::Pending => continue,
// Users can't manipulate the internal file descriptors and the way we use them
// shouldn't produce any errors. So it is OK to panic.
PollResult::Err(error) => panic!("Unexpected error: {}", error),
}
}
}
}
/// A type alias for an iterator returning just the signal numbers.
///
/// This is the simplified version for most of the use cases. For advanced usages, the
/// [`SignalsInfo`] with explicit [`Exfiltrator`] type can be used.
pub type Signals = SignalsInfo<SignalOnly>; | /// use signal_hook::iterator::Signals;
///
/// # fn main() -> Result<(), Error> {
/// let mut signals = Signals::new(&[SIGUSR1, SIGUSR2])?; | random_line_split |
mod.rs | //! An iterator over incoming signals.
//!
//! This provides a higher abstraction over the signals, providing
//! the [`SignalsInfo`] structure which is able to iterate over the
//! incoming signals. The structure is parametrized by an
//! [`Exfiltrator`][self::exfiltrator::Exfiltrator], which specifies what information is returned
//! for each delivered signal. Note that some exfiltrators are behind a feature flag.
//!
//! The [`Signals`] is a type alias for the common case when it is enough to get the signal number.
//!
//! This module (and everything in it) is turned on by the `iterator` feature. It is **on** by
//! default; turning it off is mostly useful for very special purposes (compiling on
//! `<rustc-1.36`, minimizing the amount of code compiled, …). In a sense, this is the highest
//! level abstraction of the crate and the API expected to be used by most of the people.
//!
//! # Examples
//!
//! ```rust
//! extern crate libc;
//! extern crate signal_hook;
//!
//! use std::io::Error;
//!
//! use signal_hook::consts::signal::*;
//! use signal_hook::iterator::Signals;
//!
//! fn main() -> Result<(), Error> {
//! let mut signals = Signals::new(&[
//! SIGHUP,
//! SIGTERM,
//! SIGINT,
//! SIGQUIT,
//! # SIGUSR1,
//! ])?;
//! # // A trick to terminate the example when run as doc-test. Not part of the real code.
//! # signal_hook::low_level::raise(SIGUSR1).unwrap();
//! 'outer: loop {
//! // Pick up signals that arrived since last time
//! for signal in signals.pending() {
//! match signal as libc::c_int {
//! SIGHUP => {
//! // Reload configuration
//! // Reopen the log file
//! }
//! SIGTERM | SIGINT | SIGQUIT => {
//! break 'outer;
//! },
//! # SIGUSR1 => return Ok(()),
//! _ => unreachable!(),
//! }
//! }
//! // Do some bit of work ‒ something with upper limit on waiting, so we don't block
//! // forever with a SIGTERM already waiting.
//! }
//! println!("Terminating. Bye bye");
//! Ok(())
//! }
//! ```
pub mod backend;
pub mod exfiltrator;
use std::borrow::Borrow;
use std::fmt::{Debug, Formatter, Result as FmtResult};
use std::io::{Error, ErrorKind, Read};
use std::os::unix::net::UnixStream;
use libc::{self, c_int};
pub use self::backend::{Handle, Pending};
use self::backend::{PollResult, RefSignalIterator, SignalDelivery};
use self::exfiltrator::{Exfiltrator, SignalOnly};
/// The main structure of the module, representing interest in some signals.
///
/// Unlike the helpers in other modules, this registers the signals when created and unregisters
/// them on drop. It provides the pending signals during its lifetime, either in batches or as an
/// infinite iterator.
///
/// Most users will want to use it through the [`Signals`] type alias for simplicity.
///
/// # Multiple threads
///
/// Instances of this struct can be [sent][std::marker::Send] to other threads. In a multithreaded
/// application this can be used to dedicate a separate thread for signal handling. In this case
/// you should get a [`Handle`] using the [`handle`][Signals::handle] method before sending the
/// `Signals` instance to a background thread. With the handle you will be able to shut down the
/// background thread later, or to operatively add more signals.
///
/// The controller handle can be shared between as many threads as you like using its
/// [`clone`][Handle::clone] method.
///
/// # Exfiltrators
///
/// The [`SignalOnly`] provides only the signal number. There are further exfiltrators available in
/// the [`exfiltrator`] module. Note that some of them are behind feature flags that need to be
/// enabled.
///
/// # Examples
///
/// ```rust
/// # extern crate signal_hook;
/// #
/// # use std::io::Error;
/// # use std::thread;
/// use signal_hook::consts::signal::*;
/// use signal_hook::iterator::Signals;
///
/// #
/// # fn main() -> Result<(), Error> {
/// let mut signals = Signals::new(&[SIGUSR1, SIGUSR2])?;
/// let handle = signals.handle();
/// let thread = thread::spawn(move || {
/// for signal in &mut signals {
/// match signal {
/// SIGUSR1 => {},
/// SIGUSR2 => {},
/// _ => unreachable!(),
/// }
/// }
/// });
///
/// // Some time later...
/// handle.close();
/// thread.join().unwrap();
/// # Ok(())
/// # }
/// ```
pub struct SignalsInfo<E: Exfiltrator = SignalOnly>(SignalDelivery<UnixStream, E>);
impl<E: Exfiltrator> SignalsInfo<E> {
/// Creates the `Signals` structure.
///
/// This registers all the signals listed. The same restrictions (panics, errors) apply as
/// for the [`Handle::add_signal`] method.
pub fn new<I, S>(signals: I) -> Result<Self, Error>
where
I: IntoIterator<Item = S>,
S: Borrow<c_int>,
E: Default,
{
Self::with_exfiltrator(signals, E::default())
}
/// An advanced constructor with explicit [`Exfiltrator`].
pub fn with_exfiltrator<I, S>(signals: I, exfiltrator: E) -> Result<Self, Error>
where
I: IntoIterator<Item = S>,
S: Borrow<c_int>,
{
let (read, write) = UnixStream::pair()?;
Ok(SignalsInfo(SignalDelivery::with_pipe(
read,
write,
exfiltrator,
signals,
)?))
}
/// Registers another signal to the set watched by this [`Signals`] instance.
///
/// The same restrictions (panics, errors) apply as for the [`Handle::add_signal`]
/// method.
pub fn add_signal(&self, signal: c_int) -> Result<(), Error> {
self.handle().add_signal(signal)
}
/// Returns an iterator of already received signals.
///
/// This returns an iterator over all the signal numbers of the signals received since last
/// time they were read (out of the set registered by this `Signals` instance). Note that they
/// are returned in arbitrary order and a signal instance may be returned only once even if it was
/// received multiple times.
///
/// This method returns immediately (does not block) and may produce an empty iterator if there
/// are no signals ready.
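///
/// # Examples
///
/// A non-blocking polling sketch:
///
/// ```rust
/// # use std::io::Error;
/// use signal_hook::consts::signal::*;
/// use signal_hook::iterator::Signals;
///
/// # fn main() -> Result<(), Error> {
/// let mut signals = Signals::new(&[SIGHUP])?;
/// // Returns immediately; the iterator is empty if nothing arrived yet.
/// for signal in signals.pending() {
///     assert_eq!(signal, SIGHUP);
/// }
/// # Ok(())
/// # }
/// ```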
pub fn pending(&mut self) -> Pending<E> {
self.0.pending()
}
/// Block until the stream contains some bytes.
///
/// Returns true if it was possible to read a byte and false otherwise.
fn has_signals(read: &mut UnixStream) -> Result<bool, Error> {
| /// Waits for some signals to be available and returns an iterator.
///
/// This is similar to [`pending`][SignalsInfo::pending]. If there are no signals available, it
/// tries to wait for some to arrive. However, due to implementation details, this still can
/// produce an empty iterator.
///
/// This can block for an arbitrarily long time. If the [`Handle::close`] method is used in
/// another thread this method will return immediately.
///
/// Note that the blocking is done in this method, not in the iterator.
pub fn wait(&mut self) -> Pending<E> {
match self.0.poll_pending(&mut Self::has_signals) {
Ok(Some(pending)) => pending,
// Because of the blocking has_signals method the poll_pending method
// only returns None if the instance is closed. But we want to return
// a possibly empty pending object anyway.
Ok(None) => self.pending(),
// Users can't manipulate the internal file descriptors and the way we use them
// shouldn't produce any errors. So it is OK to panic.
Err(error) => panic!("Unexpected error: {}", error),
}
}
/// Is it closed?
///
/// See [`close`][Handle::close].
pub fn is_closed(&self) -> bool {
self.handle().is_closed()
}
/// Get an infinite iterator over arriving signals.
///
/// The iterator's `next()` blocks as necessary to wait for signals to arrive. This is adequate
/// if you want to designate a thread solely to handling signals. If multiple signals come at
/// the same time (between two values produced by the iterator), they will be returned in
/// arbitrary order. Multiple instances of the same signal may be collated.
///
/// This is also the iterator returned by `IntoIterator` implementation on `&mut Signals`.
///
/// This iterator terminates only if explicitly [closed][Handle::close].
///
/// # Examples
///
/// ```rust
/// # extern crate libc;
/// # extern crate signal_hook;
/// #
/// # use std::io::Error;
/// # use std::thread;
/// #
/// use signal_hook::consts::signal::*;
/// use signal_hook::iterator::Signals;
///
/// # fn main() -> Result<(), Error> {
/// let mut signals = Signals::new(&[SIGUSR1, SIGUSR2])?;
/// let handle = signals.handle();
/// thread::spawn(move || {
/// for signal in signals.forever() {
/// match signal {
/// SIGUSR1 => {},
/// SIGUSR2 => {},
/// _ => unreachable!(),
/// }
/// }
/// });
/// handle.close();
/// # Ok(())
/// # }
/// ```
pub fn forever(&mut self) -> Forever<E> {
Forever(RefSignalIterator::new(&mut self.0))
}
/// Get a shareable [`Handle`] for this instance.
///
/// This can be used to add further signals or close the [`Signals`] instance.
pub fn handle(&self) -> Handle {
self.0.handle()
}
}
impl<E> Debug for SignalsInfo<E>
where
E: Debug + Exfiltrator,
E::Storage: Debug,
{
fn fmt(&self, fmt: &mut Formatter) -> FmtResult {
fmt.debug_tuple("Signals").field(&self.0).finish()
}
}
impl<'a, E: Exfiltrator> IntoIterator for &'a mut SignalsInfo<E> {
type Item = E::Output;
type IntoIter = Forever<'a, E>;
fn into_iter(self) -> Self::IntoIter {
self.forever()
}
}
/// An infinite iterator of arriving signals.
pub struct Forever<'a, E: Exfiltrator>(RefSignalIterator<'a, UnixStream, E>);
impl<'a, E: Exfiltrator> Iterator for Forever<'a, E> {
type Item = E::Output;
fn next(&mut self) -> Option<E::Output> {
loop {
match self.0.poll_signal(&mut SignalsInfo::<E>::has_signals) {
PollResult::Signal(result) => break Some(result),
PollResult::Closed => break None,
// In theory, the poll_signal should not return PollResult::Pending. Nevertheless,
// there's a race condition - if the other side closes the pipe/socket after
// checking for it being closed, then the `read` there returns 0 as EOF. That
// appears as pending here. Next time we should get Closed.
PollResult::Pending => continue,
// Users can't manipulate the internal file descriptors and the way we use them
// shouldn't produce any errors. So it is OK to panic.
PollResult::Err(error) => panic!("Unexpected error: {}", error),
}
}
}
}
/// A type alias for an iterator returning just the signal numbers.
///
/// This is the simplified version for most of the use cases. For advanced usages, the
/// [`SignalsInfo`] with explicit [`Exfiltrator`] type can be used.
pub type Signals = SignalsInfo<SignalOnly>;
| loop {
match read.read(&mut [0u8]) {
Ok(num_read) => break Ok(num_read > 0),
// If we get an EINTR error it is fine to retry reading from the stream.
// Otherwise we should pass on the error to the caller.
Err(error) => {
if error.kind() != ErrorKind::Interrupted {
break Err(error);
}
}
}
}
}
| identifier_body |
mod.rs | //! An iterator over incoming signals.
//!
//! This provides a higher abstraction over the signals, providing
//! the [`SignalsInfo`] structure which is able to iterate over the
//! incoming signals. The structure is parametrized by an
//! [`Exfiltrator`][self::exfiltrator::Exfiltrator], which specifies what information is returned
//! for each delivered signal. Note that some exfiltrators are behind a feature flag.
//!
//! The [`Signals`] is a type alias for the common case when it is enough to get the signal number.
//!
//! This module (and everything in it) is turned on by the `iterator` feature. It is **on** by
//! default; turning it off is mostly useful for very special purposes (compiling on
//! `<rustc-1.36`, minimizing the amount of code compiled, …). In a sense, this is the highest
//! level abstraction of the crate and the API expected to be used by most of the people.
//!
//! # Examples
//!
//! ```rust
//! extern crate libc;
//! extern crate signal_hook;
//!
//! use std::io::Error;
//!
//! use signal_hook::consts::signal::*;
//! use signal_hook::iterator::Signals;
//!
//! fn main() -> Result<(), Error> {
//! let mut signals = Signals::new(&[
//! SIGHUP,
//! SIGTERM,
//! SIGINT,
//! SIGQUIT,
//! # SIGUSR1,
//! ])?;
//! # // A trick to terminate the example when run as doc-test. Not part of the real code.
//! # signal_hook::low_level::raise(SIGUSR1).unwrap();
//! 'outer: loop {
//! // Pick up signals that arrived since last time
//! for signal in signals.pending() {
//! match signal as libc::c_int {
//! SIGHUP => {
//! // Reload configuration
//! // Reopen the log file
//! }
//! SIGTERM | SIGINT | SIGQUIT => {
//! break 'outer;
//! },
//! # SIGUSR1 => return Ok(()),
//! _ => unreachable!(),
//! }
//! }
//! // Do some bit of work ‒ something with upper limit on waiting, so we don't block
//! // forever with a SIGTERM already waiting.
//! }
//! println!("Terminating. Bye bye");
//! Ok(())
//! }
//! ```
pub mod backend;
pub mod exfiltrator;
use std::borrow::Borrow;
use std::fmt::{Debug, Formatter, Result as FmtResult};
use std::io::{Error, ErrorKind, Read};
use std::os::unix::net::UnixStream;
use libc::{self, c_int};
pub use self::backend::{Handle, Pending};
use self::backend::{PollResult, RefSignalIterator, SignalDelivery};
use self::exfiltrator::{Exfiltrator, SignalOnly};
/// The main structure of the module, representing interest in some signals.
///
/// Unlike the helpers in other modules, this registers the signals when created and unregisters
/// them on drop. It provides the pending signals during its lifetime, either in batches or as an
/// infinite iterator.
///
/// Most users will want to use it through the [`Signals`] type alias for simplicity.
///
/// # Multiple threads
///
/// Instances of this struct can be [sent][std::marker::Send] to other threads. In a multithreaded
/// application this can be used to dedicate a separate thread for signal handling. In this case
/// you should get a [`Handle`] using the [`handle`][Signals::handle] method before sending the
/// `Signals` instance to a background thread. With the handle you will be able to shut down the
/// background thread later, or to add more signals while it is running.
///
/// The controller handle can be shared between as many threads as you like using its
/// [`clone`][Handle::clone] method.
///
/// # Exfiltrators
///
/// The [`SignalOnly`] exfiltrator provides only the signal number. There are further exfiltrators available in
/// the [`exfiltrator`] module. Note that some of them are behind feature flags that need to be
/// enabled.
///
/// # Examples
///
/// ```rust
/// # extern crate signal_hook;
/// #
/// # use std::io::Error;
/// # use std::thread;
/// use signal_hook::consts::signal::*;
/// use signal_hook::iterator::Signals;
///
/// #
/// # fn main() -> Result<(), Error> {
/// let mut signals = Signals::new(&[SIGUSR1, SIGUSR2])?;
/// let handle = signals.handle();
/// let thread = thread::spawn(move || {
/// for signal in &mut signals {
/// match signal {
/// SIGUSR1 => {},
/// SIGUSR2 => {},
/// _ => unreachable!(),
/// }
/// }
/// });
///
/// // Some time later...
/// handle.close();
/// thread.join().unwrap();
/// # Ok(())
/// # }
/// ```
pub struct SignalsInfo<E: Exfiltrator = SignalOnly>(SignalDelivery<UnixStream, E>);
impl<E: Exfiltrator> SignalsInfo<E> {
/// Creates the `Signals` structure.
///
/// This registers all the signals listed. The same restrictions (panics, errors) apply as
/// for the [`Handle::add_signal`] method.
pub fn new<I, S>(signals: I) -> Result<Self, Error>
where
I: IntoIterator<Item = S>,
S: Borrow<c_int>,
E: Default,
{
Self::with_exfiltrator(signals, E::default())
}
/// An advanced constructor with explicit [`Exfiltrator`].
pub fn with_exfiltrator<I, S>(signals: I, exfiltrator: E) -> Result<Self, Error>
where
I: IntoIterator<Item = S>,
S: Borrow<c_int>,
{
let (read, write) = UnixStream::pair()?;
Ok(SignalsInfo(SignalDelivery::with_pipe(
read,
write,
exfiltrator,
signals,
)?))
}
/// Registers another signal to the set watched by this [`Signals`] instance.
///
/// The same restrictions (panics, errors) apply as for the [`Handle::add_signal`]
/// method.
pub fn add_signal(&self, signal: c_int) -> Result<(), Error> {
self.handle().add_signal(signal)
}
/// Returns an iterator of already received signals.
///
/// This returns an iterator over all the signal numbers of the signals received since last
/// time they were read (out of the set registered by this `Signals` instance). Note that they
/// are returned in arbitrary order and a signal instance may be returned only once even if
/// it was received multiple times.
///
/// This method returns immediately (does not block) and may produce an empty iterator if there
/// are no signals ready.
pub fn pending(&mut self) -> Pending<E> {
self.0.pending()
}
/// Block until the stream contains some bytes.
///
/// Returns true if it was possible to read a byte and false otherwise.
fn has_signals(read: &mut UnixStream) -> Result<bool, Error> {
loop {
match read.read(&mut [0u8]) {
Ok(num_read) => break Ok(num_read > 0),
// If we get an EINTR error it is fine to retry reading from the stream.
// Otherwise we should pass on the error to the caller.
Err(error) => {
if error.kind() != ErrorKind::Interrupted {
break Err(error);
}
}
}
}
}
/// Waits for some signals to be available and returns an iterator.
///
/// This is similar to [`pending`][SignalsInfo::pending]. If there are no signals available, it
/// tries to wait for some to arrive. However, due to implementation details, this still can
/// produce an empty iterator.
///
/// This can block for an arbitrarily long time. If the [`Handle::close`] method is used in
/// another thread this method will return immediately.
///
/// Note that the blocking is done in this method, not in the iterator.
pub fn wait(&mut self) -> Pending<E> {
match self.0.poll_pending(&mut Self::has_signals) {
Ok(Some(pending)) => pending,
// Because of the blocking has_signals method the poll_pending method
// only returns None if the instance is closed. But we want to return
// a possibly empty pending object anyway.
Ok(None) => self.pending(),
// Users can't manipulate the internal file descriptors and the way we use them
// shouldn't produce any errors. So it is OK to panic.
Err(error) => panic!("Unexpected error: {}", error),
}
}
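// A minimal usage sketch for `wait` (illustrative only; assumes a
// `signals: Signals` instance built elsewhere):
//
// loop {
//     for signal in signals.wait() {
//         println!("received signal {:?}", signal);
//     }
//     if signals.is_closed() { break; }
// }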
/// Is it closed?
///
/// See [`close`][Handle::close].
pub fn is_closed(&self) -> bool {
self.handle().is_closed()
}
/// Get an infinite iterator over arriving signals.
///
/// The iterator's `next()` blocks as necessary to wait for signals to arrive. This is adequate
/// if you want to designate a thread solely to handling signals. If multiple signals come at
/// the same time (between two values produced by the iterator), they will be returned in
/// arbitrary order. Multiple instances of the same signal may be collated.
///
/// This is also the iterator returned by `IntoIterator` implementation on `&mut Signals`.
///
/// This iterator terminates only if explicitly [closed][Handle::close].
///
/// # Examples
///
/// ```rust
/// # extern crate libc;
/// # extern crate signal_hook;
/// #
/// # use std::io::Error;
/// # use std::thread;
/// #
/// use signal_hook::consts::signal::*;
/// use signal_hook::iterator::Signals;
///
/// # fn main() -> Result<(), Error> {
/// let mut signals = Signals::new(&[SIGUSR1, SIGUSR2])?;
/// let handle = signals.handle();
/// thread::spawn(move || {
/// for signal in signals.forever() {
/// match signal {
/// SIGUSR1 => {},
/// SIGUSR2 => {},
/// _ => unreachable!(),
/// }
/// }
/// });
/// handle.close();
/// # Ok(())
/// # }
/// ```
pub fn forever(&mut self) -> Forever<E> {
Forever(RefSignalIterator::new(&mut self.0))
}
/// Get a shareable [`Handle`] for this instance.
///
/// This can be used to add further signals or close the [`Signals`] instance.
pub fn handle(&self) -> Handle {
self.0.handle()
}
}
impl<E> Debug for SignalsInfo<E>
where
E: Debug + Exfiltrator,
E::Storage: Debug,
{
fn fmt( | lf, fmt: &mut Formatter) -> FmtResult {
fmt.debug_tuple("Signals").field(&self.0).finish()
}
}
impl<'a, E: Exfiltrator> IntoIterator for &'a mut SignalsInfo<E> {
type Item = E::Output;
type IntoIter = Forever<'a, E>;
fn into_iter(self) -> Self::IntoIter {
self.forever()
}
}
/// An infinite iterator of arriving signals.
pub struct Forever<'a, E: Exfiltrator>(RefSignalIterator<'a, UnixStream, E>);
impl<'a, E: Exfiltrator> Iterator for Forever<'a, E> {
type Item = E::Output;
fn next(&mut self) -> Option<E::Output> {
loop {
match self.0.poll_signal(&mut SignalsInfo::<E>::has_signals) {
PollResult::Signal(result) => break Some(result),
PollResult::Closed => break None,
// In theory, the poll_signal should not return PollResult::Pending. Nevertheless,
// there's a race condition - if the other side closes the pipe/socket after
// checking for it being closed, then the `read` there returns 0 as EOF. That
// appears as pending here. Next time we should get Closed.
PollResult::Pending => continue,
// Users can't manipulate the internal file descriptors and the way we use them
// shouldn't produce any errors. So it is OK to panic.
PollResult::Err(error) => panic!("Unexpected error: {}", error),
}
}
}
}
/// A type alias for an iterator returning just the signal numbers.
///
/// This is the simplified version for most use cases. For advanced usages, the
/// [`SignalsInfo`] with explicit [`Exfiltrator`] type can be used.
pub type Signals = SignalsInfo<SignalOnly>;
| &se | identifier_name |
windows.rs | // Copyright (c) Jørgen Tjernø <[email protected]>. All rights reserved.
use anyhow::{anyhow, bail, Context, Result};
use log::{debug, info, trace, warn};
use mail_slot::{MailslotClient, MailslotName};
use simplelog::*;
use std::{
fs::{File, OpenOptions},
io,
path::{Path, PathBuf},
process::{Command, Stdio},
};
use structopt::StructOpt;
use winreg::{enums::*, RegKey};
// How many bytes do we let the log size grow to before we rotate it? We only keep one current and one old log.
const MAX_LOG_SIZE: u64 = 64 * 1024;
// Flags needed to run delete_subkey_all as well as just set_value and enum_values on the same handle.
const ENUMERATE_AND_DELETE_FLAGS: u32 = winreg::enums::KEY_READ | winreg::enums::KEY_SET_VALUE;
const DISPLAY_NAME: &str = "Hermes URL Handler";
const DESCRIPTION: &str = "Open links to UE4 assets or custom editor actions";
fn get_protocol_registry_key(protocol: &str) -> String {
format!(r"SOFTWARE\Classes\{}", protocol)
}
fn get_configuration_registry_key(protocol: &str) -> String {
format!(r"Software\bitSpatter\Hermes\Protocols\{}", protocol)
}
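// A hedged illustration of the key layout these helpers produce; the protocol
// name "unreal" is hypothetical and this test is not part of the original source.
#[cfg(test)]
mod registry_key_examples {
    use super::*;
    #[test]
    fn key_paths() {
        assert_eq!(get_protocol_registry_key("unreal"), r"SOFTWARE\Classes\unreal");
        assert_eq!(
            get_configuration_registry_key("unreal"),
            r"Software\bitSpatter\Hermes\Protocols\unreal"
        );
    }
}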
/// Register associations with Windows to handle our protocol, and the command we'll invoke
fn register_command(
protocol: &str,
#[allow(clippy::ptr_arg)] commandline: &Vec<String>,
extra_args: Option<&str>,
) -> io::Result<()> {
use std::env::current_exe;
let exe_path = current_exe()?;
let exe_path = exe_path.to_str().unwrap_or_default().to_owned();
let icon_path = format!("\"{}\",0", exe_path);
let open_command = if let Some(extra_args) = extra_args {
format!("\"{}\" {} open \"%1\"", exe_path, extra_args)
} else {
format!("\"{}\" open \"%1\"", exe_path)
};
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
// Configure our ProgID to point to the right command
let protocol_path = get_protocol_registry_key(protocol);
let (progid_class, _) = hkcu.create_subkey(&protocol_path)?;
progid_class.set_value("", &format!("URL:{} Protocol", protocol))?;
// Indicates that this class defines a protocol handler
progid_class.set_value("URL Protocol", &"")?;
let (progid_class_defaulticon, _) = progid_class.create_subkey("DefaultIcon")?;
progid_class_defaulticon.set_value("", &icon_path)?;
debug!(
r"set HKEY_CURRENT_USER\{}\DefaultIcon to '{}'",
protocol_path, icon_path
);
let (progid_class_shell_open_command, _) = progid_class.create_subkey(r"shell\open\command")?;
progid_class_shell_open_command.set_value("", &open_command)?;
debug!(
r"set HKEY_CURRENT_USER\{}\shell\open\command to '{}'",
protocol_path, open_command
);
info!("registering command for {}://", protocol);
let config_path = get_configuration_registry_key(&protocol);
let (config, _) = hkcu.create_subkey(&config_path)?;
config.set_value("command", commandline)?;
debug!(
r"set HKEY_CURRENT_USER\{}\command to {:?}",
config_path, commandline
);
Ok(())
}
/// Remove all the registry keys that we've set up for a protocol
fn unregister_protocol(protocol: &str) {
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
let protocol_path = get_protocol_registry_key(protocol);
trace!("querying protocol registration at {}", protocol_path);
if let Ok(protocol_registry_key) =
hkcu.open_subkey_with_flags(&protocol_path, ENUMERATE_AND_DELETE_FLAGS)
{
info!("removing protocol registration for {}://", protocol);
let result = protocol_registry_key.delete_subkey_all("");
if let Err(error) = result {
warn!("unable to delete {}: {}", protocol_path, error);
}
} else {
trace!(
"could not open {}, assuming it doesn't exist",
protocol_path,
);
}
let _ = hkcu.delete_subkey(&protocol_path);
let configuration_path = get_configuration_registry_key(protocol);
trace!("querying configuration at {}", configuration_path);
if let Ok(configuration_registry_key) =
hkcu.open_subkey_with_flags(&configuration_path, ENUMERATE_AND_DELETE_FLAGS)
{
info!("removing configuration for {}://", protocol);
let result = configuration_registry_key.delete_subkey_all("");
if let Err(error) = result {
warn!("unable to delete {}: {}", configuration_path, error);
}
} else {
trace!(
"could not open {}, assuming it doesn't exist",
configuration_path,
);
}
let _ = hkcu.delete_subkey(&configuration_path);
}
/// Combine the path and query string from the given Url
fn get_path_and_extras(url: &url::Url) -> String {
let mut path = url.path().to_owned();
if let Some(query) = url.query() {
path += "?";
path += query;
}
path
}
/// Dispatch the given URL to the correct mailslot or launch the editor
fn open_url(url: &str) -> Result<()> {
let url = url::Url::parse(url)?;
let protocol = url.scheme();
let hostname = url
.host_str()
.ok_or_else(|| anyhow!("could not parse hostname from {}", url))?;
let path = get_path_and_extras(&url);
let full_path = format!("/{}{}", hostname, path);
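// e.g. the hypothetical URL "unreal://content/Maps/Demo?tab=details" yields
// full_path "/content/Maps/Demo?tab=details" (hostname "content" + path + query).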
trace!(
"split url {} into protocol={}, full_path={} (hostname={} + path={})",
url,
protocol,
full_path,
hostname,
path
);
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
let config = hkcu
.open_subkey(get_configuration_registry_key(protocol))
.with_context(|| format!("no hostnames registered when trying to handle url {}", url))?; | .get_value("command")
.with_context(|| format!("command not registered when trying to handle url {}", url))?;
let could_send = {
let slot = MailslotName::local(&format!(r"bitSpatter\Hermes\{}", protocol));
trace!("Attempting to send URL to mailslot {}", slot.to_string());
if let Ok(mut client) = MailslotClient::new(&slot) {
if let Err(error) = client.send_message(full_path.as_bytes()) {
warn!("Could not send mail slot message to {}: {} -- assuming application is shutting down, starting a new one", slot.to_string(), error);
false
} else {
trace!("Delivered using Mailslot");
true
}
} else {
trace!("Could not connect to Mailslot, assuming application is not running");
false
}
};
if !could_send {
let (exe_name, args) = {
debug!(
"registered handler for {}: {:?}",
protocol, protocol_command
);
let mut protocol_command = protocol_command.into_iter();
let exe_name = protocol_command
.next()
.ok_or_else(|| anyhow!("empty command specified for hostname {}", hostname))?;
// TODO: Handle %%1 as an escape?
let args: Vec<_> = protocol_command
.map(|arg: String| arg.replace("%1", &full_path))
.collect();
(exe_name, args)
};
info!("executing {:?} with arguments {:?}", exe_name, args);
Command::new(&exe_name)
.args(&args)
.stdout(Stdio::null())
.stderr(Stdio::null())
.stdin(Stdio::null())
.spawn()
.with_context(|| format!("Failed to execute {:?} {:?}", exe_name, args))?;
}
Ok(())
}
/// Validate the scheme according to RFC3986 (https://datatracker.ietf.org/doc/html/rfc3986)
fn parse_scheme(src: &str) -> Result<String, anyhow::Error> {
let src = src.trim();
let mut chars = src.chars();
let first_char = chars
.next()
.ok_or_else(|| anyhow!("protocol needs to contain at least one character"))?;
if !first_char.is_ascii_alphabetic() {
bail!(
"protocol '{}' needs to start with an alphabetic character",
src
);
}
for char in chars {
if !char.is_ascii_alphanumeric() && char != '+' && char != '-' && char != '.' {
bail!("protocol '{}' can only contain the letters a-z, the numbers 0-9, '+', '-', and '.'", src);
}
}
Ok(src.to_lowercase())
}
// This is the definition of our command line options
#[derive(Debug, StructOpt)]
#[structopt(
name = DISPLAY_NAME,
about = DESCRIPTION
)]
struct CommandOptions {
/// Use verbose logging
#[structopt(short, long)]
verbose: bool,
/// Use debug logging, even more verbose than --verbose
#[structopt(long)]
debug: bool,
/// Choose the mode of operation
#[structopt(subcommand)]
mode: ExecutionMode,
}
#[derive(Debug, StructOpt)]
enum ExecutionMode {
/// Dispatch the given URL to Unreal Engine (or launch it, if needed)
Open {
/// URL to open
url: String,
},
/// Register this EXE as a URL protocol handler
Register {
/// The protocol this exe will be registered for
#[structopt(parse(try_from_str = parse_scheme))]
protocol: String,
/// Enable debug logging for this registration
#[structopt(long)]
register_with_debugging: bool,
/// The command line that will handle the registration if needed, where %1 is the placeholder for the path
commandline: Vec<String>,
},
/// Remove all registry entries for the URL protocol handler & hostname configuration
Unregister {
/// The protocol we will delete the registration for
#[structopt(parse(try_from_str = parse_scheme))]
protocol: String,
},
}
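// Illustrative invocations of the three subcommands (the binary name
// "hermes.exe" is an assumption, not taken from the source):
// `hermes.exe register unreal "C:\Editor.exe" "%1"`,
// `hermes.exe open "unreal://content/Maps/Demo"`, and
// `hermes.exe unregister unreal`.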
fn get_exe_relative_path(filename: &str) -> io::Result<PathBuf> {
let mut path = std::env::current_exe()?;
path.set_file_name(filename);
Ok(path)
}
fn rotate_and_open_log(log_path: &Path) -> Result<File, io::Error> {
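// Rotation strategy: once the log exceeds MAX_LOG_SIZE, try renaming it to
// `hermes.log.old`; if the rename fails, try deleting it; only when both fail
// do we truncate in place via `File::create`. In every other case we append,
// creating the file if it does not exist yet.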
if let Ok(log_info) = std::fs::metadata(&log_path) {
if log_info.len() > MAX_LOG_SIZE
&& std::fs::rename(&log_path, log_path.with_extension("log.old")).is_err()
&& std::fs::remove_file(log_path).is_err()
{
return File::create(log_path);
}
}
return OpenOptions::new().append(true).create(true).open(log_path);
}
fn init() -> Result<CommandOptions> {
// First parse our command line options, so we can use it to configure the logging.
let options = CommandOptions::from_args();
let log_level = if options.debug {
LevelFilter::Trace
} else if options.verbose {
LevelFilter::Debug
} else {
LevelFilter::Info
};
let mut loggers: Vec<Box<dyn SharedLogger>> = Vec::new();
// Always log to hermes.log
let log_path = get_exe_relative_path("hermes.log")?;
loggers.push(WriteLogger::new(
log_level,
Config::default(),
rotate_and_open_log(&log_path)?,
));
// We only use the terminal logger in the debug build, since we don't allocate a console window otherwise.
if cfg!(debug_assertions) {
loggers.push(TermLogger::new(
log_level,
Config::default(),
TerminalMode::Mixed,
));
};
CombinedLogger::init(loggers)?;
trace!("command line options: {:?}", options);
Ok(options)
}
fn get_debug_args(register_with_debugging: bool) -> Option<&'static str> {
if register_with_debugging {
Some("--debug")
} else {
None
}
}
pub fn main() -> Result<()> {
let options = init()?;
trace!(
"running from directory {}",
std::env::current_dir().unwrap_or_default().display()
);
match options.mode {
ExecutionMode::Register {
protocol,
commandline,
register_with_debugging,
} => {
register_command(
&protocol,
&commandline,
get_debug_args(register_with_debugging),
)
.with_context(|| format!("Failed to register command for {}://", protocol))?;
}
ExecutionMode::Unregister { protocol } => {
info!("unregistering handler for {}://", protocol);
unregister_protocol(&protocol);
}
ExecutionMode::Open { url } => {
open_url(&url).with_context(|| format!("Failed to open url {}", url))?;
}
}
Ok(())
} | let protocol_command: Vec<_> = config | random_line_split |
windows.rs | // Copyright (c) Jørgen Tjernø <[email protected]>. All rights reserved.
use anyhow::{anyhow, bail, Context, Result};
use log::{debug, info, trace, warn};
use mail_slot::{MailslotClient, MailslotName};
use simplelog::*;
use std::{
fs::{File, OpenOptions},
io,
path::{Path, PathBuf},
process::{Command, Stdio},
};
use structopt::StructOpt;
use winreg::{enums::*, RegKey};
// How many bytes do we let the log size grow to before we rotate it? We only keep one current and one old log.
const MAX_LOG_SIZE: u64 = 64 * 1024;
// Flags needed to run delete_subkey_all as well as just set_value and enum_values on the same handle.
const ENUMERATE_AND_DELETE_FLAGS: u32 = winreg::enums::KEY_READ | winreg::enums::KEY_SET_VALUE;
const DISPLAY_NAME: &str = "Hermes URL Handler";
const DESCRIPTION: &str = "Open links to UE4 assets or custom editor actions";
fn get_protocol_registry_key(protocol: &str) -> String {
format!(r"SOFTWARE\Classes\{}", protocol)
}
fn get_configuration_registry_key(protocol: &str) -> String {
format!(r"Software\bitSpatter\Hermes\Protocols\{}", protocol)
}
/// Register associations with Windows to handle our protocol, and the command we'll invoke
fn register_command(
protocol: &str,
#[allow(clippy::ptr_arg)] commandline: &Vec<String>,
extra_args: Option<&str>,
) -> io::Result<()> {
use std::env::current_exe;
let exe_path = current_exe()?;
let exe_path = exe_path.to_str().unwrap_or_default().to_owned();
let icon_path = format!("\"{}\",0", exe_path);
let open_command = if let Some(extra_args) = extra_args {
format!("\"{}\" {} open \"%1\"", exe_path, extra_args)
} else {
format!("\"{}\" open \"%1\"", exe_path)
};
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
// Configure our ProgID to point to the right command
let protocol_path = get_protocol_registry_key(protocol);
let (progid_class, _) = hkcu.create_subkey(&protocol_path)?;
progid_class.set_value("", &format!("URL:{} Protocol", protocol))?;
// Indicates that this class defines a protocol handler
progid_class.set_value("URL Protocol", &"")?;
let (progid_class_defaulticon, _) = progid_class.create_subkey("DefaultIcon")?;
progid_class_defaulticon.set_value("", &icon_path)?;
debug!(
r"set HKEY_CURRENT_USER\{}\DefaultIcon to '{}'",
protocol_path, icon_path
);
let (progid_class_shell_open_command, _) = progid_class.create_subkey(r"shell\open\command")?;
progid_class_shell_open_command.set_value("", &open_command)?;
debug!(
r"set HKEY_CURRENT_USER\{}\shell\open\command to '{}'",
protocol_path, open_command
);
info!("registering command for {}://", protocol);
let config_path = get_configuration_registry_key(&protocol);
let (config, _) = hkcu.create_subkey(&config_path)?;
config.set_value("command", commandline)?;
debug!(
r"set HKEY_CURRENT_USER\{}\command to {:?}",
config_path, commandline
);
Ok(())
}
/// Remove all the registry keys that we've set up for a protocol
fn unregister_protocol(protocol: &str) {
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
let protocol_path = get_protocol_registry_key(protocol);
trace!("querying protocol registration at {}", protocol_path);
if let Ok(protocol_registry_key) =
hkcu.open_subkey_with_flags(&protocol_path, ENUMERATE_AND_DELETE_FLAGS)
{
info!("removing protocol registration for {}://", protocol);
let result = protocol_registry_key.delete_subkey_all("");
if let Err(error) = result {
warn!("unable to delete {}: {}", protocol_path, error);
}
} else {
trace!(
"could not open {}, assuming it doesn't exist",
protocol_path,
);
}
let _ = hkcu.delete_subkey(&protocol_path);
let configuration_path = get_configuration_registry_key(protocol);
trace!("querying configuration at {}", configuration_path);
if let Ok(configuration_registry_key) =
hkcu.open_subkey_with_flags(&configuration_path, ENUMERATE_AND_DELETE_FLAGS)
{
info!("removing configuration for {}://", protocol);
let result = configuration_registry_key.delete_subkey_all("");
if let Err(error) = result {
warn!("unable to delete {}: {}", configuration_path, error);
}
} else {
trace!(
"could not open {}, assuming it doesn't exist",
configuration_path,
);
}
let _ = hkcu.delete_subkey(&configuration_path);
}
/// Combine the path and query string from the given Url
fn ge | rl: &url::Url) -> String {
let mut path = url.path().to_owned();
if let Some(query) = url.query() {
path += "?";
path += query;
}
path
}
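// A hedged illustration of the path/query combination above (hypothetical test
// and URL, not part of the original source):
#[cfg(test)]
mod path_and_extras_examples {
    use super::get_path_and_extras;
    #[test]
    fn keeps_query_string() {
        let url = url::Url::parse("unreal://content/Maps/Demo?tab=details").unwrap();
        // Note: the hostname ("content") is *not* part of the result.
        assert_eq!(get_path_and_extras(&url), "/Maps/Demo?tab=details");
    }
}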
/// Dispatch the given URL to the correct mailslot or launch the editor
fn open_url(url: &str) -> Result<()> {
let url = url::Url::parse(url)?;
let protocol = url.scheme();
let hostname = url
.host_str()
.ok_or_else(|| anyhow!("could not parse hostname from {}", url))?;
let path = get_path_and_extras(&url);
let full_path = format!("/{}{}", hostname, path);
trace!(
"split url {} into protocol={}, full_path={} (hostname={} + path={})",
url,
protocol,
full_path,
hostname,
path
);
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
let config = hkcu
.open_subkey(get_configuration_registry_key(protocol))
.with_context(|| format!("no hostnames registered when trying to handle url {}", url))?;
let protocol_command: Vec<_> = config
.get_value("command")
.with_context(|| format!("command not registered when trying to handle url {}", url))?;
let could_send = {
let slot = MailslotName::local(&format!(r"bitSpatter\Hermes\{}", protocol));
trace!("Attempting to send URL to mailslot {}", slot.to_string());
if let Ok(mut client) = MailslotClient::new(&slot) {
if let Err(error) = client.send_message(full_path.as_bytes()) {
warn!("Could not send mail slot message to {}: {} -- assuming application is shutting down, starting a new one", slot.to_string(), error);
false
} else {
trace!("Delivered using Mailslot");
true
}
} else {
trace!("Could not connect to Mailslot, assuming application is not running");
false
}
};
if !could_send {
let (exe_name, args) = {
debug!(
"registered handler for {}: {:?}",
protocol, protocol_command
);
let mut protocol_command = protocol_command.into_iter();
let exe_name = protocol_command
.next()
.ok_or_else(|| anyhow!("empty command specified for hostname {}", hostname))?;
// TODO: Handle %%1 as an escape?
let args: Vec<_> = protocol_command
.map(|arg: String| arg.replace("%1", &full_path))
.collect();
(exe_name, args)
};
info!("executing {:?} with arguments {:?}", exe_name, args);
Command::new(&exe_name)
.args(&args)
.stdout(Stdio::null())
.stderr(Stdio::null())
.stdin(Stdio::null())
.spawn()
.with_context(|| format!("Failed to execute {:?} {:?}", exe_name, args))?;
}
Ok(())
}
/// Validate the scheme according to RFC3986 (https://datatracker.ietf.org/doc/html/rfc3986)
fn parse_scheme(src: &str) -> Result<String, anyhow::Error> {
let src = src.trim();
let mut chars = src.chars();
let first_char = chars
.next()
.ok_or_else(|| anyhow!("protocol needs to contain at least one character"))?;
if !first_char.is_ascii_alphabetic() {
bail!(
"protocol '{}' needs to start with an alphabetic character",
src
);
}
for char in chars {
if !char.is_ascii_alphanumeric() && char != '+' && char != '-' && char != '.' {
bail!("protocol '{}' can only contain the letters a-z, the numbers 0-9, '+', '-', and '.'", src);
}
}
Ok(src.to_lowercase())
}
// This is the definition of our command line options
#[derive(Debug, StructOpt)]
#[structopt(
name = DISPLAY_NAME,
about = DESCRIPTION
)]
struct CommandOptions {
/// Use verbose logging
#[structopt(short, long)]
verbose: bool,
/// Use debug logging, even more verbose than --verbose
#[structopt(long)]
debug: bool,
/// Choose the mode of operation
#[structopt(subcommand)]
mode: ExecutionMode,
}
#[derive(Debug, StructOpt)]
enum ExecutionMode {
/// Dispatch the given URL to Unreal Engine (or launch it, if needed)
Open {
/// URL to open
url: String,
},
/// Register this EXE as a URL protocol handler
Register {
/// The protocol this exe will be registered for
#[structopt(parse(try_from_str = parse_scheme))]
protocol: String,
/// Enable debug logging for this registration
#[structopt(long)]
register_with_debugging: bool,
/// The command line that will handle the registration if needed, where %1 is the placeholder for the path
commandline: Vec<String>,
},
/// Remove all registry entries for the URL protocol handler & hostname configuration
Unregister {
/// The protocol we will delete the registration for
#[structopt(parse(try_from_str = parse_scheme))]
protocol: String,
},
}
fn get_exe_relative_path(filename: &str) -> io::Result<PathBuf> {
let mut path = std::env::current_exe()?;
path.set_file_name(filename);
Ok(path)
}
fn rotate_and_open_log(log_path: &Path) -> Result<File, io::Error> {
if let Ok(log_info) = std::fs::metadata(&log_path) {
if log_info.len() > MAX_LOG_SIZE
&& std::fs::rename(&log_path, log_path.with_extension("log.old")).is_err()
&& std::fs::remove_file(log_path).is_err()
{
return File::create(log_path);
}
}
return OpenOptions::new().append(true).create(true).open(log_path);
}
fn init() -> Result<CommandOptions> {
// First parse our command line options, so we can use it to configure the logging.
let options = CommandOptions::from_args();
let log_level = if options.debug {
LevelFilter::Trace
} else if options.verbose {
LevelFilter::Debug
} else {
LevelFilter::Info
};
let mut loggers: Vec<Box<dyn SharedLogger>> = Vec::new();
// Always log to hermes.log
let log_path = get_exe_relative_path("hermes.log")?;
loggers.push(WriteLogger::new(
log_level,
Config::default(),
rotate_and_open_log(&log_path)?,
));
// We only use the terminal logger in the debug build, since we don't allocate a console window otherwise.
if cfg!(debug_assertions) {
loggers.push(TermLogger::new(
log_level,
Config::default(),
TerminalMode::Mixed,
));
};
CombinedLogger::init(loggers)?;
trace!("command line options: {:?}", options);
Ok(options)
}
fn get_debug_args(register_with_debugging: bool) -> Option<&'static str> {
if register_with_debugging {
Some("--debug")
} else {
None
}
}
pub fn main() -> Result<()> {
let options = init()?;
trace!(
"running from directory {}",
std::env::current_dir().unwrap_or_default().display()
);
match options.mode {
ExecutionMode::Register {
protocol,
commandline,
register_with_debugging,
} => {
register_command(
&protocol,
&commandline,
get_debug_args(register_with_debugging),
)
.with_context(|| format!("Failed to register command for {}://", protocol))?;
}
ExecutionMode::Unregister { protocol } => {
info!("unregistering handler for {}://", protocol);
unregister_protocol(&protocol);
}
ExecutionMode::Open { url } => {
open_url(&url).with_context(|| format!("Failed to open url {}", url))?;
}
}
Ok(())
}
| t_path_and_extras(u | identifier_name |
windows.rs | // Copyright (c) Jørgen Tjernø <[email protected]>. All rights reserved.
use anyhow::{anyhow, bail, Context, Result};
use log::{debug, info, trace, warn};
use mail_slot::{MailslotClient, MailslotName};
use simplelog::*;
use std::{
fs::{File, OpenOptions},
io,
path::{Path, PathBuf},
process::{Command, Stdio},
};
use structopt::StructOpt;
use winreg::{enums::*, RegKey};
// How many bytes do we let the log size grow to before we rotate it? We only keep one current and one old log.
const MAX_LOG_SIZE: u64 = 64 * 1024;
// Flags needed to run delete_subkey_all as well as just set_value and enum_values on the same handle.
const ENUMERATE_AND_DELETE_FLAGS: u32 = winreg::enums::KEY_READ | winreg::enums::KEY_SET_VALUE;
const DISPLAY_NAME: &str = "Hermes URL Handler";
const DESCRIPTION: &str = "Open links to UE4 assets or custom editor actions";
fn get_protocol_registry_key(protocol: &str) -> String {
format!(r"SOFTWARE\Classes\{}", protocol)
}
fn get_configuration_registry_key(protocol: &str) -> String {
format!(r"Software\bitSpatter\Hermes\Protocols\{}", protocol)
}
/// Register associations with Windows to handle our protocol, and the command we'll invoke
fn register_command(
protocol: &str,
#[allow(clippy::ptr_arg)] commandline: &Vec<String>,
extra_args: Option<&str>,
) -> io::Result<()> {
use std::env::current_exe;
let exe_path = current_exe()?;
let exe_path = exe_path.to_str().unwrap_or_default().to_owned();
let icon_path = format!("\"{}\",0", exe_path);
let open_command = if let Some(extra_args) = extra_args {
format!("\"{}\" {} open \"%1\"", exe_path, extra_args)
} else {
format!("\"{}\" open \"%1\"", exe_path)
};
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
// Configure our ProgID to point to the right command
let protocol_path = get_protocol_registry_key(protocol);
let (progid_class, _) = hkcu.create_subkey(&protocol_path)?;
progid_class.set_value("", &format!("URL:{} Protocol", protocol))?;
// Indicates that this class defines a protocol handler
progid_class.set_value("URL Protocol", &"")?;
let (progid_class_defaulticon, _) = progid_class.create_subkey("DefaultIcon")?;
progid_class_defaulticon.set_value("", &icon_path)?;
debug!(
r"set HKEY_CURRENT_USER\{}\DefaultIcon to '{}'",
protocol_path, icon_path
);
let (progid_class_shell_open_command, _) = progid_class.create_subkey(r"shell\open\command")?;
progid_class_shell_open_command.set_value("", &open_command)?;
debug!(
r"set HKEY_CURRENT_USER\{}\shell\open\command to '{}'",
protocol_path, open_command
);
info!("registering command for {}://", protocol);
let config_path = get_configuration_registry_key(&protocol);
let (config, _) = hkcu.create_subkey(&config_path)?;
config.set_value("command", commandline)?;
debug!(
r"set HKEY_CURRENT_USER\{}\command to {:?}",
config_path, commandline
);
Ok(())
}
/// Remove all the registry keys that we've set up for a protocol
fn unregister_protocol(protocol: &str) {
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
let protocol_path = get_protocol_registry_key(protocol);
trace!("querying protocol registration at {}", protocol_path);
if let Ok(protocol_registry_key) =
hkcu.open_subkey_with_flags(&protocol_path, ENUMERATE_AND_DELETE_FLAGS)
{
info!("removing protocol registration for {}://", protocol);
let result = protocol_registry_key.delete_subkey_all("");
if let Err(error) = result {
warn!("unable to delete {}: {}", protocol_path, error);
}
} else {
trace!(
"could not open {}, assuming it doesn't exist",
protocol_path,
);
}
let _ = hkcu.delete_subkey(&protocol_path);
let configuration_path = get_configuration_registry_key(protocol);
trace!("querying configuration at {}", configuration_path);
if let Ok(configuration_registry_key) =
hkcu.open_subkey_with_flags(&configuration_path, ENUMERATE_AND_DELETE_FLAGS)
{
info!("removing configuration for {}://", protocol);
let result = configuration_registry_key.delete_subkey_all("");
if let Err(error) = result {
warn!("unable to delete {}: {}", configuration_path, error);
}
} else {
trace!(
"could not open {}, assuming it doesn't exist",
configuration_path,
);
}
let _ = hkcu.delete_subkey(&configuration_path);
}
/// Combine the path and query string from the given Url
fn get_path_and_extras(url: &url::Url) -> String {
let mut path = url.path().to_owned();
if let Some(query) = url.query() {
path += "?";
path += query;
}
path
}
/// Dispatch the given URL to the correct mailslot or launch the editor
fn open_url(url: &str) -> Result<()> {
let url = url::Url::parse(url)?;
let protocol = url.scheme();
let hostname = url
.host_str()
.ok_or_else(|| anyhow!("could not parse hostname from {}", url))?;
let path = get_path_and_extras(&url);
let full_path = format!("/{}{}", hostname, path);
trace!(
"split url {} into protocol={}, full_path={} (hostname={} + path={})",
url,
protocol,
full_path,
hostname,
path
);
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
let config = hkcu
.open_subkey(get_configuration_registry_key(protocol))
.with_context(|| format!("no hostnames registered when trying to handle url {}", url))?;
let protocol_command: Vec<_> = config
.get_value("command")
.with_context(|| format!("command not registered when trying to handle url {}", url))?;
let could_send = {
let slot = MailslotName::local(&format!(r"bitSpatter\Hermes\{}", protocol));
trace!("Attempting to send URL to mailslot {}", slot.to_string());
if let Ok(mut client) = MailslotClient::new(&slot) {
| lse {
trace!("Could not connect to Mailslot, assuming application is not running");
false
}
};
if !could_send {
let (exe_name, args) = {
debug!(
"registered handler for {}: {:?}",
protocol, protocol_command
);
let mut protocol_command = protocol_command.into_iter();
let exe_name = protocol_command
.next()
.ok_or_else(|| anyhow!("empty command specified for hostname {}", hostname))?;
// TODO: Handle %%1 as an escape?
let args: Vec<_> = protocol_command
.map(|arg: String| arg.replace("%1", &full_path))
.collect();
(exe_name, args)
};
info!("executing {:?} with arguments {:?}", exe_name, args);
Command::new(&exe_name)
.args(&args)
.stdout(Stdio::null())
.stderr(Stdio::null())
.stdin(Stdio::null())
.spawn()
.with_context(|| format!("Failed to execute {:?} {:?}", exe_name, args))?;
}
Ok(())
}
/// Validate the scheme according to RFC3986 (https://datatracker.ietf.org/doc/html/rfc3986)
fn parse_scheme(src: &str) -> Result<String, anyhow::Error> {
let src = src.trim();
let mut chars = src.chars();
let first_char = chars
.next()
.ok_or_else(|| anyhow!("protocol needs to contain at least one character"))?;
if !first_char.is_ascii_alphabetic() {
bail!(
"protocol '{}' needs to start with an alphabetic character",
src
);
}
for char in chars {
if !char.is_ascii_alphanumeric() && char != '+' && char != '-' && char != '.' {
bail!("protocol '{}' can only contain the letters a-z, the numbers 0-9, '+', '-', and '.'", src);
}
}
Ok(src.to_lowercase())
}
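// A hedged illustration of the RFC3986 scheme validation above (hypothetical
// test, not part of the original source):
#[cfg(test)]
mod parse_scheme_examples {
    use super::parse_scheme;
    #[test]
    fn validates_and_lowercases() {
        // Valid schemes are trimmed and lowercased.
        assert_eq!(parse_scheme(" Unreal+Asset ").unwrap(), "unreal+asset");
        // Must start with an alphabetic character.
        assert!(parse_scheme("4dtv").is_err());
        // '_' is outside the allowed alphabet (a-z, 0-9, '+', '-', '.').
        assert!(parse_scheme("my_proto").is_err());
    }
}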
// This is the definition of our command line options
#[derive(Debug, StructOpt)]
#[structopt(
name = DISPLAY_NAME,
about = DESCRIPTION
)]
struct CommandOptions {
/// Use verbose logging
#[structopt(short, long)]
verbose: bool,
/// Use debug logging, even more verbose than --verbose
#[structopt(long)]
debug: bool,
/// Choose the mode of operation
#[structopt(subcommand)]
mode: ExecutionMode,
}
#[derive(Debug, StructOpt)]
enum ExecutionMode {
/// Dispatch the given URL to Unreal Engine (or launch it, if needed)
Open {
/// URL to open
url: String,
},
/// Register this EXE as a URL protocol handler
Register {
/// The protocol this exe will be registered for
#[structopt(parse(try_from_str = parse_scheme))]
protocol: String,
/// Enable debug logging for this registration
#[structopt(long)]
register_with_debugging: bool,
/// The command line that will handle the registration if needed, where %1 is the placeholder for the path
commandline: Vec<String>,
},
/// Remove all registry entries for the URL protocol handler & hostname configuration
Unregister {
/// The protocol we will delete the registration for
#[structopt(parse(try_from_str = parse_scheme))]
protocol: String,
},
}
fn get_exe_relative_path(filename: &str) -> io::Result<PathBuf> {
let mut path = std::env::current_exe()?;
path.set_file_name(filename);
Ok(path)
}
fn rotate_and_open_log(log_path: &Path) -> Result<File, io::Error> {
if let Ok(log_info) = std::fs::metadata(&log_path) {
if log_info.len() > MAX_LOG_SIZE
&& std::fs::rename(&log_path, log_path.with_extension("log.old")).is_err()
&& std::fs::remove_file(log_path).is_err()
{
return File::create(log_path);
}
}
return OpenOptions::new().append(true).create(true).open(log_path);
}
fn init() -> Result<CommandOptions> {
// First parse our command line options, so we can use it to configure the logging.
let options = CommandOptions::from_args();
let log_level = if options.debug {
LevelFilter::Trace
} else if options.verbose {
LevelFilter::Debug
} else {
LevelFilter::Info
};
let mut loggers: Vec<Box<dyn SharedLogger>> = Vec::new();
// Always log to hermes.log
let log_path = get_exe_relative_path("hermes.log")?;
loggers.push(WriteLogger::new(
log_level,
Config::default(),
rotate_and_open_log(&log_path)?,
));
// We only use the terminal logger in the debug build, since we don't allocate a console window otherwise.
if cfg!(debug_assertions) {
loggers.push(TermLogger::new(
log_level,
Config::default(),
TerminalMode::Mixed,
));
};
CombinedLogger::init(loggers)?;
trace!("command line options: {:?}", options);
Ok(options)
}
fn get_debug_args(register_with_debugging: bool) -> Option<&'static str> {
if register_with_debugging {
Some("--debug")
} else {
None
}
}
pub fn main() -> Result<()> {
let options = init()?;
trace!(
"running from directory {}",
std::env::current_dir().unwrap_or_default().display()
);
match options.mode {
ExecutionMode::Register {
protocol,
commandline,
register_with_debugging,
} => {
register_command(
&protocol,
&commandline,
get_debug_args(register_with_debugging),
)
.with_context(|| format!("Failed to register command for {}://", protocol))?;
}
ExecutionMode::Unregister { protocol } => {
info!("unregistering handler for {}://", protocol);
unregister_protocol(&protocol);
}
ExecutionMode::Open { url } => {
open_url(&url).with_context(|| format!("Failed to open url {}", url))?;
}
}
Ok(())
}
| if let Err(error) = client.send_message(full_path.as_bytes()) {
warn!("Could not send mail slot message to {}: {} -- assuming application is shutting down, starting a new one", slot.to_string(), error);
false
} else {
trace!("Delivered using Mailslot");
true
}
} e | conditional_block |
lib.rs | //! # `uc8159`
//!
//! This is a driver crate for accessing the `uc8159` E-Ink display controller. For now, most of
//! the options are hard-coded for the
//! [Pimoroni Inky Impression](https://shop.pimoroni.com/products/inky-impression) display as that's
//! the only one I own, so proposing changes to add more features is most welcome!
//!
//! # Usage
//!
//! Get started by creating a [`Display`] instance. Populate the display buffer by using `fill`,
//! `copy_from`, `set_pixel`, etc. When you want to display the buffer contents to the screen, use
//! `show`.
#![cfg_attr(not(feature = "std"), no_std)]
use core::convert;
use core::marker;
use core::mem;
use core::slice;
/// Colors available on a 7-color e-ink display.
#[derive(Clone, Copy, Debug, Eq, Ord, PartialEq, PartialOrd)]
#[repr(u8)]
pub enum Color {
Black = 0,
White = 1,
Green = 2,
Blue = 3,
Red = 4,
Yellow = 5,
Orange = 6,
/// The absence of any color, i.e. the display will show no ink, which usually just means it
/// will show traces of whatever was there before.
Clean = 7,
}
/// A pre-computed palette which can be used to map arbitrary RGB colors onto a [`Display`].
#[derive(Clone, Debug)]
pub struct Palette([[u8; 3]; 7]);
/// Configuration when creating a new [`Display`] instance.
#[derive(Clone, Debug, Eq, Ord, PartialEq, PartialOrd)]
pub struct Config {
/// The color that will be used to fill the outline around the display, which can't be
/// controlled as individual pixels.
pub border_color: Color,
}
// Currently hard-coded behavior for Pimoroni Inky Impression
const WIDTH: usize = 600;
const HEIGHT: usize = 448;
const SPI_CHUNK_SIZE: usize = 4096;
#[derive(Clone, Copy, Debug, Eq, Ord, PartialEq, PartialOrd)]
#[repr(u8)]
enum Command {
PSR = 0x00,
PWR = 0x01,
POF = 0x02,
PFS = 0x03,
PON = 0x04,
// BTST = 0x06,
// DSLP = 0x07,
DTM1 = 0x10,
// DSP = 0x11,
DRF = 0x12,
// IPC = 0x13,
PLL = 0x30,
// TSC = 0x40,
TSE = 0x41,
// TSW = 0x42,
// TSR = 0x43,
CDI = 0x50,
// LPD = 0x51,
TCON = 0x60,
TRES = 0x61,
DAM = 0x65,
// REV = 0x70,
// FLG = 0x71,
// AMV = 0x80,
// VV = 0x81,
// VDCS = 0x82,
PWS = 0xE3,
// TSSET = 0xE5,
}
/// An instance of a display which is governed by a particular `uc8159` controller.
#[derive(Debug)]
pub struct Display<SPI, TIMER, RESET, BUSY, DC, ERR = convert::Infallible>
where
SPI: embedded_hal::blocking::spi::Write<u8>,
TIMER: embedded_hal::blocking::delay::DelayMs<u16>,
RESET: embedded_hal::digital::v2::OutputPin,
BUSY: embedded_hal::digital::v2::InputPin,
DC: embedded_hal::digital::v2::OutputPin,
ERR: From<SPI::Error> + From<RESET::Error> + From<BUSY::Error> + From<DC::Error>,
{
spi: SPI,
delay: TIMER,
reset: RESET,
busy: BUSY,
dc: DC,
config: Config,
buffer: [u8; WIDTH / 2 * HEIGHT],
phantom: marker::PhantomData<ERR>,
}
impl<SPI, DELAY, RESET, BUSY, DC, ERR> Display<SPI, DELAY, RESET, BUSY, DC, ERR>
where
SPI: embedded_hal::blocking::spi::Write<u8>,
DELAY: embedded_hal::blocking::delay::DelayMs<u16>,
RESET: embedded_hal::digital::v2::OutputPin,
BUSY: embedded_hal::digital::v2::InputPin,
DC: embedded_hal::digital::v2::OutputPin,
ERR: From<SPI::Error> + From<RESET::Error> + From<BUSY::Error> + From<DC::Error>,
{
/// Creates a new display instance.
///
/// The provided `spi` bus will be used for most of the communication. The `delay` instance
/// is used when waiting for reset and drawing operations to complete. The `reset` pin can be
/// provided to make sure the device is reset before each new draw command. The `busy` pin is
/// used to poll to see when draw operations are complete. The `dc` pin is used to signal
/// whether the sent `spi` signal is a command (low) or data (high).
pub fn new(spi: SPI, delay: DELAY, reset: RESET, busy: BUSY, dc: DC, config: Config) -> Self {
let phantom = marker::PhantomData;
let buffer = [0; WIDTH / 2 * HEIGHT];
Self {
spi,
delay,
reset,
busy,
dc,
config,
buffer,
phantom,
}
}
/// The width of the display.
pub fn width(&self) -> usize {
WIDTH
}
/// The height of the display.
pub fn height(&self) -> usize {
HEIGHT
}
/// Fills the entire display using a single color.
///
/// This is a pretty fast operation compared to e.g. calling `set_pixel`.
pub fn fill(&mut self, color: Color) {
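// Two pixels are packed per byte, left pixel in the high nibble, so e.g. a
// solid Red (4) fill writes 0x44 into every buffer byte.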
self.buffer = [((color as u8) << 4) | color as u8; WIDTH / 2 * HEIGHT];
}
/// Copies data from another source in bulk.
///
/// The color data must contain exactly `width() * height()` elements and be in row-major order.
pub fn copy_from(&mut self, color: &[Color]) {
for (idx, cell) in color.chunks(2).enumerate() {
self.buffer[idx] = ((cell[0] as u8) << 4) | cell[1] as u8;
}
}
/// Sets a specific pixel color.
pub fn set_pixel(&mut self, x: usize, y: usize, color: Color) {
let cell = &mut self.buffer[y * WIDTH / 2 + x / 2];
if (x & 1) == 0 {
*cell = (*cell & 0b00001111) | ((color as u8) << 4);
} else |
}
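// Illustrative: set_pixel(3, 0, Color::Blue) touches buffer byte 1 (x / 2)
// and, because x is odd, writes Blue (3) into the low nibble, leaving the
// neighbouring pixel's nibble intact.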
/// Displays the contents of the internal buffer to the screen.
///
/// This operation blocks until the contents are completely shown.
pub fn show(&mut self) -> Result<(), ERR> {
self.setup()?;
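// View the buffer as a plain &[u8] for the SPI transfer; the buffer is
// already an array of bytes, so this reinterpretation cannot change its
// size or alignment.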
let ptr = &self.buffer as *const _ as *const u8;
let len = mem::size_of_val(&self.buffer);
let data = unsafe { slice::from_raw_parts(ptr, len) };
Self::send_command(&mut self.spi, &mut self.dc, Command::DTM1, data)?;
self.busy_wait()?;
Self::send_command(&mut self.spi, &mut self.dc, Command::PON, &[])?;
self.busy_wait()?;
Self::send_command(&mut self.spi, &mut self.dc, Command::DRF, &[])?;
self.busy_wait()?;
Self::send_command(&mut self.spi, &mut self.dc, Command::POF, &[])?;
self.busy_wait()?;
Ok(())
}
fn setup(&mut self) -> Result<(), ERR> {
self.reset.set_low()?;
self.delay.delay_ms(100);
self.reset.set_high()?;
self.delay.delay_ms(100);
self.busy_wait()?;
let width_bytes = (WIDTH as u16).to_be_bytes();
let height_bytes = (HEIGHT as u16).to_be_bytes();
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::TRES,
&[
width_bytes[0],
width_bytes[1],
height_bytes[0],
height_bytes[1],
],
)?;
// Panel Setting
// 0b11000000 = Resolution select, 0b00 = 640x480, our panel is 0b11 = 600x448
// 0b00100000 = LUT selection, 0 = ext flash, 1 = registers, we use ext flash
// 0b00010000 = Ignore
// 0b00001000 = Gate scan direction, 0 = down, 1 = up (default)
// 0b00000100 = Source shift direction, 0 = left, 1 = right (default)
// 0b00000010 = DC-DC converter, 0 = off, 1 = on
// 0b00000001 = Soft reset, 0 = Reset, 1 = Normal (Default)
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::PSR,
&[
0b11101111, // See above for more magic numbers
0x08, // display_colours == UC8159_7C
],
)?;
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::PWR,
&[
(0x06 << 3) | //??? - not documented in UC8159 datasheet
(0x01 << 2) | // SOURCE_INTERNAL_DC_DC
(0x01 << 1) | // GATE_INTERNAL_DC_DC
(0x01), // LV_SOURCE_INTERNAL_DC_DC
0x00, // VGx_20V
0x23, // UC8159_7C
0x23, // UC8159_7C
],
)?;
// Set the PLL clock frequency to 50Hz
// 0b11000000 = Ignore
// 0b00111000 = M
// 0b00000111 = N
// PLL = 2MHz * (M / N)
// PLL = 2MHz * (7 / 4)
// PLL = 2,800,000???
Self::send_command(&mut self.spi, &mut self.dc, Command::PLL, &[0x3C])?;
Self::send_command(&mut self.spi, &mut self.dc, Command::TSE, &[0x00])?;
// VCOM and Data Interval setting
// 0b11100000 = Vborder control (0b001 = LUTB voltage)
// 0b00010000 = Data polarity
// 0b00001111 = Vcom and data interval (0b0111 = 10, default)
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::CDI,
&[((self.config.border_color as u8) << 5) | 0x17],
)?;
// Gate/Source non-overlap period
// 0b11110000 = Source to Gate (0b0010 = 12nS, default)
// 0b00001111 = Gate to Source
Self::send_command(&mut self.spi, &mut self.dc, Command::TCON, &[0x22])?;
// Disable external flash
Self::send_command(&mut self.spi, &mut self.dc, Command::DAM, &[0b00000000])?;
// UC8159_7C
Self::send_command(&mut self.spi, &mut self.dc, Command::PWS, &[0xAA])?;
// Power off sequence
// 0b00110000 = power off sequence of VDH and VDL, 0b00 = 1 frame (default)
// All other bits ignored?
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::PFS,
&[0b00000000], // PFS_1_FRAME
)?;
Ok(())
}
fn busy_wait(&mut self) -> Result<(), ERR> {
while self.busy.is_low()? {
self.delay.delay_ms(10);
}
Ok(())
}
fn send_command(spi: &mut SPI, dc: &mut DC, command: Command, data: &[u8]) -> Result<(), ERR> {
dc.set_low()?;
spi.write(&[command as u8])?;
if !data.is_empty() {
dc.set_high()?;
for chunk in data.chunks(SPI_CHUNK_SIZE) {
spi.write(chunk)?;
}
}
Ok(())
}
}
impl Color {
pub fn all() -> [Self; 8] {
[
Color::Black,
Color::White,
Color::Green,
Color::Blue,
Color::Red,
Color::Yellow,
Color::Orange,
Color::Clean,
]
}
pub fn all_significant() -> [Self; 7] {
[
Color::Black,
Color::White,
Color::Green,
Color::Blue,
Color::Red,
Color::Yellow,
Color::Orange,
]
}
fn as_rgb_desaturated(self) -> [u8; 3] {
match self {
Color::Black => [0, 0, 0],
Color::White => [255, 255, 255],
Color::Green => [0, 255, 0],
Color::Blue => [0, 0, 255],
Color::Red => [255, 0, 0],
Color::Yellow => [255, 255, 0],
Color::Orange => [255, 140, 0],
Color::Clean => [255, 255, 255],
}
}
fn as_rgb_saturated(self) -> [u8; 3] {
match self {
Color::Black => [57, 48, 57],
Color::White => [255, 255, 255],
Color::Green => [58, 91, 70],
Color::Blue => [61, 59, 94],
Color::Red => [156, 72, 75],
Color::Yellow => [208, 190, 71],
Color::Orange => [77, 106, 73],
Color::Clean => [255, 255, 255],
}
}
}
impl Palette {
/// Creates a new palette using the provided saturation.
///
/// The saturation should be within the range of 0.0 to 1.0, otherwise funky colors might
/// appear!
pub fn new(saturation: f32) -> Palette {
let all_significant = Color::all_significant();
let mut colors = [[0; 3]; 7];
for (idx, color) in all_significant.iter().copied().enumerate() {
let [rs, gs, bs] = color.as_rgb_saturated();
let [rd, gd, bd] = color.as_rgb_desaturated();
let r_corr = (rs as f32 * saturation + rd as f32 * (1.0 - saturation)) as u8;
let g_corr = (gs as f32 * saturation + gd as f32 * (1.0 - saturation)) as u8;
let b_corr = (bs as f32 * saturation + bd as f32 * (1.0 - saturation)) as u8;
colors[idx] = [r_corr, g_corr, b_corr];
}
Palette(colors)
}
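// e.g. with saturation 0.5, Green's red channel becomes
// (58.0 * 0.5 + 0.0 * 0.5) as u8 == 29, halfway between the saturated (58)
// and desaturated (0) palette entries.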
/// Returns the closest color to the provided RGB value available in the palette.
pub fn closest_color(&self, r: u8, g: u8, b: u8) -> Color {
let idx = self
.0
.iter()
.enumerate()
.min_by_key(|(_, &[pr, pg, pb])| {
let dr = if pr > r { pr - r } else { r - pr } as u32;
let dg = if pg > g { pg - g } else { g - pg } as u32;
let db = if pb > b { pb - b } else { b - pb } as u32;
dr * dr + dg * dg + db * db
})
.unwrap()
.0;
Color::all()[idx]
}
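// Worked example (assuming a fully desaturated palette, saturation == 0.0):
// closest_color(250, 10, 10) minimizes the squared distance against Red's
// [255, 0, 0] (5*5 + 10*10 + 10*10 = 225), so it returns Color::Red.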
}
| {
*cell = (*cell & 0b11110000) | color as u8;
} | conditional_block |
lib.rs | //! # `uc8159`
//!
//! This is a driver crate for accessing the `uc8159` E-Ink display controller. For now, most of
//! the options are hard-coded for the
//! [Pimoroni Inky Impression](https://shop.pimoroni.com/products/inky-impression) display as that's
//! the only one I own, so proposing changes to add more features is most welcome!
//!
//! # Usage
//!
//! Get started by creating a [`Display`] instance. Populate the display buffer by using `fill`,
//! `copy_from`, `set_pixel`, etc. When you want to display the buffer contents to the screen, use
//! `show`.
#![cfg_attr(not(feature = "std"), no_std)]
use core::convert;
use core::marker;
use core::mem;
use core::slice;
/// Colors available on a 7-color e-ink display.
#[derive(Clone, Copy, Debug, Eq, Ord, PartialEq, PartialOrd)]
#[repr(u8)]
pub enum Color {
Black = 0,
White = 1,
Green = 2,
Blue = 3,
Red = 4,
Yellow = 5,
Orange = 6,
/// The absence of any color, i.e. the display will show no ink, which usually just means it
/// will show traces of whatever was there before.
Clean = 7,
}
/// A pre-computed palette which can be used to map arbitrary RGB colors onto a [`Display`].
#[derive(Clone, Debug)]
pub struct Palette([[u8; 3]; 7]);
/// Configuration when creating a new [`Display`] instance.
#[derive(Clone, Debug, Eq, Ord, PartialEq, PartialOrd)]
pub struct Config {
/// The color that will be used to fill the outline around the display, which can't be
/// controlled as individual pixels.
pub border_color: Color,
}
// Currently hard-coded behavior for Pimoroni Inky Impression
const WIDTH: usize = 600;
const HEIGHT: usize = 448;
const SPI_CHUNK_SIZE: usize = 4096;
#[derive(Clone, Copy, Debug, Eq, Ord, PartialEq, PartialOrd)]
#[repr(u8)]
enum Command {
PSR = 0x00,
PWR = 0x01,
POF = 0x02,
PFS = 0x03,
PON = 0x04,
// BTST = 0x06,
// DSLP = 0x07,
DTM1 = 0x10,
// DSP = 0x11,
DRF = 0x12,
// IPC = 0x13,
PLL = 0x30,
// TSC = 0x40,
TSE = 0x41,
// TSW = 0x42,
// TSR = 0x43,
CDI = 0x50,
// LPD = 0x51,
TCON = 0x60,
TRES = 0x61,
DAM = 0x65,
// REV = 0x70,
// FLG = 0x71,
// AMV = 0x80,
// VV = 0x81,
// VDCS = 0x82,
PWS = 0xE3,
// TSSET = 0xE5,
}
/// An instance of a display which is governed by a particular `uc8159` controller.
#[derive(Debug)]
pub struct Display<SPI, TIMER, RESET, BUSY, DC, ERR = convert::Infallible>
where
SPI: embedded_hal::blocking::spi::Write<u8>,
TIMER: embedded_hal::blocking::delay::DelayMs<u16>,
RESET: embedded_hal::digital::v2::OutputPin,
BUSY: embedded_hal::digital::v2::InputPin,
DC: embedded_hal::digital::v2::OutputPin,
ERR: From<SPI::Error> + From<RESET::Error> + From<BUSY::Error> + From<DC::Error>,
{
spi: SPI,
delay: TIMER,
reset: RESET,
busy: BUSY,
dc: DC,
config: Config,
buffer: [u8; WIDTH / 2 * HEIGHT],
phantom: marker::PhantomData<ERR>,
}
impl<SPI, DELAY, RESET, BUSY, DC, ERR> Display<SPI, DELAY, RESET, BUSY, DC, ERR>
where
SPI: embedded_hal::blocking::spi::Write<u8>,
DELAY: embedded_hal::blocking::delay::DelayMs<u16>,
RESET: embedded_hal::digital::v2::OutputPin,
BUSY: embedded_hal::digital::v2::InputPin,
DC: embedded_hal::digital::v2::OutputPin,
ERR: From<SPI::Error> + From<RESET::Error> + From<BUSY::Error> + From<DC::Error>,
{
/// Creates a new display instance.
///
/// The provided `spi` bus will be used for most of the communication. The `delay` instance
/// is used when waiting for reset and drawing operations to complete. The `reset` pin can be
/// provided to make sure the device is reset before each new draw command. The `busy` pin is
/// used to poll to see when draw operations are complete. The `dc` pin is used to signal
/// whether the sent `spi` signal is a command (low) or data (high).
pub fn new(spi: SPI, delay: DELAY, reset: RESET, busy: BUSY, dc: DC, config: Config) -> Self {
let phantom = marker::PhantomData;
let buffer = [0; WIDTH / 2 * HEIGHT];
Self {
spi,
delay,
reset,
busy,
dc,
config,
buffer,
phantom,
}
}
/// The width of the display.
pub fn width(&self) -> usize {
WIDTH
}
/// The height of the display.
pub fn height(&self) -> usize {
HEIGHT
}
/// Fills the entire display using a single color.
///
/// This is a pretty fast operation compared to e.g. calling `set_pixel`.
pub fn fill(&mut self, color: Color) {
self.buffer = [((color as u8) << 4) | color as u8; WIDTH / 2 * HEIGHT];
}
/// Copies data from another source in bulk.
///
/// The color data must contain exactly `width() * height()` elements and be in row-major order.
pub fn copy_from(&mut self, color: &[Color]) {
for (idx, cell) in color.chunks(2).enumerate() {
self.buffer[idx] = ((cell[0] as u8) << 4) | cell[1] as u8;
}
}
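// Note the packing: each chunk of two Colors becomes one buffer byte, first
// pixel in the high nibble. WIDTH * HEIGHT is even here, so every chunk is
// full and `cell[1]` never goes out of bounds.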
/// Sets a specific pixel color.
pub fn set_pixel(&mut self, x: usize, y: usize, color: Color) {
let cell = &mut self.buffer[y * WIDTH / 2 + x / 2];
if (x & 1) == 0 {
*cell = (*cell & 0b00001111) | ((color as u8) << 4);
} else {
*cell = (*cell & 0b11110000) | color as u8;
}
}
/// Displays the contents of the internal buffer to the screen.
///
/// This operation blocks until the contents are completely shown.
pub fn show(&mut self) -> Result<(), ERR> {
self.setup()?;
let ptr = &self.buffer as *const _ as *const u8;
let len = mem::size_of_val(&self.buffer);
let data = unsafe { slice::from_raw_parts(ptr, len) };
Self::send_command(&mut self.spi, &mut self.dc, Command::DTM1, data)?;
self.busy_wait()?;
Self::send_command(&mut self.spi, &mut self.dc, Command::PON, &[])?;
self.busy_wait()?;
Self::send_command(&mut self.spi, &mut self.dc, Command::DRF, &[])?;
self.busy_wait()?;
Self::send_command(&mut self.spi, &mut self.dc, Command::POF, &[])?;
self.busy_wait()?;
Ok(())
}
fn setup(&mut self) -> Result<(), ERR> {
self.reset.set_low()?;
self.delay.delay_ms(100);
self.reset.set_high()?;
self.delay.delay_ms(100);
self.busy_wait()?;
let width_bytes = (WIDTH as u16).to_be_bytes();
let height_bytes = (HEIGHT as u16).to_be_bytes();
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::TRES,
&[
width_bytes[0],
width_bytes[1],
height_bytes[0],
height_bytes[1],
],
)?;
// Panel Setting
// 0b11000000 = Resolution select, 0b00 = 640x480, our panel is 0b11 = 600x448
// 0b00100000 = LUT selection, 0 = ext flash, 1 = registers, we use ext flash
// 0b00010000 = Ignore
// 0b00001000 = Gate scan direction, 0 = down, 1 = up (default)
// 0b00000100 = Source shift direction, 0 = left, 1 = right (default)
// 0b00000010 = DC-DC converter, 0 = off, 1 = on
// 0b00000001 = Soft reset, 0 = Reset, 1 = Normal (Default)
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::PSR,
&[
0b11101111, // See above for more magic numbers
0x08, // display_colours == UC8159_7C
],
)?;
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::PWR,
&[
(0x06 << 3) | //??? - not documented in UC8159 datasheet
(0x01 << 2) | // SOURCE_INTERNAL_DC_DC
(0x01 << 1) | // GATE_INTERNAL_DC_DC
(0x01), // LV_SOURCE_INTERNAL_DC_DC
0x00, // VGx_20V
0x23, // UC8159_7C
0x23, // UC8159_7C
],
)?;
// Set the PLL clock frequency to 50Hz
// 0b11000000 = Ignore
// 0b00111000 = M
// 0b00000111 = N
// PLL = 2MHz * (M / N)
// PLL = 2MHz * (7 / 4)
        // PLL = 3,500,000 Hz by that formula (unverified on hardware)
Self::send_command(&mut self.spi, &mut self.dc, Command::PLL, &[0x3C])?;
Self::send_command(&mut self.spi, &mut self.dc, Command::TSE, &[0x00])?;
// VCOM and Data Interval setting
// 0b11100000 = Vborder control (0b001 = LUTB voltage)
// 0b00010000 = Data polarity
// 0b00001111 = Vcom and data interval (0b0111 = 10, default)
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::CDI,
&[((self.config.border_color as u8) << 5) | 0x17],
)?;
// Gate/Source non-overlap period
        // 0b11110000 = Source to Gate (0b0010 = 12 ns, default)
// 0b00001111 = Gate to Source
Self::send_command(&mut self.spi, &mut self.dc, Command::TCON, &[0x22])?;
// Disable external flash
Self::send_command(&mut self.spi, &mut self.dc, Command::DAM, &[0b00000000])?;
// UC8159_7C
Self::send_command(&mut self.spi, &mut self.dc, Command::PWS, &[0xAA])?;
// Power off sequence
// 0b00110000 = power off sequence of VDH and VDL, 0b00 = 1 frame (default)
// All other bits ignored?
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::PFS,
&[0b00000000], // PFS_1_FRAME
)?;
Ok(())
}
fn busy_wait(&mut self) -> Result<(), ERR> {
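        // The controller holds BUSY low while an operation is in progress;
        // poll until it releases the line.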
while self.busy.is_low()? {
self.delay.delay_ms(10);
}
Ok(())
}
fn send_command(spi: &mut SPI, dc: &mut DC, command: Command, data: &[u8]) -> Result<(), ERR> |
}
impl Color {
pub fn all() -> [Self; 8] {
[
Color::Black,
Color::White,
Color::Green,
Color::Blue,
Color::Red,
Color::Yellow,
Color::Orange,
Color::Clean,
]
}
pub fn all_significant() -> [Self; 7] {
[
Color::Black,
Color::White,
Color::Green,
Color::Blue,
Color::Red,
Color::Yellow,
Color::Orange,
]
}
fn as_rgb_desaturated(self) -> [u8; 3] {
match self {
Color::Black => [0, 0, 0],
Color::White => [255, 255, 255],
Color::Green => [0, 255, 0],
Color::Blue => [0, 0, 255],
Color::Red => [255, 0, 0],
Color::Yellow => [255, 255, 0],
Color::Orange => [255, 140, 0],
Color::Clean => [255, 255, 255],
}
}
fn as_rgb_saturated(self) -> [u8; 3] {
match self {
Color::Black => [57, 48, 57],
Color::White => [255, 255, 255],
Color::Green => [58, 91, 70],
Color::Blue => [61, 59, 94],
Color::Red => [156, 72, 75],
Color::Yellow => [208, 190, 71],
Color::Orange => [77, 106, 73],
Color::Clean => [255, 255, 255],
}
}
}
impl Palette {
/// Creates a new palette using the provided saturation.
///
/// The saturation should be within the range of 0.0 to 1.0, otherwise funky colors might
/// appear!
pub fn new(saturation: f32) -> Palette {
let all_significant = Color::all_significant();
let mut colors = [[0; 3]; 7];
for (idx, color) in all_significant.iter().copied().enumerate() {
let [rs, gs, bs] = color.as_rgb_saturated();
let [rd, gd, bd] = color.as_rgb_desaturated();
let r_corr = (rs as f32 * saturation + rd as f32 * (1.0 - saturation)) as u8;
let g_corr = (gs as f32 * saturation + gd as f32 * (1.0 - saturation)) as u8;
let b_corr = (bs as f32 * saturation + bd as f32 * (1.0 - saturation)) as u8;
colors[idx] = [r_corr, g_corr, b_corr];
}
Palette(colors)
}
/// Returns the closest color to the provided RGB value available in the palette.
pub fn closest_color(&self, r: u8, g: u8, b: u8) -> Color {
let idx = self
.0
.iter()
.enumerate()
.min_by_key(|(_, &[pr, pg, pb])| {
let dr = if pr > r { pr - r } else { r - pr } as u32;
let dg = if pg > g { pg - g } else { g - pg } as u32;
let db = if pb > b { pb - b } else { b - pb } as u32;
dr * dr + dg * dg + db * db
})
.unwrap()
.0;
Color::all()[idx]
}
}
| {
dc.set_low()?;
spi.write(&[command as u8])?;
if !data.is_empty() {
dc.set_high()?;
for chunk in data.chunks(SPI_CHUNK_SIZE) {
spi.write(chunk)?;
}
}
Ok(())
} | identifier_body |
lib.rs | //! # `uc8159`
//!
//! This is a driver crate for accessing the `uc8159` E-Ink display controller. For now, most of
//! the options are hard-coded for the
//! [Pimoroni Inky Impression](https://shop.pimoroni.com/products/inky-impression) display as that's
//! the only one I own, so proposing changes to add more features is most welcome!
//!
//! # Usage
//!
//! Get started by creating a [`Display`] instance. Populate the display buffer by using `fill`,
//! `copy_from`, `set_pixel`, etc. When you want to display the buffer contents to the screen, use
//! `show`.
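//!
//! A minimal end-to-end sketch (the `spi`, `delay`, and pin values below are
//! placeholders for your platform's embedded-hal implementations, not part of
//! this crate):
//!
//! ```ignore
//! let mut display = uc8159::Display::new(
//!     spi, delay, reset_pin, busy_pin, dc_pin,
//!     uc8159::Config { border_color: uc8159::Color::White },
//! );
//! display.fill(uc8159::Color::White);
//! display.set_pixel(0, 0, uc8159::Color::Red);
//! display.show()?;
//! ```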
#![cfg_attr(not(feature = "std"), no_std)]
use core::convert;
use core::marker;
use core::mem;
use core::slice;
/// Colors available on a 7-color e-ink display.
#[derive(Clone, Copy, Debug, Eq, Ord, PartialEq, PartialOrd)]
#[repr(u8)]
pub enum Color {
Black = 0,
White = 1,
Green = 2,
Blue = 3,
Red = 4,
Yellow = 5,
Orange = 6,
/// The absence of any color, i.e. the display will show no ink, which usually just means it
/// will show traces of whatever was there before.
Clean = 7,
}
/// A pre-computed palette which can be used to map arbitrary RGB colors onto a [`Display`].
#[derive(Clone, Debug)]
pub struct Palette([[u8; 3]; 7]);
/// Configuration when creating a new [`Display`] instance.
#[derive(Clone, Debug, Eq, Ord, PartialEq, PartialOrd)]
pub struct Config {
/// The color that will be used to fill the outline around the display, which can't be
/// controlled as individual pixels.
pub border_color: Color,
}
// Currently hard-coded behavior for Pimoroni Inky Impression
const WIDTH: usize = 600;
const HEIGHT: usize = 448;
const SPI_CHUNK_SIZE: usize = 4096;
#[derive(Clone, Copy, Debug, Eq, Ord, PartialEq, PartialOrd)]
#[repr(u8)]
enum Command {
PSR = 0x00,
PWR = 0x01,
POF = 0x02,
PFS = 0x03,
PON = 0x04,
// BTST = 0x06,
// DSLP = 0x07,
DTM1 = 0x10,
// DSP = 0x11,
DRF = 0x12,
// IPC = 0x13,
PLL = 0x30,
// TSC = 0x40,
TSE = 0x41,
// TSW = 0x42,
// TSR = 0x43,
CDI = 0x50,
// LPD = 0x51,
TCON = 0x60,
TRES = 0x61,
DAM = 0x65,
// REV = 0x70,
// FLG = 0x71,
// AMV = 0x80,
// VV = 0x81,
// VDCS = 0x82,
PWS = 0xE3,
// TSSET = 0xE5,
}
/// An instance of a display which is governed by a particular `uc8159` controller.
#[derive(Debug)]
pub struct Display<SPI, TIMER, RESET, BUSY, DC, ERR = convert::Infallible>
where
SPI: embedded_hal::blocking::spi::Write<u8>,
TIMER: embedded_hal::blocking::delay::DelayMs<u16>,
RESET: embedded_hal::digital::v2::OutputPin,
BUSY: embedded_hal::digital::v2::InputPin,
DC: embedded_hal::digital::v2::OutputPin,
ERR: From<SPI::Error> + From<RESET::Error> + From<BUSY::Error> + From<DC::Error>,
{
spi: SPI,
delay: TIMER,
reset: RESET,
busy: BUSY,
dc: DC,
config: Config,
buffer: [u8; WIDTH / 2 * HEIGHT],
phantom: marker::PhantomData<ERR>,
}
impl<SPI, DELAY, RESET, BUSY, DC, ERR> Display<SPI, DELAY, RESET, BUSY, DC, ERR>
where
SPI: embedded_hal::blocking::spi::Write<u8>,
DELAY: embedded_hal::blocking::delay::DelayMs<u16>,
RESET: embedded_hal::digital::v2::OutputPin,
BUSY: embedded_hal::digital::v2::InputPin,
DC: embedded_hal::digital::v2::OutputPin, | {
/// Creates a new display instance.
///
/// The provided `spi` bus will be used for most of the communication. The `delay` instance
/// is used when waiting for reset and drawing operations to complete. The `reset` pin can be
/// provided to make sure the device is reset before each new draw command. The `busy` pin is
/// used to poll to see when draw operations are complete. The `dc` pin is used to signal
/// whether the sent `spi` signal is a command (low) or data (high).
pub fn new(spi: SPI, delay: DELAY, reset: RESET, busy: BUSY, dc: DC, config: Config) -> Self {
let phantom = marker::PhantomData;
let buffer = [0; WIDTH / 2 * HEIGHT];
Self {
spi,
delay,
reset,
busy,
dc,
config,
buffer,
phantom,
}
}
/// The width of the display.
pub fn width(&self) -> usize {
WIDTH
}
/// The height of the display.
pub fn height(&self) -> usize {
HEIGHT
}
/// Fills the entire display using a single color.
///
/// This is a pretty fast operation compared to e.g. calling `set_pixel`.
pub fn fill(&mut self, color: Color) {
self.buffer = [((color as u8) << 4) | color as u8; WIDTH / 2 * HEIGHT];
}
/// Copies data from another source in bulk.
///
/// The color data must contain exactly `width() * height()` elements and be in row-major order.
pub fn copy_from(&mut self, color: &[Color]) {
for (idx, cell) in color.chunks(2).enumerate() {
self.buffer[idx] = ((cell[0] as u8) << 4) | cell[1] as u8;
}
}
/// Sets a specific pixel color.
pub fn set_pixel(&mut self, x: usize, y: usize, color: Color) {
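        // The buffer stores two pixels per byte (high nibble = even x, low
        // nibble = odd x), hence the WIDTH / 2 row stride.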
let cell = &mut self.buffer[y * WIDTH / 2 + x / 2];
if (x & 1) == 0 {
*cell = (*cell & 0b00001111) | ((color as u8) << 4);
} else {
*cell = (*cell & 0b11110000) | color as u8;
}
}
/// Displays the contents of the internal buffer to the screen.
///
/// This operation blocks until the contents are completely shown.
pub fn show(&mut self) -> Result<(), ERR> {
self.setup()?;
let ptr = &self.buffer as *const _ as *const u8;
let len = mem::size_of_val(&self.buffer);
let data = unsafe { slice::from_raw_parts(ptr, len) };
Self::send_command(&mut self.spi, &mut self.dc, Command::DTM1, data)?;
self.busy_wait()?;
Self::send_command(&mut self.spi, &mut self.dc, Command::PON, &[])?;
self.busy_wait()?;
Self::send_command(&mut self.spi, &mut self.dc, Command::DRF, &[])?;
self.busy_wait()?;
Self::send_command(&mut self.spi, &mut self.dc, Command::POF, &[])?;
self.busy_wait()?;
Ok(())
}
fn setup(&mut self) -> Result<(), ERR> {
self.reset.set_low()?;
self.delay.delay_ms(100);
self.reset.set_high()?;
self.delay.delay_ms(100);
self.busy_wait()?;
let width_bytes = (WIDTH as u16).to_be_bytes();
let height_bytes = (HEIGHT as u16).to_be_bytes();
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::TRES,
&[
width_bytes[0],
width_bytes[1],
height_bytes[0],
height_bytes[1],
],
)?;
// Panel Setting
// 0b11000000 = Resolution select, 0b00 = 640x480, our panel is 0b11 = 600x448
// 0b00100000 = LUT selection, 0 = ext flash, 1 = registers, we use ext flash
// 0b00010000 = Ignore
// 0b00001000 = Gate scan direction, 0 = down, 1 = up (default)
// 0b00000100 = Source shift direction, 0 = left, 1 = right (default)
// 0b00000010 = DC-DC converter, 0 = off, 1 = on
// 0b00000001 = Soft reset, 0 = Reset, 1 = Normal (Default)
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::PSR,
&[
0b11101111, // See above for more magic numbers
0x08, // display_colours == UC8159_7C
],
)?;
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::PWR,
&[
(0x06 << 3) | //??? - not documented in UC8159 datasheet
(0x01 << 2) | // SOURCE_INTERNAL_DC_DC
(0x01 << 1) | // GATE_INTERNAL_DC_DC
(0x01), // LV_SOURCE_INTERNAL_DC_DC
0x00, // VGx_20V
0x23, // UC8159_7C
0x23, // UC8159_7C
],
)?;
// Set the PLL clock frequency to 50Hz
// 0b11000000 = Ignore
// 0b00111000 = M
// 0b00000111 = N
// PLL = 2MHz * (M / N)
// PLL = 2MHz * (7 / 4)
        // PLL = 3,500,000 Hz by that formula (unverified on hardware)
Self::send_command(&mut self.spi, &mut self.dc, Command::PLL, &[0x3C])?;
Self::send_command(&mut self.spi, &mut self.dc, Command::TSE, &[0x00])?;
// VCOM and Data Interval setting
// 0b11100000 = Vborder control (0b001 = LUTB voltage)
// 0b00010000 = Data polarity
// 0b00001111 = Vcom and data interval (0b0111 = 10, default)
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::CDI,
&[((self.config.border_color as u8) << 5) | 0x17],
)?;
// Gate/Source non-overlap period
        // 0b11110000 = Source to Gate (0b0010 = 12 ns, default)
// 0b00001111 = Gate to Source
Self::send_command(&mut self.spi, &mut self.dc, Command::TCON, &[0x22])?;
// Disable external flash
Self::send_command(&mut self.spi, &mut self.dc, Command::DAM, &[0b00000000])?;
// UC8159_7C
Self::send_command(&mut self.spi, &mut self.dc, Command::PWS, &[0xAA])?;
// Power off sequence
// 0b00110000 = power off sequence of VDH and VDL, 0b00 = 1 frame (default)
// All other bits ignored?
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::PFS,
&[0b00000000], // PFS_1_FRAME
)?;
Ok(())
}
fn busy_wait(&mut self) -> Result<(), ERR> {
while self.busy.is_low()? {
self.delay.delay_ms(10);
}
Ok(())
}
fn send_command(spi: &mut SPI, dc: &mut DC, command: Command, data: &[u8]) -> Result<(), ERR> {
dc.set_low()?;
spi.write(&[command as u8])?;
        if !data.is_empty() {
dc.set_high()?;
for chunk in data.chunks(SPI_CHUNK_SIZE) {
spi.write(chunk)?;
}
}
Ok(())
}
}
impl Color {
pub fn all() -> [Self; 8] {
[
Color::Black,
Color::White,
Color::Green,
Color::Blue,
Color::Red,
Color::Yellow,
Color::Orange,
Color::Clean,
]
}
pub fn all_significant() -> [Self; 7] {
[
Color::Black,
Color::White,
Color::Green,
Color::Blue,
Color::Red,
Color::Yellow,
Color::Orange,
]
}
fn as_rgb_desaturated(self) -> [u8; 3] {
match self {
Color::Black => [0, 0, 0],
Color::White => [255, 255, 255],
Color::Green => [0, 255, 0],
Color::Blue => [0, 0, 255],
Color::Red => [255, 0, 0],
Color::Yellow => [255, 255, 0],
Color::Orange => [255, 140, 0],
Color::Clean => [255, 255, 255],
}
}
fn as_rgb_saturated(self) -> [u8; 3] {
match self {
Color::Black => [57, 48, 57],
Color::White => [255, 255, 255],
Color::Green => [58, 91, 70],
Color::Blue => [61, 59, 94],
Color::Red => [156, 72, 75],
Color::Yellow => [208, 190, 71],
Color::Orange => [77, 106, 73],
Color::Clean => [255, 255, 255],
}
}
}
impl Palette {
/// Creates a new palette using the provided saturation.
///
/// The saturation should be within the range of 0.0 to 1.0, otherwise funky colors might
/// appear!
pub fn new(saturation: f32) -> Palette {
let all_significant = Color::all_significant();
let mut colors = [[0; 3]; 7];
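        // Blend each ink's measured ("saturated") panel color with its ideal
        // ("desaturated") RGB value, weighted by the requested saturation.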
for (idx, color) in all_significant.iter().copied().enumerate() {
let [rs, gs, bs] = color.as_rgb_saturated();
let [rd, gd, bd] = color.as_rgb_desaturated();
let r_corr = (rs as f32 * saturation + rd as f32 * (1.0 - saturation)) as u8;
let g_corr = (gs as f32 * saturation + gd as f32 * (1.0 - saturation)) as u8;
let b_corr = (bs as f32 * saturation + bd as f32 * (1.0 - saturation)) as u8;
colors[idx] = [r_corr, g_corr, b_corr];
}
Palette(colors)
}
/// Returns the closest color to the provided RGB value available in the palette.
pub fn closest_color(&self, r: u8, g: u8, b: u8) -> Color {
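        // Brute-force nearest neighbor over the seven inks: minimize the
        // squared Euclidean distance in RGB space (no sqrt needed for argmin).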
let idx = self
.0
.iter()
.enumerate()
.min_by_key(|(_, &[pr, pg, pb])| {
let dr = if pr > r { pr - r } else { r - pr } as u32;
let dg = if pg > g { pg - g } else { g - pg } as u32;
let db = if pb > b { pb - b } else { b - pb } as u32;
dr * dr + dg * dg + db * db
})
.unwrap()
.0;
Color::all()[idx]
}
} | ERR: From<SPI::Error> + From<RESET::Error> + From<BUSY::Error> + From<DC::Error>, | random_line_split |
lib.rs | //! # `uc8159`
//!
//! This is a driver crate for accessing the `uc8159` E-Ink display controller. For now, most of
//! the options are hard-coded for the
//! [Pimoroni Inky Impression](https://shop.pimoroni.com/products/inky-impression) display as that's
//! the only one I own, so proposing changes to add more features is most welcome!
//!
//! # Usage
//!
//! Get started by creating a [`Display`] instance. Populate the display buffer by using `fill`,
//! `copy_from`, `set_pixel`, etc. When you want to display the buffer contents to the screen, use
//! `show`.
#![cfg_attr(not(feature = "std"), no_std)]
use core::convert;
use core::marker;
use core::mem;
use core::slice;
/// Colors available on a 7-color e-ink display.
#[derive(Clone, Copy, Debug, Eq, Ord, PartialEq, PartialOrd)]
#[repr(u8)]
pub enum Color {
Black = 0,
White = 1,
Green = 2,
Blue = 3,
Red = 4,
Yellow = 5,
Orange = 6,
/// The absence of any color, i.e. the display will show no ink, which usually just means it
/// will show traces of whatever was there before.
Clean = 7,
}
/// A pre-computed palette which can be used to map arbitrary RGB colors onto a [`Display`].
#[derive(Clone, Debug)]
pub struct Palette([[u8; 3]; 7]);
/// Configuration when creating a new [`Display`] instance.
#[derive(Clone, Debug, Eq, Ord, PartialEq, PartialOrd)]
pub struct Config {
/// The color that will be used to fill the outline around the display, which can't be
/// controlled as individual pixels.
pub border_color: Color,
}
// Currently hard-coded behavior for Pimoroni Inky Impression
const WIDTH: usize = 600;
const HEIGHT: usize = 448;
const SPI_CHUNK_SIZE: usize = 4096;
#[derive(Clone, Copy, Debug, Eq, Ord, PartialEq, PartialOrd)]
#[repr(u8)]
enum Command {
PSR = 0x00,
PWR = 0x01,
POF = 0x02,
PFS = 0x03,
PON = 0x04,
// BTST = 0x06,
// DSLP = 0x07,
DTM1 = 0x10,
// DSP = 0x11,
DRF = 0x12,
// IPC = 0x13,
PLL = 0x30,
// TSC = 0x40,
TSE = 0x41,
// TSW = 0x42,
// TSR = 0x43,
CDI = 0x50,
// LPD = 0x51,
TCON = 0x60,
TRES = 0x61,
DAM = 0x65,
// REV = 0x70,
// FLG = 0x71,
// AMV = 0x80,
// VV = 0x81,
// VDCS = 0x82,
PWS = 0xE3,
// TSSET = 0xE5,
}
/// An instance of a display which is governed by a particular `uc8159` controller.
#[derive(Debug)]
pub struct Display<SPI, TIMER, RESET, BUSY, DC, ERR = convert::Infallible>
where
SPI: embedded_hal::blocking::spi::Write<u8>,
TIMER: embedded_hal::blocking::delay::DelayMs<u16>,
RESET: embedded_hal::digital::v2::OutputPin,
BUSY: embedded_hal::digital::v2::InputPin,
DC: embedded_hal::digital::v2::OutputPin,
ERR: From<SPI::Error> + From<RESET::Error> + From<BUSY::Error> + From<DC::Error>,
{
spi: SPI,
delay: TIMER,
reset: RESET,
busy: BUSY,
dc: DC,
config: Config,
buffer: [u8; WIDTH / 2 * HEIGHT],
phantom: marker::PhantomData<ERR>,
}
impl<SPI, DELAY, RESET, BUSY, DC, ERR> Display<SPI, DELAY, RESET, BUSY, DC, ERR>
where
SPI: embedded_hal::blocking::spi::Write<u8>,
DELAY: embedded_hal::blocking::delay::DelayMs<u16>,
RESET: embedded_hal::digital::v2::OutputPin,
BUSY: embedded_hal::digital::v2::InputPin,
DC: embedded_hal::digital::v2::OutputPin,
ERR: From<SPI::Error> + From<RESET::Error> + From<BUSY::Error> + From<DC::Error>,
{
/// Creates a new display instance.
///
/// The provided `spi` bus will be used for most of the communication. The `delay` instance
/// is used when waiting for reset and drawing operations to complete. The `reset` pin can be
/// provided to make sure the device is reset before each new draw command. The `busy` pin is
/// used to poll to see when draw operations are complete. The `dc` pin is used to signal
/// whether the sent `spi` signal is a command (low) or data (high).
pub fn new(spi: SPI, delay: DELAY, reset: RESET, busy: BUSY, dc: DC, config: Config) -> Self {
let phantom = marker::PhantomData;
let buffer = [0; WIDTH / 2 * HEIGHT];
Self {
spi,
delay,
reset,
busy,
dc,
config,
buffer,
phantom,
}
}
/// The width of the display.
pub fn width(&self) -> usize {
WIDTH
}
/// The height of the display.
pub fn height(&self) -> usize {
HEIGHT
}
/// Fills the entire display using a single color.
///
/// This is a pretty fast operation compared to e.g. calling `set_pixel`.
pub fn fill(&mut self, color: Color) {
self.buffer = [((color as u8) << 4) | color as u8; WIDTH / 2 * HEIGHT];
}
/// Copies data from another source in bulk.
///
/// The color data must contain exactly `width() * height()` elements and be in row-major order.
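    ///
    /// A usage sketch (assumes a full frame of `Color` values prepared
    /// elsewhere):
    ///
    /// ```ignore
    /// assert_eq!(frame.len(), display.width() * display.height());
    /// display.copy_from(&frame);
    /// ```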
pub fn | (&mut self, color: &[Color]) {
for (idx, cell) in color.chunks(2).enumerate() {
self.buffer[idx] = ((cell[0] as u8) << 4) | cell[1] as u8;
}
}
/// Sets a specific pixel color.
pub fn set_pixel(&mut self, x: usize, y: usize, color: Color) {
let cell = &mut self.buffer[y * WIDTH / 2 + x / 2];
if (x & 1) == 0 {
*cell = (*cell & 0b00001111) | ((color as u8) << 4);
} else {
*cell = (*cell & 0b11110000) | color as u8;
}
}
/// Displays the contents of the internal buffer to the screen.
///
/// This operation blocks until the contents are completely shown.
pub fn show(&mut self) -> Result<(), ERR> {
self.setup()?;
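        // Refresh sequence: send the frame (DTM1), power on (PON), refresh
        // (DRF), then power off (POF), waiting on BUSY between each step.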
let ptr = &self.buffer as *const _ as *const u8;
let len = mem::size_of_val(&self.buffer);
let data = unsafe { slice::from_raw_parts(ptr, len) };
Self::send_command(&mut self.spi, &mut self.dc, Command::DTM1, data)?;
self.busy_wait()?;
Self::send_command(&mut self.spi, &mut self.dc, Command::PON, &[])?;
self.busy_wait()?;
Self::send_command(&mut self.spi, &mut self.dc, Command::DRF, &[])?;
self.busy_wait()?;
Self::send_command(&mut self.spi, &mut self.dc, Command::POF, &[])?;
self.busy_wait()?;
Ok(())
}
fn setup(&mut self) -> Result<(), ERR> {
self.reset.set_low()?;
self.delay.delay_ms(100);
self.reset.set_high()?;
self.delay.delay_ms(100);
self.busy_wait()?;
let width_bytes = (WIDTH as u16).to_be_bytes();
let height_bytes = (HEIGHT as u16).to_be_bytes();
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::TRES,
&[
width_bytes[0],
width_bytes[1],
height_bytes[0],
height_bytes[1],
],
)?;
// Panel Setting
// 0b11000000 = Resolution select, 0b00 = 640x480, our panel is 0b11 = 600x448
// 0b00100000 = LUT selection, 0 = ext flash, 1 = registers, we use ext flash
// 0b00010000 = Ignore
// 0b00001000 = Gate scan direction, 0 = down, 1 = up (default)
// 0b00000100 = Source shift direction, 0 = left, 1 = right (default)
// 0b00000010 = DC-DC converter, 0 = off, 1 = on
// 0b00000001 = Soft reset, 0 = Reset, 1 = Normal (Default)
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::PSR,
&[
0b11101111, // See above for more magic numbers
0x08, // display_colours == UC8159_7C
],
)?;
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::PWR,
&[
(0x06 << 3) | //??? - not documented in UC8159 datasheet
(0x01 << 2) | // SOURCE_INTERNAL_DC_DC
(0x01 << 1) | // GATE_INTERNAL_DC_DC
(0x01), // LV_SOURCE_INTERNAL_DC_DC
0x00, // VGx_20V
0x23, // UC8159_7C
0x23, // UC8159_7C
],
)?;
// Set the PLL clock frequency to 50Hz
// 0b11000000 = Ignore
// 0b00111000 = M
// 0b00000111 = N
// PLL = 2MHz * (M / N)
// PLL = 2MHz * (7 / 4)
        // PLL = 3,500,000 Hz by that formula (unverified on hardware)
Self::send_command(&mut self.spi, &mut self.dc, Command::PLL, &[0x3C])?;
Self::send_command(&mut self.spi, &mut self.dc, Command::TSE, &[0x00])?;
// VCOM and Data Interval setting
// 0b11100000 = Vborder control (0b001 = LUTB voltage)
// 0b00010000 = Data polarity
// 0b00001111 = Vcom and data interval (0b0111 = 10, default)
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::CDI,
&[((self.config.border_color as u8) << 5) | 0x17],
)?;
// Gate/Source non-overlap period
        // 0b11110000 = Source to Gate (0b0010 = 12 ns, default)
// 0b00001111 = Gate to Source
Self::send_command(&mut self.spi, &mut self.dc, Command::TCON, &[0x22])?;
// Disable external flash
Self::send_command(&mut self.spi, &mut self.dc, Command::DAM, &[0b00000000])?;
// UC8159_7C
Self::send_command(&mut self.spi, &mut self.dc, Command::PWS, &[0xAA])?;
// Power off sequence
// 0b00110000 = power off sequence of VDH and VDL, 0b00 = 1 frame (default)
// All other bits ignored?
Self::send_command(
&mut self.spi,
&mut self.dc,
Command::PFS,
&[0b00000000], // PFS_1_FRAME
)?;
Ok(())
}
fn busy_wait(&mut self) -> Result<(), ERR> {
while self.busy.is_low()? {
self.delay.delay_ms(10);
}
Ok(())
}
fn send_command(spi: &mut SPI, dc: &mut DC, command: Command, data: &[u8]) -> Result<(), ERR> {
dc.set_low()?;
spi.write(&[command as u8])?;
        if !data.is_empty() {
dc.set_high()?;
for chunk in data.chunks(SPI_CHUNK_SIZE) {
spi.write(chunk)?;
}
}
Ok(())
}
}
impl Color {
pub fn all() -> [Self; 8] {
[
Color::Black,
Color::White,
Color::Green,
Color::Blue,
Color::Red,
Color::Yellow,
Color::Orange,
Color::Clean,
]
}
pub fn all_significant() -> [Self; 7] {
[
Color::Black,
Color::White,
Color::Green,
Color::Blue,
Color::Red,
Color::Yellow,
Color::Orange,
]
}
fn as_rgb_desaturated(self) -> [u8; 3] {
match self {
Color::Black => [0, 0, 0],
Color::White => [255, 255, 255],
Color::Green => [0, 255, 0],
Color::Blue => [0, 0, 255],
Color::Red => [255, 0, 0],
Color::Yellow => [255, 255, 0],
Color::Orange => [255, 140, 0],
Color::Clean => [255, 255, 255],
}
}
fn as_rgb_saturated(self) -> [u8; 3] {
match self {
Color::Black => [57, 48, 57],
Color::White => [255, 255, 255],
Color::Green => [58, 91, 70],
Color::Blue => [61, 59, 94],
Color::Red => [156, 72, 75],
Color::Yellow => [208, 190, 71],
Color::Orange => [77, 106, 73],
Color::Clean => [255, 255, 255],
}
}
}
impl Palette {
/// Creates a new palette using the provided saturation.
///
/// The saturation should be within the range of 0.0 to 1.0, otherwise funky colors might
/// appear!
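    ///
    /// For example, a half-saturated palette (values here are illustrative):
    ///
    /// ```ignore
    /// let palette = Palette::new(0.5);
    /// let ink = palette.closest_color(200, 30, 40); // expected: Color::Red
    /// ```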
pub fn new(saturation: f32) -> Palette {
let all_significant = Color::all_significant();
let mut colors = [[0; 3]; 7];
for (idx, color) in all_significant.iter().copied().enumerate() {
let [rs, gs, bs] = color.as_rgb_saturated();
let [rd, gd, bd] = color.as_rgb_desaturated();
let r_corr = (rs as f32 * saturation + rd as f32 * (1.0 - saturation)) as u8;
let g_corr = (gs as f32 * saturation + gd as f32 * (1.0 - saturation)) as u8;
let b_corr = (bs as f32 * saturation + bd as f32 * (1.0 - saturation)) as u8;
colors[idx] = [r_corr, g_corr, b_corr];
}
Palette(colors)
}
/// Returns the closest color to the provided RGB value available in the palette.
pub fn closest_color(&self, r: u8, g: u8, b: u8) -> Color {
let idx = self
.0
.iter()
.enumerate()
.min_by_key(|(_, &[pr, pg, pb])| {
let dr = if pr > r { pr - r } else { r - pr } as u32;
let dg = if pg > g { pg - g } else { g - pg } as u32;
let db = if pb > b { pb - b } else { b - pb } as u32;
dr * dr + dg * dg + db * db
})
.unwrap()
.0;
Color::all()[idx]
}
}
| copy_from | identifier_name |
args_info.rs | // Copyright (c) 2023 Google LLC All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
use crate::{
enum_only_single_field_unnamed_variants,
errors::Errors,
help::require_description,
parse_attrs::{check_enum_type_attrs, FieldAttrs, FieldKind, TypeAttrs, VariantAttrs},
Optionality, StructField,
};
use proc_macro2::{Span, TokenStream};
use quote::{quote, quote_spanned, ToTokens};
use syn::LitStr;
/// Implement the derive macro for ArgsInfo.
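///
/// The derive input is expected to be ordinary argh usage; a sketch (this
/// struct is illustrative, not taken from the macro itself):
///
/// ```ignore
/// #[derive(FromArgs, ArgsInfo)]
/// /// Reach new heights.
/// struct GoUp {
///     /// how high to go
///     #[argh(option)]
///     height: usize,
/// }
/// ```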
pub(crate) fn impl_args_info(input: &syn::DeriveInput) -> TokenStream {
let errors = &Errors::default();
// parse the types
let type_attrs = &TypeAttrs::parse(errors, input);
// Based on the type generate the appropriate code.
let mut output_tokens = match &input.data {
syn::Data::Struct(ds) => {
impl_arg_info_struct(errors, &input.ident, type_attrs, &input.generics, ds)
}
syn::Data::Enum(de) => {
impl_arg_info_enum(errors, &input.ident, type_attrs, &input.generics, de)
}
syn::Data::Union(_) => {
errors.err(input, "`#[derive(ArgsInfo)]` cannot be applied to unions");
TokenStream::new()
}
};
errors.to_tokens(&mut output_tokens);
output_tokens
}
/// Implement the ArgsInfo trait for a struct annotated with argh attributes.
fn impl_arg_info_struct(
errors: &Errors,
name: &syn::Ident,
type_attrs: &TypeAttrs,
generic_args: &syn::Generics,
ds: &syn::DataStruct,
) -> TokenStream {
// Collect the fields, skipping fields that are not supported.
let fields = match &ds.fields {
syn::Fields::Named(fields) => fields,
syn::Fields::Unnamed(_) => {
errors.err(
&ds.struct_token,
"`#![derive(ArgsInfo)]` is not currently supported on tuple structs",
);
return TokenStream::new();
}
syn::Fields::Unit => {
errors.err(&ds.struct_token, "#![derive(ArgsInfo)]` cannot be applied to unit structs");
return TokenStream::new();
}
};
// Map the fields into StructField objects.
let fields: Vec<_> = fields
.named
.iter()
.filter_map(|field| {
let attrs = FieldAttrs::parse(errors, field);
StructField::new(errors, field, attrs)
})
.collect();
let impl_span = Span::call_site();
// Generate the implementation of `get_args_info()` for this struct.
let args_info = impl_args_info_data(name, errors, type_attrs, &fields);
// Split out the generics info for the impl declaration.
let (impl_generics, ty_generics, where_clause) = generic_args.split_for_impl();
quote_spanned! { impl_span =>
#[automatically_derived]
impl #impl_generics argh::ArgsInfo for #name #ty_generics #where_clause {
fn get_args_info() -> argh::CommandInfoWithArgs {
#args_info
}
}
}
}
/// Implement ArgsInfo for an enum. The enum is a collection of subcommands.
fn impl_arg_info_enum(
errors: &Errors,
name: &syn::Ident,
type_attrs: &TypeAttrs,
generic_args: &syn::Generics,
de: &syn::DataEnum,
) -> TokenStream {
// Validate the enum is OK for argh.
check_enum_type_attrs(errors, type_attrs, &de.enum_token.span);
// Ensure that `#[argh(subcommand)]` is present.
if type_attrs.is_subcommand.is_none() {
errors.err_span(
de.enum_token.span,
concat!(
"`#![derive(ArgsInfo)]` on `enum`s can only be used to enumerate subcommands.\n",
"Consider adding `#[argh(subcommand)]` to the `enum` declaration.",
),
);
}
// One of the variants can be annotated as providing dynamic subcommands.
// We treat this differently since we need to call a function at runtime
// to determine the subcommands provided.
let mut dynamic_type_and_variant = None;
// An enum variant like `<name>(<ty>)`. This is used to collect
// the type of the variant for each subcommand.
struct ArgInfoVariant<'a> {
ty: &'a syn::Type,
}
let variants: Vec<ArgInfoVariant<'_>> = de
.variants
.iter()
.filter_map(|variant| {
let name = &variant.ident;
let ty = enum_only_single_field_unnamed_variants(errors, &variant.fields)?;
if VariantAttrs::parse(errors, variant).is_dynamic.is_some() {
if dynamic_type_and_variant.is_some() {
errors.err(variant, "Only one variant can have the `dynamic` attribute");
}
dynamic_type_and_variant = Some((ty, name));
None
} else {
Some(ArgInfoVariant { ty })
}
})
.collect();
let dynamic_subcommands = if let Some((dynamic_type, _)) = dynamic_type_and_variant {
quote! {
<#dynamic_type as argh::DynamicSubCommand>::commands().iter()
.map(|s|
SubCommandInfo {
name: s.name,
command: CommandInfoWithArgs {
name: s.name,
description: s.description,
..Default::default()
}
}).collect()
}
} else {
quote! { vec![]}
};
let variant_ty_info = variants.iter().map(|t| {
let ty = t.ty;
quote!(
argh::SubCommandInfo {
name: #ty::get_args_info().name,
command: #ty::get_args_info()
}
)
});
let cmd_name = if let Some(id) = &type_attrs.name {
id.clone()
} else {
LitStr::new("", Span::call_site())
};
let (impl_generics, ty_generics, where_clause) = generic_args.split_for_impl();
quote! {
#[automatically_derived]
impl #impl_generics argh::ArgsInfo for #name #ty_generics #where_clause {
fn get_args_info() -> argh::CommandInfoWithArgs {
let mut the_subcommands = vec![#(#variant_ty_info),*];
let mut dynamic_commands = #dynamic_subcommands;
the_subcommands.append(&mut dynamic_commands);
argh::CommandInfoWithArgs {
name: #cmd_name, | commands: the_subcommands,
..Default::default()
}
            } // end of get_args_info
} // end of impl ArgsInfo
}
}
fn impl_args_info_data<'a>(
name: &proc_macro2::Ident,
errors: &Errors,
type_attrs: &TypeAttrs,
fields: &'a [StructField<'a>],
) -> TokenStream {
let mut subcommands_iter =
fields.iter().filter(|field| field.kind == FieldKind::SubCommand).fuse();
let subcommand: Option<&StructField<'_>> = subcommands_iter.next();
for dup_subcommand in subcommands_iter {
errors.duplicate_attrs("subcommand", subcommand.unwrap().field, dup_subcommand.field);
}
let impl_span = Span::call_site();
let mut positionals = vec![];
let mut flags = vec![];
// Add the implicit --help flag
flags.push(quote! {
argh::FlagInfo {
short: None,
long: "--help",
description: "display usage information",
optionality: argh::Optionality::Optional,
kind: argh::FlagInfoKind::Switch,
hidden: false
}
});
for field in fields {
let optionality = match field.optionality {
Optionality::None => quote! { argh::Optionality::Required },
Optionality::Defaulted(_) => quote! { argh::Optionality::Optional },
Optionality::Optional => quote! { argh::Optionality::Optional },
Optionality::Repeating if field.attrs.greedy.is_some() => {
quote! { argh::Optionality::Greedy }
}
Optionality::Repeating => quote! { argh::Optionality::Repeating },
};
match field.kind {
FieldKind::Positional => {
let name = field.positional_arg_name();
let description = if let Some(desc) = &field.attrs.description {
desc.content.value().trim().to_owned()
} else {
String::new()
};
let hidden = field.attrs.hidden_help;
positionals.push(quote! {
argh::PositionalInfo {
name: #name,
description: #description,
optionality: #optionality,
hidden: #hidden,
}
});
}
FieldKind::Switch | FieldKind::Option => {
let short = if let Some(short) = &field.attrs.short {
quote! { Some(#short) }
} else {
quote! { None }
};
let long = field.long_name.as_ref().expect("missing long name for option");
let description = require_description(
errors,
field.name.span(),
&field.attrs.description,
"field",
);
let kind = if field.kind == FieldKind::Switch {
quote! {
argh::FlagInfoKind::Switch
}
} else {
let arg_name = if let Some(arg_name) = &field.attrs.arg_name {
quote! { #arg_name }
} else {
let arg_name = long.trim_start_matches("--");
quote! { #arg_name }
};
quote! {
argh::FlagInfoKind::Option {
arg_name: #arg_name,
}
}
};
let hidden = field.attrs.hidden_help;
flags.push(quote! {
argh::FlagInfo {
short: #short,
long: #long,
description: #description,
optionality: #optionality,
kind: #kind,
hidden: #hidden,
}
});
}
FieldKind::SubCommand => {}
}
}
let empty_str = syn::LitStr::new("", Span::call_site());
let type_name = LitStr::new(&name.to_string(), Span::call_site());
let subcommand_name = if type_attrs.is_subcommand.is_some() {
type_attrs.name.as_ref().unwrap_or_else(|| {
errors.err(name, "`#[argh(name = \"...\")]` attribute is required for subcommands");
&empty_str
})
} else {
&type_name
};
let subcommand = if let Some(subcommand) = subcommand {
let subcommand_ty = subcommand.ty_without_wrapper;
quote! {
#subcommand_ty::get_subcommands()
}
} else {
quote! {vec![]}
};
let description =
require_description(errors, Span::call_site(), &type_attrs.description, "type");
let examples = type_attrs.examples.iter().map(|e| quote! { #e });
let notes = type_attrs.notes.iter().map(|e| quote! { #e });
let error_codes = type_attrs.error_codes.iter().map(|(code, text)| {
quote! { argh::ErrorCodeInfo{code:#code, description: #text} }
});
quote_spanned! { impl_span =>
argh::CommandInfoWithArgs {
name: #subcommand_name,
description: #description,
examples: &[#( #examples, )*],
notes: &[#( #notes, )*],
positionals: &[#( #positionals, )*],
flags: &[#( #flags, )*],
commands: #subcommand,
error_codes: &[#( #error_codes, )*],
}
}
} | /// A short description of the command's functionality.
description: " enum of subcommands", | random_line_split |
args_info.rs | // Copyright (c) 2023 Google LLC All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
use crate::{
enum_only_single_field_unnamed_variants,
errors::Errors,
help::require_description,
parse_attrs::{check_enum_type_attrs, FieldAttrs, FieldKind, TypeAttrs, VariantAttrs},
Optionality, StructField,
};
use proc_macro2::{Span, TokenStream};
use quote::{quote, quote_spanned, ToTokens};
use syn::LitStr;
/// Implement the derive macro for ArgsInfo.
pub(crate) fn impl_args_info(input: &syn::DeriveInput) -> TokenStream {
let errors = &Errors::default();
// parse the types
let type_attrs = &TypeAttrs::parse(errors, input);
// Based on the type generate the appropriate code.
let mut output_tokens = match &input.data {
syn::Data::Struct(ds) => {
impl_arg_info_struct(errors, &input.ident, type_attrs, &input.generics, ds)
}
syn::Data::Enum(de) => {
impl_arg_info_enum(errors, &input.ident, type_attrs, &input.generics, de)
}
syn::Data::Union(_) => {
errors.err(input, "`#[derive(ArgsInfo)]` cannot be applied to unions");
TokenStream::new()
}
};
errors.to_tokens(&mut output_tokens);
output_tokens
}
/// Implement the ArgsInfo trait for a struct annotated with argh attributes.
fn impl_arg_info_struct(
errors: &Errors,
name: &syn::Ident,
type_attrs: &TypeAttrs,
generic_args: &syn::Generics,
ds: &syn::DataStruct,
) -> TokenStream {
// Collect the fields, skipping fields that are not supported.
let fields = match &ds.fields {
syn::Fields::Named(fields) => fields,
syn::Fields::Unnamed(_) => {
errors.err(
&ds.struct_token,
"`#![derive(ArgsInfo)]` is not currently supported on tuple structs",
);
return TokenStream::new();
}
syn::Fields::Unit => {
errors.err(&ds.struct_token, "#![derive(ArgsInfo)]` cannot be applied to unit structs");
return TokenStream::new();
}
};
// Map the fields into StructField objects.
let fields: Vec<_> = fields
.named
.iter()
.filter_map(|field| {
let attrs = FieldAttrs::parse(errors, field);
StructField::new(errors, field, attrs)
})
.collect();
let impl_span = Span::call_site();
// Generate the implementation of `get_args_info()` for this struct.
let args_info = impl_args_info_data(name, errors, type_attrs, &fields);
// Split out the generics info for the impl declaration.
let (impl_generics, ty_generics, where_clause) = generic_args.split_for_impl();
quote_spanned! { impl_span =>
#[automatically_derived]
impl #impl_generics argh::ArgsInfo for #name #ty_generics #where_clause {
fn get_args_info() -> argh::CommandInfoWithArgs {
#args_info
}
}
}
}
/// Implement ArgsInfo for an enum. The enum is a collection of subcommands.
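///
/// The expected enum shape is a set of single-field tuple variants wrapping
/// the subcommand types (names here are illustrative, not from this crate):
///
/// ```ignore
/// #[derive(FromArgs, ArgsInfo)]
/// #[argh(subcommand)]
/// enum SubCommands {
///     One(SubCommandOne),
///     Two(SubCommandTwo),
/// }
/// ```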
fn impl_arg_info_enum(
errors: &Errors,
name: &syn::Ident,
type_attrs: &TypeAttrs,
generic_args: &syn::Generics,
de: &syn::DataEnum,
) -> TokenStream | // An enum variant like `<name>(<ty>)`. This is used to collect
// the type of the variant for each subcommand.
struct ArgInfoVariant<'a> {
ty: &'a syn::Type,
}
let variants: Vec<ArgInfoVariant<'_>> = de
.variants
.iter()
.filter_map(|variant| {
let name = &variant.ident;
let ty = enum_only_single_field_unnamed_variants(errors, &variant.fields)?;
if VariantAttrs::parse(errors, variant).is_dynamic.is_some() {
if dynamic_type_and_variant.is_some() {
errors.err(variant, "Only one variant can have the `dynamic` attribute");
}
dynamic_type_and_variant = Some((ty, name));
None
} else {
Some(ArgInfoVariant { ty })
}
})
.collect();
let dynamic_subcommands = if let Some((dynamic_type, _)) = dynamic_type_and_variant {
quote! {
<#dynamic_type as argh::DynamicSubCommand>::commands().iter()
.map(|s|
SubCommandInfo {
name: s.name,
command: CommandInfoWithArgs {
name: s.name,
description: s.description,
..Default::default()
}
}).collect()
}
} else {
quote! { vec![]}
};
let variant_ty_info = variants.iter().map(|t| {
let ty = t.ty;
quote!(
argh::SubCommandInfo {
name: #ty::get_args_info().name,
command: #ty::get_args_info()
}
)
});
let cmd_name = if let Some(id) = &type_attrs.name {
id.clone()
} else {
LitStr::new("", Span::call_site())
};
let (impl_generics, ty_generics, where_clause) = generic_args.split_for_impl();
quote! {
#[automatically_derived]
impl #impl_generics argh::ArgsInfo for #name #ty_generics #where_clause {
fn get_args_info() -> argh::CommandInfoWithArgs {
let mut the_subcommands = vec![#(#variant_ty_info),*];
let mut dynamic_commands = #dynamic_subcommands;
the_subcommands.append(&mut dynamic_commands);
argh::CommandInfoWithArgs {
name: #cmd_name,
/// A short description of the command's functionality.
description: " enum of subcommands",
commands: the_subcommands,
..Default::default()
}
            } // end of get_args_info
} // end of impl ArgsInfo
}
}
fn impl_args_info_data<'a>(
name: &proc_macro2::Ident,
errors: &Errors,
type_attrs: &TypeAttrs,
fields: &'a [StructField<'a>],
) -> TokenStream {
let mut subcommands_iter =
fields.iter().filter(|field| field.kind == FieldKind::SubCommand).fuse();
let subcommand: Option<&StructField<'_>> = subcommands_iter.next();
for dup_subcommand in subcommands_iter {
errors.duplicate_attrs("subcommand", subcommand.unwrap().field, dup_subcommand.field);
}
let impl_span = Span::call_site();
let mut positionals = vec![];
let mut flags = vec![];
// Add the implicit --help flag
flags.push(quote! {
argh::FlagInfo {
short: None,
long: "--help",
description: "display usage information",
optionality: argh::Optionality::Optional,
kind: argh::FlagInfoKind::Switch,
hidden: false
}
});
for field in fields {
let optionality = match field.optionality {
Optionality::None => quote! { argh::Optionality::Required },
Optionality::Defaulted(_) => quote! { argh::Optionality::Optional },
Optionality::Optional => quote! { argh::Optionality::Optional },
Optionality::Repeating if field.attrs.greedy.is_some() => {
quote! { argh::Optionality::Greedy }
}
Optionality::Repeating => quote! { argh::Optionality::Repeating },
};
match field.kind {
FieldKind::Positional => {
let name = field.positional_arg_name();
let description = if let Some(desc) = &field.attrs.description {
desc.content.value().trim().to_owned()
} else {
String::new()
};
let hidden = field.attrs.hidden_help;
positionals.push(quote! {
argh::PositionalInfo {
name: #name,
description: #description,
optionality: #optionality,
hidden: #hidden,
}
});
}
FieldKind::Switch | FieldKind::Option => {
let short = if let Some(short) = &field.attrs.short {
quote! { Some(#short) }
} else {
quote! { None }
};
let long = field.long_name.as_ref().expect("missing long name for option");
let description = require_description(
errors,
field.name.span(),
&field.attrs.description,
"field",
);
let kind = if field.kind == FieldKind::Switch {
quote! {
argh::FlagInfoKind::Switch
}
} else {
let arg_name = if let Some(arg_name) = &field.attrs.arg_name {
quote! { #arg_name }
} else {
let arg_name = long.trim_start_matches("--");
quote! { #arg_name }
};
quote! {
argh::FlagInfoKind::Option {
arg_name: #arg_name,
}
}
};
let hidden = field.attrs.hidden_help;
flags.push(quote! {
argh::FlagInfo {
short: #short,
long: #long,
description: #description,
optionality: #optionality,
kind: #kind,
hidden: #hidden,
}
});
}
FieldKind::SubCommand => {}
}
}
let empty_str = syn::LitStr::new("", Span::call_site());
let type_name = LitStr::new(&name.to_string(), Span::call_site());
let subcommand_name = if type_attrs.is_subcommand.is_some() {
type_attrs.name.as_ref().unwrap_or_else(|| {
errors.err(name, "`#[argh(name = \"...\")]` attribute is required for subcommands");
&empty_str
})
} else {
&type_name
};
let subcommand = if let Some(subcommand) = subcommand {
let subcommand_ty = subcommand.ty_without_wrapper;
quote! {
#subcommand_ty::get_subcommands()
}
} else {
quote! {vec![]}
};
let description =
require_description(errors, Span::call_site(), &type_attrs.description, "type");
let examples = type_attrs.examples.iter().map(|e| quote! { #e });
let notes = type_attrs.notes.iter().map(|e| quote! { #e });
let error_codes = type_attrs.error_codes.iter().map(|(code, text)| {
quote! { argh::ErrorCodeInfo{code:#code, description: #text} }
});
quote_spanned! { impl_span =>
argh::CommandInfoWithArgs {
name: #subcommand_name,
description: #description,
examples: &[#( #examples, )*],
notes: &[#( #notes, )*],
positionals: &[#( #positionals, )*],
flags: &[#( #flags, )*],
commands: #subcommand,
error_codes: &[#( #error_codes, )*],
}
}
}
| {
// Validate the enum is OK for argh.
check_enum_type_attrs(errors, type_attrs, &de.enum_token.span);
// Ensure that `#[argh(subcommand)]` is present.
if type_attrs.is_subcommand.is_none() {
errors.err_span(
de.enum_token.span,
concat!(
"`#![derive(ArgsInfo)]` on `enum`s can only be used to enumerate subcommands.\n",
"Consider adding `#[argh(subcommand)]` to the `enum` declaration.",
),
);
}
// One of the variants can be annotated as providing dynamic subcommands.
// We treat this differently since we need to call a function at runtime
// to determine the subcommands provided.
let mut dynamic_type_and_variant = None;
| identifier_body |
args_info.rs | // Copyright (c) 2023 Google LLC All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
use crate::{
enum_only_single_field_unnamed_variants,
errors::Errors,
help::require_description,
parse_attrs::{check_enum_type_attrs, FieldAttrs, FieldKind, TypeAttrs, VariantAttrs},
Optionality, StructField,
};
use proc_macro2::{Span, TokenStream};
use quote::{quote, quote_spanned, ToTokens};
use syn::LitStr;
/// Implement the derive macro for ArgsInfo.
pub(crate) fn impl_args_info(input: &syn::DeriveInput) -> TokenStream {
let errors = &Errors::default();
// parse the types
let type_attrs = &TypeAttrs::parse(errors, input);
// Based on the type generate the appropriate code.
let mut output_tokens = match &input.data {
syn::Data::Struct(ds) => {
impl_arg_info_struct(errors, &input.ident, type_attrs, &input.generics, ds)
}
syn::Data::Enum(de) => |
syn::Data::Union(_) => {
errors.err(input, "`#[derive(ArgsInfo)]` cannot be applied to unions");
TokenStream::new()
}
};
errors.to_tokens(&mut output_tokens);
output_tokens
}
/// Implement the ArgsInfo trait for a struct annotated with argh attributes.
fn impl_arg_info_struct(
errors: &Errors,
name: &syn::Ident,
type_attrs: &TypeAttrs,
generic_args: &syn::Generics,
ds: &syn::DataStruct,
) -> TokenStream {
// Collect the fields, skipping fields that are not supported.
let fields = match &ds.fields {
syn::Fields::Named(fields) => fields,
syn::Fields::Unnamed(_) => {
errors.err(
&ds.struct_token,
"`#![derive(ArgsInfo)]` is not currently supported on tuple structs",
);
return TokenStream::new();
}
syn::Fields::Unit => {
errors.err(&ds.struct_token, "#![derive(ArgsInfo)]` cannot be applied to unit structs");
return TokenStream::new();
}
};
// Map the fields into StructField objects.
let fields: Vec<_> = fields
.named
.iter()
.filter_map(|field| {
let attrs = FieldAttrs::parse(errors, field);
StructField::new(errors, field, attrs)
})
.collect();
let impl_span = Span::call_site();
// Generate the implementation of `get_args_info()` for this struct.
let args_info = impl_args_info_data(name, errors, type_attrs, &fields);
// Split out the generics info for the impl declaration.
let (impl_generics, ty_generics, where_clause) = generic_args.split_for_impl();
quote_spanned! { impl_span =>
#[automatically_derived]
impl #impl_generics argh::ArgsInfo for #name #ty_generics #where_clause {
fn get_args_info() -> argh::CommandInfoWithArgs {
#args_info
}
}
}
}
/// Implement ArgsInfo for an enum. The enum is a collection of subcommands.
fn impl_arg_info_enum(
errors: &Errors,
name: &syn::Ident,
type_attrs: &TypeAttrs,
generic_args: &syn::Generics,
de: &syn::DataEnum,
) -> TokenStream {
// Validate the enum is OK for argh.
check_enum_type_attrs(errors, type_attrs, &de.enum_token.span);
// Ensure that `#[argh(subcommand)]` is present.
if type_attrs.is_subcommand.is_none() {
errors.err_span(
de.enum_token.span,
concat!(
"`#![derive(ArgsInfo)]` on `enum`s can only be used to enumerate subcommands.\n",
"Consider adding `#[argh(subcommand)]` to the `enum` declaration.",
),
);
}
// One of the variants can be annotated as providing dynamic subcommands.
// We treat this differently since we need to call a function at runtime
// to determine the subcommands provided.
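    // A variant marked with `#[argh(dynamic)]`, e.g. `Dynamic(Dyn)` where
    // `Dyn: argh::DynamicSubCommand` (names here are illustrative), is
    // resolved at runtime via `commands()` instead of at expansion time.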
let mut dynamic_type_and_variant = None;
// An enum variant like `<name>(<ty>)`. This is used to collect
// the type of the variant for each subcommand.
struct ArgInfoVariant<'a> {
ty: &'a syn::Type,
}
let variants: Vec<ArgInfoVariant<'_>> = de
.variants
.iter()
.filter_map(|variant| {
let name = &variant.ident;
let ty = enum_only_single_field_unnamed_variants(errors, &variant.fields)?;
if VariantAttrs::parse(errors, variant).is_dynamic.is_some() {
if dynamic_type_and_variant.is_some() {
errors.err(variant, "Only one variant can have the `dynamic` attribute");
}
dynamic_type_and_variant = Some((ty, name));
None
} else {
Some(ArgInfoVariant { ty })
}
})
.collect();
let dynamic_subcommands = if let Some((dynamic_type, _)) = dynamic_type_and_variant {
quote! {
<#dynamic_type as argh::DynamicSubCommand>::commands().iter()
.map(|s|
SubCommandInfo {
name: s.name,
command: CommandInfoWithArgs {
name: s.name,
description: s.description,
..Default::default()
}
}).collect()
}
} else {
quote! { vec![]}
};
let variant_ty_info = variants.iter().map(|t| {
let ty = t.ty;
quote!(
argh::SubCommandInfo {
name: #ty::get_args_info().name,
command: #ty::get_args_info()
}
)
});
let cmd_name = if let Some(id) = &type_attrs.name {
id.clone()
} else {
LitStr::new("", Span::call_site())
};
let (impl_generics, ty_generics, where_clause) = generic_args.split_for_impl();
quote! {
#[automatically_derived]
impl #impl_generics argh::ArgsInfo for #name #ty_generics #where_clause {
fn get_args_info() -> argh::CommandInfoWithArgs {
let mut the_subcommands = vec![#(#variant_ty_info),*];
let mut dynamic_commands = #dynamic_subcommands;
the_subcommands.append(&mut dynamic_commands);
argh::CommandInfoWithArgs {
name: #cmd_name,
/// A short description of the command's functionality.
description: " enum of subcommands",
commands: the_subcommands,
..Default::default()
}
            } // end of get_args_info
} // end of impl ArgsInfo
}
}
fn impl_args_info_data<'a>(
name: &proc_macro2::Ident,
errors: &Errors,
type_attrs: &TypeAttrs,
fields: &'a [StructField<'a>],
) -> TokenStream {
let mut subcommands_iter =
fields.iter().filter(|field| field.kind == FieldKind::SubCommand).fuse();
let subcommand: Option<&StructField<'_>> = subcommands_iter.next();
for dup_subcommand in subcommands_iter {
errors.duplicate_attrs("subcommand", subcommand.unwrap().field, dup_subcommand.field);
}
let impl_span = Span::call_site();
let mut positionals = vec![];
let mut flags = vec![];
// Add the implicit --help flag
flags.push(quote! {
argh::FlagInfo {
short: None,
long: "--help",
description: "display usage information",
optionality: argh::Optionality::Optional,
kind: argh::FlagInfoKind::Switch,
hidden: false
}
});
for field in fields {
let optionality = match field.optionality {
Optionality::None => quote! { argh::Optionality::Required },
Optionality::Defaulted(_) => quote! { argh::Optionality::Optional },
Optionality::Optional => quote! { argh::Optionality::Optional },
Optionality::Repeating if field.attrs.greedy.is_some() => {
quote! { argh::Optionality::Greedy }
}
Optionality::Repeating => quote! { argh::Optionality::Repeating },
};
match field.kind {
FieldKind::Positional => {
let name = field.positional_arg_name();
let description = if let Some(desc) = &field.attrs.description {
desc.content.value().trim().to_owned()
} else {
String::new()
};
let hidden = field.attrs.hidden_help;
positionals.push(quote! {
argh::PositionalInfo {
name: #name,
description: #description,
optionality: #optionality,
hidden: #hidden,
}
});
}
FieldKind::Switch | FieldKind::Option => {
let short = if let Some(short) = &field.attrs.short {
quote! { Some(#short) }
} else {
quote! { None }
};
let long = field.long_name.as_ref().expect("missing long name for option");
let description = require_description(
errors,
field.name.span(),
&field.attrs.description,
"field",
);
let kind = if field.kind == FieldKind::Switch {
quote! {
argh::FlagInfoKind::Switch
}
} else {
let arg_name = if let Some(arg_name) = &field.attrs.arg_name {
quote! { #arg_name }
} else {
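                        // Fall back to the long flag name minus its leading
                        // dashes, e.g. `--out-dir` yields `out-dir`.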
let arg_name = long.trim_start_matches("--");
quote! { #arg_name }
};
quote! {
argh::FlagInfoKind::Option {
arg_name: #arg_name,
}
}
};
let hidden = field.attrs.hidden_help;
flags.push(quote! {
argh::FlagInfo {
short: #short,
long: #long,
description: #description,
optionality: #optionality,
kind: #kind,
hidden: #hidden,
}
});
}
FieldKind::SubCommand => {}
}
}
let empty_str = syn::LitStr::new("", Span::call_site());
let type_name = LitStr::new(&name.to_string(), Span::call_site());
let subcommand_name = if type_attrs.is_subcommand.is_some() {
type_attrs.name.as_ref().unwrap_or_else(|| {
errors.err(name, "`#[argh(name = \"...\")]` attribute is required for subcommands");
&empty_str
})
} else {
&type_name
};
let subcommand = if let Some(subcommand) = subcommand {
let subcommand_ty = subcommand.ty_without_wrapper;
quote! {
#subcommand_ty::get_subcommands()
}
} else {
quote! {vec![]}
};
let description =
require_description(errors, Span::call_site(), &type_attrs.description, "type");
let examples = type_attrs.examples.iter().map(|e| quote! { #e });
let notes = type_attrs.notes.iter().map(|e| quote! { #e });
let error_codes = type_attrs.error_codes.iter().map(|(code, text)| {
quote! { argh::ErrorCodeInfo{code:#code, description: #text} }
});
quote_spanned! { impl_span =>
argh::CommandInfoWithArgs {
name: #subcommand_name,
description: #description,
examples: &[#( #examples, )*],
notes: &[#( #notes, )*],
positionals: &[#( #positionals, )*],
flags: &[#( #flags, )*],
commands: #subcommand,
error_codes: &[#( #error_codes, )*],
}
}
}
| {
impl_arg_info_enum(errors, &input.ident, type_attrs, &input.generics, de)
} | conditional_block |
args_info.rs | // Copyright (c) 2023 Google LLC All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
use crate::{
enum_only_single_field_unnamed_variants,
errors::Errors,
help::require_description,
parse_attrs::{check_enum_type_attrs, FieldAttrs, FieldKind, TypeAttrs, VariantAttrs},
Optionality, StructField,
};
use proc_macro2::{Span, TokenStream};
use quote::{quote, quote_spanned, ToTokens};
use syn::LitStr;
/// Implement the derive macro for ArgsInfo.
pub(crate) fn impl_args_info(input: &syn::DeriveInput) -> TokenStream {
let errors = &Errors::default();
// parse the types
let type_attrs = &TypeAttrs::parse(errors, input);
// Based on the type generate the appropriate code.
let mut output_tokens = match &input.data {
syn::Data::Struct(ds) => {
impl_arg_info_struct(errors, &input.ident, type_attrs, &input.generics, ds)
}
syn::Data::Enum(de) => {
impl_arg_info_enum(errors, &input.ident, type_attrs, &input.generics, de)
}
syn::Data::Union(_) => {
errors.err(input, "`#[derive(ArgsInfo)]` cannot be applied to unions");
TokenStream::new()
}
};
errors.to_tokens(&mut output_tokens);
output_tokens
}
/// Implement the ArgsInfo trait for a struct annotated with argh attributes.
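///
/// A sketch of a struct this handles (the field is illustrative):
///
/// ```ignore
/// #[derive(FromArgs, ArgsInfo)]
/// /// Frobnicate an input file.
/// struct Frob {
///     /// path to the input file
///     #[argh(positional)]
///     input: String,
/// }
/// ```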
fn impl_arg_info_struct(
errors: &Errors,
name: &syn::Ident,
type_attrs: &TypeAttrs,
generic_args: &syn::Generics,
ds: &syn::DataStruct,
) -> TokenStream {
// Collect the fields, skipping fields that are not supported.
let fields = match &ds.fields {
syn::Fields::Named(fields) => fields,
syn::Fields::Unnamed(_) => {
errors.err(
&ds.struct_token,
"`#![derive(ArgsInfo)]` is not currently supported on tuple structs",
);
return TokenStream::new();
}
syn::Fields::Unit => {
errors.err(&ds.struct_token, "#![derive(ArgsInfo)]` cannot be applied to unit structs");
return TokenStream::new();
}
};
// Map the fields into StructField objects.
let fields: Vec<_> = fields
.named
.iter()
.filter_map(|field| {
let attrs = FieldAttrs::parse(errors, field);
StructField::new(errors, field, attrs)
})
.collect();
let impl_span = Span::call_site();
// Generate the implementation of `get_args_info()` for this struct.
let args_info = impl_args_info_data(name, errors, type_attrs, &fields);
// Split out the generics info for the impl declaration.
let (impl_generics, ty_generics, where_clause) = generic_args.split_for_impl();
quote_spanned! { impl_span =>
#[automatically_derived]
impl #impl_generics argh::ArgsInfo for #name #ty_generics #where_clause {
fn get_args_info() -> argh::CommandInfoWithArgs {
#args_info
}
}
}
}
/// Implement ArgsInfo for an enum. The enum is a collection of subcommands.
fn impl_arg_info_enum(
errors: &Errors,
name: &syn::Ident,
type_attrs: &TypeAttrs,
generic_args: &syn::Generics,
de: &syn::DataEnum,
) -> TokenStream {
// Validate the enum is OK for argh.
check_enum_type_attrs(errors, type_attrs, &de.enum_token.span);
// Ensure that `#[argh(subcommand)]` is present.
if type_attrs.is_subcommand.is_none() {
errors.err_span(
de.enum_token.span,
concat!(
"`#![derive(ArgsInfo)]` on `enum`s can only be used to enumerate subcommands.\n",
"Consider adding `#[argh(subcommand)]` to the `enum` declaration.",
),
);
}
// One of the variants can be annotated as providing dynamic subcommands.
// We treat this differently since we need to call a function at runtime
// to determine the subcommands provided.
let mut dynamic_type_and_variant = None;
// An enum variant like `<name>(<ty>)`. This is used to collect
// the type of the variant for each subcommand.
struct | <'a> {
ty: &'a syn::Type,
}
let variants: Vec<ArgInfoVariant<'_>> = de
.variants
.iter()
.filter_map(|variant| {
let name = &variant.ident;
let ty = enum_only_single_field_unnamed_variants(errors, &variant.fields)?;
if VariantAttrs::parse(errors, variant).is_dynamic.is_some() {
if dynamic_type_and_variant.is_some() {
errors.err(variant, "Only one variant can have the `dynamic` attribute");
}
dynamic_type_and_variant = Some((ty, name));
None
} else {
Some(ArgInfoVariant { ty })
}
})
.collect();
let dynamic_subcommands = if let Some((dynamic_type, _)) = dynamic_type_and_variant {
quote! {
<#dynamic_type as argh::DynamicSubCommand>::commands().iter()
.map(|s|
SubCommandInfo {
name: s.name,
command: CommandInfoWithArgs {
name: s.name,
description: s.description,
..Default::default()
}
}).collect()
}
} else {
quote! { vec![]}
};
let variant_ty_info = variants.iter().map(|t| {
let ty = t.ty;
quote!(
argh::SubCommandInfo {
name: #ty::get_args_info().name,
command: #ty::get_args_info()
}
)
});
let cmd_name = if let Some(id) = &type_attrs.name {
id.clone()
} else {
LitStr::new("", Span::call_site())
};
let (impl_generics, ty_generics, where_clause) = generic_args.split_for_impl();
quote! {
#[automatically_derived]
impl #impl_generics argh::ArgsInfo for #name #ty_generics #where_clause {
fn get_args_info() -> argh::CommandInfoWithArgs {
let mut the_subcommands = vec![#(#variant_ty_info),*];
let mut dynamic_commands = #dynamic_subcommands;
the_subcommands.append(&mut dynamic_commands);
argh::CommandInfoWithArgs {
name: #cmd_name,
// A short description of the command's functionality.
description: "enum of subcommands",
commands: the_subcommands,
..Default::default()
}
} // end of get_args_info
} // end of impl ArgsInfo
}
}
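// Illustrative sketch (assumed shape): the subcommand enum this function
// expects. Each variant wraps a type that itself implements `ArgsInfo`; at
// most one variant may carry the `dynamic` attribute handled above.
//
// #[derive(ArgsInfo)]
// #[argh(subcommand)]
// enum SubCommands {
//     One(SubCommandOne),
//     Two(SubCommandTwo),
// }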
fn impl_args_info_data<'a>(
name: &proc_macro2::Ident,
errors: &Errors,
type_attrs: &TypeAttrs,
fields: &'a [StructField<'a>],
) -> TokenStream {
let mut subcommands_iter =
fields.iter().filter(|field| field.kind == FieldKind::SubCommand).fuse();
let subcommand: Option<&StructField<'_>> = subcommands_iter.next();
for dup_subcommand in subcommands_iter {
errors.duplicate_attrs("subcommand", subcommand.unwrap().field, dup_subcommand.field);
}
let impl_span = Span::call_site();
let mut positionals = vec![];
let mut flags = vec![];
// Add the implicit --help flag
flags.push(quote! {
argh::FlagInfo {
short: None,
long: "--help",
description: "display usage information",
optionality: argh::Optionality::Optional,
kind: argh::FlagInfoKind::Switch,
hidden: false
}
});
for field in fields {
let optionality = match field.optionality {
Optionality::None => quote! { argh::Optionality::Required },
Optionality::Defaulted(_) => quote! { argh::Optionality::Optional },
Optionality::Optional => quote! { argh::Optionality::Optional },
Optionality::Repeating if field.attrs.greedy.is_some() => {
quote! { argh::Optionality::Greedy }
}
Optionality::Repeating => quote! { argh::Optionality::Repeating },
};
match field.kind {
FieldKind::Positional => {
let name = field.positional_arg_name();
let description = if let Some(desc) = &field.attrs.description {
desc.content.value().trim().to_owned()
} else {
String::new()
};
let hidden = field.attrs.hidden_help;
positionals.push(quote! {
argh::PositionalInfo {
name: #name,
description: #description,
optionality: #optionality,
hidden: #hidden,
}
});
}
FieldKind::Switch | FieldKind::Option => {
let short = if let Some(short) = &field.attrs.short {
quote! { Some(#short) }
} else {
quote! { None }
};
let long = field.long_name.as_ref().expect("missing long name for option");
let description = require_description(
errors,
field.name.span(),
&field.attrs.description,
"field",
);
let kind = if field.kind == FieldKind::Switch {
quote! {
argh::FlagInfoKind::Switch
}
} else {
let arg_name = if let Some(arg_name) = &field.attrs.arg_name {
quote! { #arg_name }
} else {
let arg_name = long.trim_start_matches("--");
quote! { #arg_name }
};
quote! {
argh::FlagInfoKind::Option {
arg_name: #arg_name,
}
}
};
let hidden = field.attrs.hidden_help;
flags.push(quote! {
argh::FlagInfo {
short: #short,
long: #long,
description: #description,
optionality: #optionality,
kind: #kind,
hidden: #hidden,
}
});
}
FieldKind::SubCommand => {}
}
}
let empty_str = syn::LitStr::new("", Span::call_site());
let type_name = LitStr::new(&name.to_string(), Span::call_site());
let subcommand_name = if type_attrs.is_subcommand.is_some() {
type_attrs.name.as_ref().unwrap_or_else(|| {
errors.err(name, "`#[argh(name = \"...\")]` attribute is required for subcommands");
&empty_str
})
} else {
&type_name
};
let subcommand = if let Some(subcommand) = subcommand {
let subcommand_ty = subcommand.ty_without_wrapper;
quote! {
#subcommand_ty::get_subcommands()
}
} else {
quote! { vec![] }
};
let description =
require_description(errors, Span::call_site(), &type_attrs.description, "type");
let examples = type_attrs.examples.iter().map(|e| quote! { #e });
let notes = type_attrs.notes.iter().map(|e| quote! { #e });
let error_codes = type_attrs.error_codes.iter().map(|(code, text)| {
quote! { argh::ErrorCodeInfo{code:#code, description: #text} }
});
quote_spanned! { impl_span =>
argh::CommandInfoWithArgs {
name: #subcommand_name,
description: #description,
examples: &[#( #examples, )*],
notes: &[#( #notes, )*],
positionals: &[#( #positionals, )*],
flags: &[#( #flags, )*],
commands: #subcommand,
error_codes: &[#( #error_codes, )*],
}
}
}
| ArgInfoVariant | identifier_name |
command.rs | use std::process::{Command, Stdio};
use std::io::{Read, Write}; // `Read`/`Write` provide read_to_end/write_all on the child pipes
use std::path::Path;
pub struct Commander { path: String }
impl Commander {
pub fn new<P: AsRef<Path>>(save_path: P) -> Commander {
Commander {
path: save_path.as_ref().to_str().expect("contains invalid utf-8 character").to_owned()
}
}
/// Preparation for ch02: save the tab-separated file hightemp.txt by
/// executing `curl http://www.cl.ecei.tohoku.ac.jp/nlp100/data/hightemp.txt -o ${save_path}`.
/// The file is a record of maximum temperatures in Japan; each line is
/// composed of prefecture, location, temperature and date.
pub fn prepare(&self) {
let path = "http://www.cl.ecei.tohoku.ac.jp/nlp100/data/hightemp.txt";
let output = Command::new("curl")
.arg(path)
.args(&["-o", &self.path])
.output().expect("fail to execute process");
debug!("status: {:?}", output.status);
debug!("stdout: {}", String::from_utf8_lossy(&output.stdout));
debug!("stderr: \n{}", String::from_utf8_lossy(&output.stderr));
}
/// ch02.10: count the lines in the designated file.
pub fn count_lines(&self) -> Result<usize, ::std::num::ParseIntError> {
let output = Command::new("wc")
.arg("-l")
.arg(&self.path)
.output().expect("fail to execute process");
String::from_utf8_lossy(&output.stdout)
.as_ref()
.trim()
.split(" ")
.take(1)
.collect::<String>()
.parse::<usize>()
}
/// ch02.11: replace tabs with spaces.
pub fn replace_tab_to_space(&self) -> String {
let mut cat = Command::new("cat")
.arg(&self.path)
.stdout(Stdio::piped())
.spawn().expect("fail to execute cat command");
let mut tr = Command::new("tr")
.arg("[:blank:]")
.arg(" ")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn().expect("fail to execute tr command");
// see https://www.reddit.com/r/rust/comments/3azfie/how_to_pipe_one_process_into_another/
if let Some(ref mut stdout) = cat.stdout {
if let Some(ref mut stdin) = tr.stdin {
let mut buf: Vec<u8> = Vec::new();
stdout.read_to_end(&mut buf).unwrap();
stdin.write_all(&buf).unwrap();
}
}
let res = tr.wait_with_output().unwrap().stdout;
String::from_utf8(res).expect("contain invalid utf-8 character")
}
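// Minimal standalone sketch of the same pipe idiom (assumes `echo` and `tr`
// on PATH; a `ChildStdout` converts into `Stdio`, avoiding the manual buffer):
//
// let echo = Command::new("echo").arg("a\tb")
//     .stdout(Stdio::piped()).spawn().expect("failed to spawn echo");
// let tr = Command::new("tr").arg("\\t").arg(" ")
//     .stdin(echo.stdout.expect("no stdout"))
//     .stdout(Stdio::piped()).spawn().expect("failed to spawn tr");
// let out = tr.wait_with_output().unwrap();
// assert_eq!(out.stdout, b"a b\n");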
/// ch02.12 (preparation): extract the n-th column (0-indexed) via `cut -f`.
pub fn extract_row(&self, n: usize) -> String {
let res = Command::new("cut")
.args(&["-f", &format!("{}", n + 1)]) // start at 0
.arg(&self.path)
.output().expect("fail to execute cut command");
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
/// ch02.13: merge two files column-wise (`paste file1 file2`).
pub fn merge<P: AsRef<Path>>(file1: &P, file2: &P) -> String {
let res = Command::new("paste")
.args(&[file1.as_ref(), file2.as_ref()])
.output().expect("fail to execute paste command");
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
/// helper for ch02. 14&15
fn take(&self, n: usize, pos: &str)->String {
let res = Command::new(pos)
.args(&["-n", format!("{}", n).as_str()])
.arg(&self.path)
.output().expect("fail to execute head command");
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
/// ch02.14: `head -n ${n} ${file}`.
pub fn head(&self, n: usize) -> String {
self.take(n, "head")
}
/// ch02.15: `tail -n ${n} ${file}`.
pub fn tail(&self, n: usize) -> String {
self.take(n, "tail")
}
/// ch02.16: split the file into n pieces.
pub fn split<P: AsRef<Path>>(&self, n: usize, dst: &P) {
let size = self.count_lines().unwrap();
use ch02::util;
let lines = util::get_split_line_count(size, n);
debug!("split per {} lines", lines);
assert!(lines >0);
let res = Command::new("split")
.args(&["-l", &format!("{}", lines)])
.arg(&self.path) // src
.arg(dst.as_ref().to_str().unwrap()) // dst
.output()
.expect("fail to execute split command");
}
/// ch02.17: take the unique items of the first column.
pub fn uniq_first_row(&self) -> String {
let cutf1 = Command::new("cut")
.args(&["-f", "1"])
.arg(&self.path)
.stdout(Stdio::piped())
.spawn().unwrap();
let sort = Command::new("sort")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn().unwrap();
// note that sort and uniq cannot be swapped:
// uniq only collapses adjacent duplicate lines, so the input must be sorted first.
let mut uniq = Command::new("uniq")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn().unwrap();
let mut buf: Vec<u8> = Vec::new();
cutf1.stdout.unwrap().read_to_end(&mut buf).unwrap();
sort.stdin.unwrap().write_all(&buf).unwrap();
let mut buf: Vec<u8> = Vec::new();
sort.stdout.unwrap().read_to_end(&mut buf).unwrap();
if let Some(ref mut stdin) = uniq.stdin {
stdin.write_all(&buf).unwrap();
}
// wait_with_output(self) -> Result<Output>
let res = uniq.wait_with_output().unwrap().stdout;
String::from_utf8_lossy(&res).trim().to_string()
}
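// Shell equivalent of the three-stage pipeline above:
// `cut -f 1 ${file} | sort | uniq`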
/// ch02.18: sort by the third column in descending order,
/// i.e. `sort -r -k 3 ./data/ch02/hightemp.txt`.
pub fn sort_in_descending(&self, key: usize) -> String {
let res = Command::new("sort")
.arg("-r")
.args(&["-k", &format!("{}", key)])
.arg(&self.path)
.output().unwrap();
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
}
#[cfg(test)]
mod tests {
use super::*;
extern crate env_logger;
extern crate getopts;
extern crate glob;
use self::getopts::Options;
/// env_logger output is controlled by the RUST_LOG environment variable;
/// to debug only this module, set `RUST_LOG=natural_lang::ch02::command=debug`.
/// Before saving the file, make sure the directory exists via fs::create_dir
/// (fs::create_dir_all would be the `mkdir -p` equivalent for nested paths).
#[test]
fn test_prepare() {
use std::fs;
env_logger::init().unwrap();
let save_path = Path::new("./data/ch02/hightemp.txt");
// Success or not, ignore result
// see also https://github.com/rust-lang/rust/pull/11754#issuecomment-33202664
let _ = fs::create_dir(save_path.parent().unwrap());
let commander = Commander::new(save_path);
commander.prepare();
assert!(save_path.exists())
}
#[test]
fn test_count_lines() {
let save_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(save_path);
assert_eq!(commander.count_lines().unwrap(), 24);
}
#[test]
fn test_replace_tab_to_space() {
let save_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(save_path);
let res = commander.replace_tab_to_space();
assert_eq!(
res.lines().take(1).collect::<String>(),
"高知県 江川崎 41 2013-08-12"
)
}
#[test]
fn test_extract_row() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(load_path);
assert_eq!(
commander.extract_row(0).lines().next().unwrap(), // take first line
"高知県"
);
}
#[test]
fn test_merge() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let parent = load_path.parent().unwrap(); | debug!("{:?}", res);
assert_eq!(
(&mut res.lines()).next().unwrap(),
"高知県\t江川崎"
)
}
fn print_usage(program: &str, opts: Options) {
let brief = format!("Usage: {} FILE [options]", program);
print!("{}", opts.usage(&brief));
}
/// With `cargo test -- [<OPTIONS>]`, the test harness panics with "Unrecognized option: 'n'",
/// so the args are set directly instead of using env::args().
#[test]
fn test_head() {
// let args = env::args()::collect::<Vec<String>>();
let args = vec!["program", "-n", "5", "./data/ch02/hightemp.txt"];
let program = args[0].clone();
let mut opts = Options::new();
opts.optopt("n", "num", "set first ${num} rows", "NUMBER");
opts.optflag("h", "help", "print this help menu");
let matches = opts.parse(&args[1..]).unwrap();
if matches.opt_present("h") {
print_usage(&program, opts);
return;
}
let n = matches
.opt_str("n")
.expect("invalid number")
.parse::<usize>()
.unwrap();
let input = matches.free.first().unwrap();
let commander = Commander::new(input);
let res = commander.head(n);
assert_eq!(
res,
"高知県\t江川崎\t41\t2013-08-12\n埼玉県\t熊谷\t40.9\t2007-08-16\n\
岐阜県\t多治見\t40.9\t2007-08-16\n山形県\t山形\t40.8\t1933-07-25\n\
山梨県\t甲府\t40.7\t2013-08-10"
);
}
#[test]
fn test_tail() {
// let args = env::args()::collect::<Vec<String>>();
let args = vec!["program", "-n", "5", "./data/ch02/hightemp.txt"];
let program = args[0].clone();
let mut opts = Options::new();
opts.optopt("n", "num", "set first ${num} rows", "NUMBER");
opts.optflag("h", "help", "print this help menu");
let matches = opts.parse(&args[1..]).unwrap();
if matches.opt_present("h") {
print_usage(&program, opts);
return;
}
let n = matches
.opt_str("n")
.expect("invalid number")
.parse::<usize>()
.unwrap();
let input = matches.free.first().unwrap();
let commander = Commander::new(input);
let res = commander.tail(n);
println!("{:?}", res);
assert_eq!(
res,
"埼玉県\t鳩山\t39.9\t1997-07-05\n\
大阪府\t豊中\t39.9\t1994-08-08\n\
山梨県\t大月\t39.9\t1990-07-19\n\
山形県\t鶴岡\t39.9\t1978-08-03\n\
愛知県\t名古屋\t39.9\t1942-08-02"
);
}
#[test]
fn test_split() {
let args = vec!["program", "--line", "3", "./data/ch02/hightemp.txt", "./data/ch02/split_"];
let program = args[0].clone();
let mut opts = Options::new();
opts.optopt("l", "line", "set first ${num} rows", "NUMBER");
opts.optflag("h", "help", "print this help menu");
let matches = opts.parse(&args[1..]).unwrap();
if matches.opt_present("h") {
print_usage(&program, opts);
return;
}
let split_num = matches
.opt_str("l")
.unwrap()
.parse::<usize>()
.expect("invalid number");
let input = &matches.free[0..2];
let save_path = Path::new(&input[1]);
let commander = Commander::new(&input[0]);
//
commander.split(split_num, &input[1]);
let filename = format!("{}{}", save_path.file_name().unwrap().to_str().unwrap(), '*');
use self::glob::glob;
// check that all ok and the length of vector is equal to split_num
assert_eq!(
glob(save_path.parent().unwrap().join(&filename).to_str().unwrap())
.expect("failed to read glob pattern")
.collect::<Result<Vec<_>,_>>().unwrap().len(),
split_num
);
}
#[test]
fn test_uniq_first_row() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(load_path);
let res = commander.uniq_first_row();
assert_eq!(
res,
"千葉県\n和歌山県\n埼玉県\n大阪府\n山形県\n山梨県\n岐阜県\n愛媛県\n愛知県\n群馬県\n静岡県\n高知県"
)
}
#[test]
fn test_sort_in_descending() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(load_path);
let res = commander.sort_in_descending(3);
assert_eq!(
res.lines().take(5).collect::<Vec<&str>>(),
vec!["高知県\t江川崎\t41\t2013-08-12",
"岐阜県\t多治見\t40.9\t2007-08-16",
"埼玉県\t熊谷\t40.9\t2007-08-16",
"山形県\t山形\t40.8\t1933-07-25",
"山梨県\t甲府\t40.7\t2013-08-10"]
)
}
} |
let file1 = parent.join("col1.txt");
let file2 = parent.join("col2.txt");
let res = Commander::merge(&file1, &file2); | random_line_split |
command.rs | use std::process::{Command, Stdio};
use std::io::{Read, Write}; // `Read`/`Write` provide read_to_end/write_all on the child pipes
use std::path::Path;
pub struct Commander { path: String }
impl Commander {
pub fn new<P: AsRef<Path>>(save_path: P) -> Commander {
Commander {
path: save_path.as_ref().to_str().expect("contains invalid utf-8 character").to_owned()
}
}
/// Preparation for ch02: save the tab-separated file hightemp.txt by
/// executing `curl http://www.cl.ecei.tohoku.ac.jp/nlp100/data/hightemp.txt -o ${save_path}`.
/// The file is a record of maximum temperatures in Japan; each line is
/// composed of prefecture, location, temperature and date.
pub fn prepare(&self) {
let path = "http://www.cl.ecei.tohoku.ac.jp/nlp100/data/hightemp.txt";
let output = Command::new("curl")
.arg(path)
.args(&["-o", &self.path])
.output().expect("fail to execute process");
debug!("status: {:?}", output.status);
debug!("stdout: {}", String::from_utf8_lossy(&output.stdout));
debug!("stderr: \n{}", String::from_utf8_lossy(&output.stderr));
}
/// ch02.10: count the lines in the designated file.
pub fn count_lines(&self) -> Result<usize, ::std::num::ParseIntError> {
let output = Command::new("wc")
.arg("-l")
.arg(&self.path)
.output().expect("fail to execute process");
String::from_utf8_lossy(&output.stdout)
.as_ref()
.trim()
.split(" ")
.take(1)
.collect::<String>()
.parse::<usize>()
}
/// ch02.11: replace tabs with spaces.
pub fn replace_tab_to_space(&self) -> String {
let mut cat = Command::new("cat")
.arg(&self.path)
.stdout(Stdio::piped())
.spawn().expect("fail to execute cat command");
let mut tr = Command::new("tr")
.arg("[:blank:]")
.arg(" ")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn().expect("fail to execute tr command");
// see https://www.reddit.com/r/rust/comments/3azfie/how_to_pipe_one_process_into_another/
if let Some(ref mut stdout) = cat.stdout |
let res = tr.wait_with_output().unwrap().stdout;
String::from_utf8(res).expect("contain invalid utf-8 character")
}
/// ch02.12 (preparation): extract the n-th column (0-indexed) via `cut -f`.
pub fn extract_row(&self, n: usize) -> String {
let res = Command::new("cut")
.args(&["-f", &format!("{}", n + 1)]) // start at 0
.arg(&self.path)
.output().expect("fail to execute cut command");
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
/// ch02.13: merge two files column-wise (`paste file1 file2`).
pub fn merge<P: AsRef<Path>>(file1: &P, file2: &P) -> String {
let res = Command::new("paste")
.args(&[file1.as_ref(), file2.as_ref()])
.output().expect("fail to execute paste command");
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
/// helper for ch02. 14&15
fn take(&self, n: usize, pos: &str)->String {
let res = Command::new(pos)
.args(&["-n", format!("{}", n).as_str()])
.arg(&self.path)
.output().expect("fail to execute head command");
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
/// ch02.14: `head -n ${n} ${file}`.
pub fn head(&self, n: usize) -> String {
self.take(n, "head")
}
/// ch02.15: `tail -n ${n} ${file}`.
pub fn tail(&self, n: usize) -> String {
self.take(n, "tail")
}
/// ch02.16: split the file into n pieces.
pub fn split<P: AsRef<Path>>(&self, n: usize, dst: &P) {
let size = self.count_lines().unwrap();
use ch02::util;
let lines = util::get_split_line_count(size, n);
debug!("split per {} lines", lines);
assert!(lines >0);
let res = Command::new("split")
.args(&["-l", &format!("{}", lines)])
.arg(&self.path) // src
.arg(dst.as_ref().to_str().unwrap()) // dst
.output()
.expect("fail to execute split command");
}
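// Presumed behavior of util::get_split_line_count (its body is not shown in
// this file): ceiling division, so that n chunks cover every line.
//
// fn get_split_line_count(total: usize, n: usize) -> usize {
//     (total + n - 1) / n
// }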
/// ch02.17: take the unique items of the first column.
pub fn uniq_first_row(&self) -> String {
let cutf1 = Command::new("cut")
.args(&["-f", "1"])
.arg(&self.path)
.stdout(Stdio::piped())
.spawn().unwrap();
let sort = Command::new("sort")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn().unwrap();
// note that sort and uniq cannot be swapped:
// uniq only collapses adjacent duplicate lines, so the input must be sorted first.
let mut uniq = Command::new("uniq")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn().unwrap();
let mut buf: Vec<u8> = Vec::new();
cutf1.stdout.unwrap().read_to_end(&mut buf).unwrap();
sort.stdin.unwrap().write_all(&buf).unwrap();
let mut buf: Vec<u8> = Vec::new();
sort.stdout.unwrap().read_to_end(&mut buf).unwrap();
if let Some(ref mut stdin) = uniq.stdin {
stdin.write_all(&buf).unwrap();
}
// wait_with_output(self) -> Result<Output>
let res = uniq.wait_with_output().unwrap().stdout;
String::from_utf8_lossy(&res).trim().to_string()
}
/// ch02.18: sort by the third column in descending order,
/// i.e. `sort -r -k 3 ./data/ch02/hightemp.txt`.
pub fn sort_in_descending(&self, key: usize) -> String {
let res = Command::new("sort")
.arg("-r")
.args(&["-k", &format!("{}", key)])
.arg(&self.path)
.output().unwrap();
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
}
#[cfg(test)]
mod tests {
use super::*;
extern crate env_logger;
extern crate getopts;
extern crate glob;
use self::getopts::Options;
/// env_logger output is controlled by the RUST_LOG environment variable;
/// to debug only this module, set `RUST_LOG=natural_lang::ch02::command=debug`.
/// Before saving the file, make sure the directory exists via fs::create_dir
/// (fs::create_dir_all would be the `mkdir -p` equivalent for nested paths).
#[test]
fn test_prepare() {
use std::fs;
env_logger::init().unwrap();
let save_path = Path::new("./data/ch02/hightemp.txt");
// Success or not, ignore result
// see also https://github.com/rust-lang/rust/pull/11754#issuecomment-33202664
let _ = fs::create_dir(save_path.parent().unwrap());
let commander = Commander::new(save_path);
commander.prepare();
assert!(save_path.exists())
}
#[test]
fn test_count_lines() {
let save_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(save_path);
assert_eq!(commander.count_lines().unwrap(), 24);
}
#[test]
fn test_replace_tab_to_space() {
let save_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(save_path);
let res = commander.replace_tab_to_space();
assert_eq!(
res.lines().take(1).collect::<String>(),
"高知県 江川崎 41 2013-08-12"
)
}
#[test]
fn test_extract_row() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(load_path);
assert_eq!(
commander.extract_row(0).lines().next().unwrap(), // take first line
"高知県"
);
}
#[test]
fn test_merge() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let parent = load_path.parent().unwrap();
let file1 = parent.join("col1.txt");
let file2 = parent.join("col2.txt");
let res = Commander::merge(&file1, &file2);
debug!("{:?}", res);
assert_eq!(
(&mut res.lines()).next().unwrap(),
"高知県\t江川崎"
)
}
fn print_usage(program: &str, opts: Options) {
let brief = format!("Usage: {} FILE [options]", program);
print!("{}", opts.usage(&brief));
}
/// With `cargo test -- [<OPTIONS>]`, the test harness panics with "Unrecognized option: 'n'",
/// so the args are set directly instead of using env::args().
#[test]
fn test_head() {
// let args = env::args()::collect::<Vec<String>>();
let args = vec!["program", "-n", "5", "./data/ch02/hightemp.txt"];
let program = args[0].clone();
let mut opts = Options::new();
opts.optopt("n", "num", "set first ${num} rows", "NUMBER");
opts.optflag("h", "help", "print this help menu");
let matches = opts.parse(&args[1..]).unwrap();
if matches.opt_present("h") {
print_usage(&program, opts);
return;
}
let n = matches
.opt_str("n")
.expect("invalid number")
.parse::<usize>()
.unwrap();
let input = matches.free.first().unwrap();
let commander = Commander::new(input);
let res = commander.head(n);
assert_eq!(
res,
"高知県\t江川崎\t41\t2013-08-12\n埼玉県\t熊谷\t40.9\t2007-08-16\n\
岐阜県\t多治見\t40.9\t2007-08-16\n山形県\t山形\t40.8\t1933-07-25\n\
山梨県\t甲府\t40.7\t2013-08-10"
);
}
#[test]
fn test_tail() {
// let args = env::args()::collect::<Vec<String>>();
let args = vec!["program", "-n", "5", "./data/ch02/hightemp.txt"];
let program = args[0].clone();
let mut opts = Options::new();
opts.optopt("n", "num", "set first ${num} rows", "NUMBER");
opts.optflag("h", "help", "print this help menu");
let matches = opts.parse(&args[1..]).unwrap();
if matches.opt_present("h") {
print_usage(&program, opts);
return;
}
let n = matches
.opt_str("n")
.expect("invalid number")
.parse::<usize>()
.unwrap();
let input = matches.free.first().unwrap();
let commander = Commander::new(input);
let res = commander.tail(n);
println!("{:?}", res);
assert_eq!(
res,
"埼玉県\t鳩山\t39.9\t1997-07-05\n\
大阪府\t豊中\t39.9\t1994-08-08\n\
山梨県\t大月\t39.9\t1990-07-19\n\
山形県\t鶴岡\t39.9\t1978-08-03\n\
愛知県\t名古屋\t39.9\t1942-08-02"
);
}
#[test]
fn test_split() {
let args = vec!["program", "--line", "3", "./data/ch02/hightemp.txt", "./data/ch02/split_"];
let program = args[0].clone();
let mut opts = Options::new();
opts.optopt("l", "line", "set first ${num} rows", "NUMBER");
opts.optflag("h", "help", "print this help menu");
let matches = opts.parse(&args[1..]).unwrap();
if matches.opt_present("h") {
print_usage(&program, opts);
return;
}
let split_num = matches
.opt_str("l")
.unwrap()
.parse::<usize>()
.expect("invalid number");
let input = &matches.free[0..2];
let save_path = Path::new(&input[1]);
let commander = Commander::new(&input[0]);
//
commander.split(split_num, &input[1]);
let filename = format!("{}{}", save_path.file_name().unwrap().to_str().unwrap(), '*');
use self::glob::glob;
// check that all ok and the length of vector is equal to split_num
assert_eq!(
glob(save_path.parent().unwrap().join(&filename).to_str().unwrap())
.expect("failed to read glob pattern")
.collect::<Result<Vec<_>,_>>().unwrap().len(),
split_num
);
}
#[test]
fn test_uniq_first_row() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(load_path);
let res = commander.uniq_first_row();
assert_eq!(
res,
"千葉県\n和歌山県\n埼玉県\n大阪府\n山形県\n山梨県\n岐阜県\n愛媛県\n愛知県\n群馬県\n静岡県\n高知県"
)
}
#[test]
fn test_sort_in_descending() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(load_path);
let res = commander.sort_in_descending(3);
assert_eq!(
res.lines().take(5).collect::<Vec<&str>>(),
vec!["高知県\t江川崎\t41\t2013-08-12",
"岐阜県\t多治見\t40.9\t2007-08-16",
"埼玉県\t熊谷\t40.9\t2007-08-16",
"山形県\t山形\t40.8\t1933-07-25",
"山梨県\t甲府\t40.7\t2013-08-10"]
)
}
} | {
if let Some(ref mut stdin) = tr.stdin {
let mut buf: Vec<u8> = Vec::new();
stdout.read_to_end(&mut buf).unwrap();
stdin.write_all(&buf).unwrap();
}
} | conditional_block |
command.rs | use std::process::{Command, Stdio};
use std::io::{Read, Write}; // `Read`/`Write` provide read_to_end/write_all on the child pipes
use std::path::Path;
pub struct Commander { path: String }
impl Commander {
pub fn new<P: AsRef<Path>>(save_path: P) -> Commander {
Commander {
path: save_path.as_ref().to_str().expect("contains invalid utf-8 character").to_owned()
}
}
/// Preparation for ch02: save the tab-separated file hightemp.txt by
/// executing `curl http://www.cl.ecei.tohoku.ac.jp/nlp100/data/hightemp.txt -o ${save_path}`.
/// The file is a record of maximum temperatures in Japan; each line is
/// composed of prefecture, location, temperature and date.
pub fn prepare(&self) {
let path = "http://www.cl.ecei.tohoku.ac.jp/nlp100/data/hightemp.txt";
let output = Command::new("curl")
.arg(path)
.args(&["-o", &self.path])
.output().expect("fail to execute process");
debug!("status: {:?}", output.status);
debug!("stdout: {}", String::from_utf8_lossy(&output.stdout));
debug!("stderr: \n{}", String::from_utf8_lossy(&output.stderr));
}
/// ch02.10: count the lines in the designated file.
pub fn count_lines(&self) -> Result<usize, ::std::num::ParseIntError> {
let output = Command::new("wc")
.arg("-l")
.arg(&self.path)
.output().expect("fail to execute process");
String::from_utf8_lossy(&output.stdout)
.as_ref()
.trim()
.split(" ")
.take(1)
.collect::<String>()
.parse::<usize>()
}
/// ch02.11: replace tabs with spaces.
pub fn replace_tab_to_space(&self) -> String {
let mut cat = Command::new("cat")
.arg(&self.path)
.stdout(Stdio::piped())
.spawn().expect("fail to execute cat command");
let mut tr = Command::new("tr")
.arg("[:blank:]")
.arg(" ")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn().expect("fail to execute tr command");
// see https://www.reddit.com/r/rust/comments/3azfie/how_to_pipe_one_process_into_another/
if let Some(ref mut stdout) = cat.stdout {
if let Some(ref mut stdin) = tr.stdin {
let mut buf: Vec<u8> = Vec::new();
stdout.read_to_end(&mut buf).unwrap();
stdin.write_all(&buf).unwrap();
}
}
let res = tr.wait_with_output().unwrap().stdout;
String::from_utf8(res).expect("contain invalid utf-8 character")
}
/// ch02.12 (preparation): extract the n-th column (0-indexed) via `cut -f`.
pub fn extract_row(&self, n: usize) -> String {
let res = Command::new("cut")
.args(&["-f", &format!("{}", n + 1)]) // start at 0
.arg(&self.path)
.output().expect("fail to execute cut command");
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
/// ch02.13: merge two files column-wise (`paste file1 file2`).
pub fn merge<P: AsRef<Path>>(file1: &P, file2: &P) -> String {
let res = Command::new("paste")
.args(&[file1.as_ref(), file2.as_ref()])
.output().expect("fail to execute paste command");
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
/// helper for ch02. 14&15
fn take(&self, n: usize, pos: &str)->String {
let res = Command::new(pos)
.args(&["-n", format!("{}", n).as_str()])
.arg(&self.path)
.output().expect("fail to execute head command");
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
/// ch02.14: `head -n ${n} ${file}`.
pub fn head(&self, n: usize) -> String {
self.take(n, "head")
}
/// ch02.15: `tail -n ${n} ${file}`.
pub fn tail(&self, n: usize) -> String {
self.take(n, "tail")
}
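// Usage sketch (illustrative path):
// let c = Commander::new("./data/ch02/hightemp.txt");
// let first5 = c.head(5); // runs `head -n 5 ./data/ch02/hightemp.txt`
// let last5 = c.tail(5);  // runs `tail -n 5 ./data/ch02/hightemp.txt`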
/// ch02.16: split the file into n pieces.
pub fn split<P: AsRef<Path>>(&self, n: usize, dst: &P) {
let size = self.count_lines().unwrap();
use ch02::util;
let lines = util::get_split_line_count(size, n);
debug!("split per {} lines", lines);
assert!(lines >0);
let res = Command::new("split")
.args(&["-l", &format!("{}", lines)])
.arg(&self.path) // src
.arg(dst.as_ref().to_str().unwrap()) // dst
.output()
.expect("fail to execute split command");
}
/// ch02.17: take the unique items of the first column.
pub fn uniq_first_row(&self) -> String {
let cutf1 = Command::new("cut")
.args(&["-f", "1"])
.arg(&self.path)
.stdout(Stdio::piped())
.spawn().unwrap();
let sort = Command::new("sort")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn().unwrap();
// note that sort and uniq cannot be swapped:
// uniq only collapses adjacent duplicate lines, so the input must be sorted first.
let mut uniq = Command::new("uniq")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn().unwrap();
let mut buf: Vec<u8> = Vec::new();
cutf1.stdout.unwrap().read_to_end(&mut buf).unwrap();
sort.stdin.unwrap().write_all(&buf).unwrap();
let mut buf: Vec<u8> = Vec::new();
sort.stdout.unwrap().read_to_end(&mut buf).unwrap();
if let Some(ref mut stdin) = uniq.stdin {
stdin.write_all(&buf).unwrap();
}
// wait_with_output(self) -> Result<Output>
let res = uniq.wait_with_output().unwrap().stdout;
String::from_utf8_lossy(&res).trim().to_string()
}
/// ch02.18: sort by the third column in descending order,
/// i.e. `sort -r -k 3 ./data/ch02/hightemp.txt`.
pub fn sort_in_descending(&self, key: usize) -> String {
let res = Command::new("sort")
.arg("-r")
.args(&["-k", &format!("{}", key)])
.arg(&self.path)
.output().unwrap();
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
}
#[cfg(test)]
mod tests {
use super::*;
extern crate env_logger;
extern crate getopts;
extern crate glob;
use self::getopts::Options;
/// env_logger output is controlled by the RUST_LOG environment variable;
/// to debug only this module, set `RUST_LOG=natural_lang::ch02::command=debug`.
/// Before saving the file, make sure the directory exists via fs::create_dir
/// (fs::create_dir_all would be the `mkdir -p` equivalent for nested paths).
#[test]
fn test_prepare() {
use std::fs;
env_logger::init().unwrap();
let save_path = Path::new("./data/ch02/hightemp.txt");
// Success or not, ignore result
// see also https://github.com/rust-lang/rust/pull/11754#issuecomment-33202664
let _ = fs::create_dir(save_path.parent().unwrap());
let commander = Commander::new(save_path);
commander.prepare();
assert!(save_path.exists())
}
#[test]
fn test_count_lines() {
let save_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(save_path);
assert_eq!(commander.count_lines().unwrap(), 24);
}
#[test]
fn test_replace_tab_to_space() {
let save_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(save_path);
let res = commander.replace_tab_to_space();
assert_eq!(
res.lines().take(1).collect::<String>(),
"高知県 江川崎 41 2013-08-12"
)
}
#[test]
fn test_extract_row() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(load_path);
assert_eq!(
commander.extract_row(0).lines().next().unwrap(), // take first line
"高知県"
);
}
#[test]
fn test_merge() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let parent = load_path.parent().unwrap();
let file1 = parent.join("col1.txt");
let file2 = parent.join("col2.txt");
let res = Commander::merge(&file1, &file2);
debug!("{:?}", res);
assert_eq!(
(&mut res.lines()).next().unwrap(),
"高知県\t江川崎"
)
}
fn print_usage(program: &str, opts: Options) {
let brief = format!("Usage: {} FILE [options]", program);
print!("{}", opts.usage(&brief));
}
/// With `cargo test -- [<OPTIONS>]`, the test harness panics with "Unrecognized option: 'n'",
/// so the args are set directly instead of using env::args().
#[test]
fn test_head() {
// let args = env::args()::collect::<Vec<String>>();
let args = vec!["program", "-n", "5", "./data/ch02/hightemp.txt"];
let program = args[0].clone();
let mut opts = Options::new();
opts.optopt("n", "num", "set first ${num} rows", "NUMBER");
opts.optflag("h", "help", "print this help menu");
let matches = opts.parse(&args[1..]).unwrap();
if matches.opt_present("h") {
print_usage(&program, opts);
return;
}
let n = matches
.opt_str("n")
.expect("invalid number")
.parse::<usize>()
.unwrap();
let input = matches.free.first().unwrap();
let commander = Commander::new(input);
let res = commander.head(n);
assert_eq!(
res,
"高知県\t江川崎\t41\t2013-08-12\n埼玉県\t熊谷\t40.9\t2007-08-16\n\
岐阜県\t多治見\t40.9\t2007-08-16\n山形県\t山形\t40.8\t1933-07-25\n\
山梨県\t甲府\t40.7\t2013-08-10"
);
}
#[test]
fn test_tail() {
// let args = env::args()::collect::<Vec<String>>();
l | vec!["program", "-n", "5", "./data/ch02/hightemp.txt"];
let program = args[0].clone();
let mut opts = Options::new();
opts.optopt("n", "num", "set first ${num} rows", "NUMBER");
opts.optflag("h", "help", "print this help menu");
let matches = opts.parse(&args[1..]).unwrap();
if matches.opt_present("h") {
print_usage(&program, opts);
return;
}
let n = matches
.opt_str("n")
.expect("invalid number")
.parse::<usize>()
.unwrap();
let input = matches.free.first().unwrap();
let commander = Commander::new(input);
let res = commander.tail(n);
println!("{:?}", res);
assert_eq!(
res,
"埼玉県\t鳩山\t39.9\t1997-07-05\n\
大阪府\t豊中\t39.9\t1994-08-08\n\
山梨県\t大月\t39.9\t1990-07-19\n\
山形県\t鶴岡\t39.9\t1978-08-03\n\
愛知県\t名古屋\t39.9\t1942-08-02"
);
}
#[test]
fn test_split() {
let args = vec!["program", "--line", "3", "./data/ch02/hightemp.txt", "./data/ch02/split_"];
let program = args[0].clone();
let mut opts = Options::new();
opts.optopt("l", "line", "set first ${num} rows", "NUMBER");
opts.optflag("h", "help", "print this help menu");
let matches = opts.parse(&args[1..]).unwrap();
if matches.opt_present("h") {
print_usage(&program, opts);
return;
}
let split_num = matches
.opt_str("l")
.unwrap()
.parse::<usize>()
.expect("invalid number");
let input = &matches.free[0..2];
let save_path = Path::new(&input[1]);
let commander = Commander::new(&input[0]);
//
commander.split(split_num, &input[1]);
let filename = format!("{}{}", save_path.file_name().unwrap().to_str().unwrap(), '*');
use self::glob::glob;
// check that all ok and the length of vector is equal to split_num
assert_eq!(
glob(save_path.parent().unwrap().join(&filename).to_str().unwrap())
.expect("failed to read glob pattern")
.collect::<Result<Vec<_>,_>>().unwrap().len(),
split_num
);
}
#[test]
fn test_uniq_first_row() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(load_path);
let res = commander.uniq_first_row();
assert_eq!(
res,
"千葉県\n和歌山県\n埼玉県\n大阪府\n山形県\n山梨県\n岐阜県\n愛媛県\n愛知県\n群馬県\n静岡県\n高知県"
)
}
#[test]
fn test_sort_in_descending() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(load_path);
let res = commander.sort_in_descending(3);
assert_eq!(
res.lines().take(5).collect::<Vec<&str>>(),
vec!["高知県\t江川崎\t41\t2013-08-12",
"岐阜県\t多治見\t40.9\t2007-08-16",
"埼玉県\t熊谷\t40.9\t2007-08-16",
"山形県\t山形\t40.8\t1933-07-25",
"山梨県\t甲府\t40.7\t2013-08-10"]
)
}
} | et args = | identifier_name |
command.rs | use std::process::{Command, Stdio};
use std::io::{Read, Write}; // `Read`/`Write` provide read_to_end/write_all on the child pipes
use std::path::Path;
pub struct Commander { path: String }
impl Commander {
pub fn new<P: AsRef<Path>>(save_path: P) -> Commander {
Commander {
path: save_path.as_ref().to_str().expect("contains invalid utf-8 character").to_owned()
}
}
/// Preparation for ch02: save the tab-separated file hightemp.txt by
/// executing `curl http://www.cl.ecei.tohoku.ac.jp/nlp100/data/hightemp.txt -o ${save_path}`.
/// The file is a record of maximum temperatures in Japan; each line is
/// composed of prefecture, location, temperature and date.
pub fn prepare(&self) {
let path = "http://www.cl.ecei.tohoku.ac.jp/nlp100/data/hightemp.txt";
let output = Command::new("curl")
.arg(path)
.args(&["-o", &self.path])
.output().expect("fail to execute process");
debug!("status: {:?}", output.status);
debug!("stdout: {}", String::from_utf8_lossy(&output.stdout));
debug!("stderr: \n{}", String::from_utf8_lossy(&output.stderr));
}
/// ch02.10: count the lines in the designated file.
pub fn count_lines(&self) -> Result<usize, ::std::num::ParseIntError> {
let output = Command::new("wc")
.arg("-l")
.arg(&self.path)
.output().expect("fail to execute process");
String::from_utf8_lossy(&output.stdout)
.as_ref()
.trim()
.split(" ")
.take(1)
.collect::<String>()
.parse::<usize>()
}
/// ch02.11: replace tabs with spaces.
pub fn replace_tab_to_space(&self) -> String {
let mut cat = Command::new("cat")
.arg(&self.path)
.stdout(Stdio::piped())
.spawn().expect("fail to execute cat command");
let mut tr = Command::new("tr")
.arg("[:blank:]")
.arg(" ")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn().expect("fail to execute tr command");
// see https://www.reddit.com/r/rust/comments/3azfie/how_to_pipe_one_process_into_another/
if let Some(ref mut stdout) = cat.stdout {
if let Some(ref mut stdin) = tr.stdin {
let mut buf: Vec<u8> = Vec::new();
stdout.read_to_end(&mut buf).unwrap();
stdin.write_all(&buf).unwrap();
}
}
let res = tr.wait_with_output().unwrap().stdout;
String::from_utf8(res).expect("contain invalid utf-8 character")
}
/// ch02.12 (preparation): extract the n-th column (0-indexed) via `cut -f`.
pub fn extract_row(&self, n: usize) -> String {
let res = Command::new("cut")
.args(&["-f", &format!("{}", n + 1)]) // start at 0
.arg(&self.path)
.output().expect("fail to execute cut command");
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
/// ch02.13: merge two files column-wise (`paste file1 file2`).
pub fn merge<P: AsRef<Path>>(file1: &P, file2: &P)->String |
/// helper for ch02. 14&15
fn take(&self, n: usize, pos: &str)->String {
let res = Command::new(pos)
.args(&["-n", format!("{}", n).as_str()])
.arg(&self.path)
.output().expect("fail to execute head command");
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
/// ch02.14: `head -n ${n} ${file}`.
pub fn head(&self, n: usize) -> String {
self.take(n, "head")
}
/// ch02.15: `tail -n ${n} ${file}`.
pub fn tail(&self, n: usize) -> String {
self.take(n, "tail")
}
/// ch02.16: split the file into n pieces.
pub fn split<P: AsRef<Path>>(&self, n: usize, dst: &P) {
let size = self.count_lines().unwrap();
use ch02::util;
let lines = util::get_split_line_count(size, n);
debug!("split per {} lines", lines);
assert!(lines >0);
let res = Command::new("split")
.args(&["-l", &format!("{}", lines)])
.arg(&self.path) // src
.arg(dst.as_ref().to_str().unwrap()) // dst
.output()
.expect("fail to execute split command");
}
/// ch02.17: take the unique items of the first column.
pub fn uniq_first_row(&self) -> String {
let cutf1 = Command::new("cut")
.args(&["-f", "1"])
.arg(&self.path)
.stdout(Stdio::piped())
.spawn().unwrap();
let sort = Command::new("sort")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn().unwrap();
// note that sort and uniq cannot be swapped:
// uniq only collapses adjacent duplicate lines, so the input must be sorted first.
let mut uniq = Command::new("uniq")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn().unwrap();
let mut buf: Vec<u8> = Vec::new();
cutf1.stdout.unwrap().read_to_end(&mut buf).unwrap();
sort.stdin.unwrap().write_all(&buf).unwrap();
let mut buf: Vec<u8> = Vec::new();
sort.stdout.unwrap().read_to_end(&mut buf).unwrap();
if let Some(ref mut stdin) = uniq.stdin {
stdin.write_all(&buf).unwrap();
}
// wait_with_output(self) -> Result<Output>
let res = uniq.wait_with_output().unwrap().stdout;
String::from_utf8_lossy(&res).trim().to_string()
}
/// ch02.18: sort by the third column in descending order,
/// i.e. `sort -r -k 3 ./data/ch02/hightemp.txt`.
pub fn sort_in_descending(&self, key: usize) -> String {
let res = Command::new("sort")
.arg("-r")
.args(&["-k", &format!("{}", key)])
.arg(&self.path)
.output().unwrap();
String::from_utf8_lossy(&res.stdout).trim().to_string()
}
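// Note: `sort -r -k 3` compares keys lexicographically; a strictly numeric
// descending sort would be `sort -rn -k 3`. The two happen to agree for this
// data set, where every temperature has two integer digits.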
}
#[cfg(test)]
mod tests {
use super::*;
extern crate env_logger;
extern crate getopts;
extern crate glob;
use self::getopts::Options;
/// env_logger output is controlled by the RUST_LOG environment variable;
/// to debug only this module, set `RUST_LOG=natural_lang::ch02::command=debug`.
/// Before saving the file, make sure the directory exists via fs::create_dir
/// (fs::create_dir_all would be the `mkdir -p` equivalent for nested paths).
#[test]
fn test_prepare() {
use std::fs;
env_logger::init().unwrap();
let save_path = Path::new("./data/ch02/hightemp.txt");
// Success or not, ignore result
// see also https://github.com/rust-lang/rust/pull/11754#issuecomment-33202664
let _ = fs::create_dir(save_path.parent().unwrap());
let commander = Commander::new(save_path);
commander.prepare();
assert!(save_path.exists())
}
#[test]
fn test_count_lines() {
let save_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(save_path);
assert_eq!(commander.count_lines().unwrap(), 24);
}
#[test]
fn test_replace_tab_to_space() {
let save_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(save_path);
let res = commander.replace_tab_to_space();
assert_eq!(
res.lines().take(1).collect::<String>(),
"高知県 江川崎 41 2013-08-12"
)
}
#[test]
fn test_extract_row() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(load_path);
assert_eq!(
commander.extract_row(0).lines().next().unwrap(), // take first line
"高知県"
);
}
#[test]
fn test_merge() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let parent = load_path.parent().unwrap();
let file1 = parent.join("col1.txt");
let file2 = parent.join("col2.txt");
let res = Commander::merge(&file1, &file2);
debug!("{:?}", res);
assert_eq!(
(&mut res.lines()).next().unwrap(),
"高知県\t江川崎"
)
}
fn print_usage(program: &str, opts: Options) {
let brief = format!("Usage: {} FILE [options]", program);
print!("{}", opts.usage(&brief));
}
/// With `cargo test -- [<OPTIONS>]`, the test harness panics with "Unrecognized option: 'n'",
/// so the args are set directly instead of using env::args().
#[test]
fn test_head() {
// let args = env::args()::collect::<Vec<String>>();
let args = vec!["program", "-n", "5", "./data/ch02/hightemp.txt"];
let program = args[0].clone();
let mut opts = Options::new();
opts.optopt("n", "num", "set first ${num} rows", "NUMBER");
opts.optflag("h", "help", "print this help menu");
let matches = opts.parse(&args[1..]).unwrap();
if matches.opt_present("h") {
print_usage(&program, opts);
return;
}
let n = matches
.opt_str("n")
.expect("invalid number")
.parse::<usize>()
.unwrap();
let input = matches.free.first().unwrap();
let commander = Commander::new(input);
let res = commander.head(n);
assert_eq!(
res,
"高知県\t江川崎\t41\t2013-08-12\n埼玉県\t熊谷\t40.9\t2007-08-16\n\
岐阜県\t多治見\t40.9\t2007-08-16\n山形県\t山形\t40.8\t1933-07-25\n\
山梨県\t甲府\t40.7\t2013-08-10"
);
}
#[test]
fn test_tail() {
// let args = env::args()::collect::<Vec<String>>();
let args = vec!["program", "-n", "5", "./data/ch02/hightemp.txt"];
let program = args[0].clone();
let mut opts = Options::new();
opts.optopt("n", "num", "set first ${num} rows", "NUMBER");
opts.optflag("h", "help", "print this help menu");
let matches = opts.parse(&args[1..]).unwrap();
if matches.opt_present("h") {
print_usage(&program, opts);
return;
}
let n = matches
.opt_str("n")
.expect("invalid number")
.parse::<usize>()
.unwrap();
let input = matches.free.first().unwrap();
let commander = Commander::new(input);
let res = commander.tail(n);
println!("{:?}", res);
assert_eq!(
res,
"埼玉県\t鳩山\t39.9\t1997-07-05\n\
大阪府\t豊中\t39.9\t1994-08-08\n\
山梨県\t大月\t39.9\t1990-07-19\n\
山形県\t鶴岡\t39.9\t1978-08-03\n\
愛知県\t名古屋\t39.9\t1942-08-02"
);
}
#[test]
fn test_split() {
let args = vec!["program", "--line", "3", "./data/ch02/hightemp.txt", "./data/ch02/split_"];
let program = args[0].clone();
let mut opts = Options::new();
opts.optopt("l", "line", "set first ${num} rows", "NUMBER");
opts.optflag("h", "help", "print this help menu");
let matches = opts.parse(&args[1..]).unwrap();
if matches.opt_present("h") {
print_usage(&program, opts);
return;
}
let split_num = matches
.opt_str("l")
.unwrap()
.parse::<usize>()
.expect("invalid number");
let input = &matches.free[0..2];
let save_path = Path::new(&input[1]);
let commander = Commander::new(&input[0]);
//
commander.split(split_num, &input[1]);
let filename = format!("{}{}", save_path.file_name().unwrap().to_str().unwrap(), '*');
use self::glob::glob;
// check that all ok and the length of vector is equal to split_num
assert_eq!(
glob(save_path.parent().unwrap().join(&filename).to_str().unwrap())
.expect("failed to read glob pattern")
.collect::<Result<Vec<_>,_>>().unwrap().len(),
split_num
);
}
#[test]
fn test_uniq_first_row() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(load_path);
let res = commander.uniq_first_row();
assert_eq!(
res,
"千葉県\n和歌山県\n埼玉県\n大阪府\n山形県\n山梨県\n岐阜県\n愛媛県\n愛知県\n群馬県\n静岡県\n高知県"
)
}
#[test]
fn test_sort_in_descending() {
let load_path = Path::new("./data/ch02/hightemp.txt");
let commander = Commander::new(load_path);
let res = commander.sort_in_descending(3);
assert_eq!(
res.lines().take(5).collect::<Vec<&str>>(),
vec!["高知県\t江川崎\t41\t2013-08-12",
"岐阜県\t多治見\t40.9\t2007-08-16",
"埼玉県\t熊谷\t40.9\t2007-08-16",
"山形県\t山形\t40.8\t1933-07-25",
"山梨県\t甲府\t40.7\t2013-08-10"]
)
}
} | {
let res = Command::new("paste")
.args(&[file1.as_ref(), file2.as_ref()])
.output().expect("fail to execute paste command");
String::from_utf8_lossy(&res.stdout).trim().to_string()
} | identifier_body |
lib.rs | // Copyright 2020 Netwarps Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use libp2prs_core::PeerId;
use libp2prs_runtime::task;
use libp2prs_swarm::Control;
use serde::{Deserialize, Serialize};
use std::str::FromStr;
use tide::http::mime;
use tide::{Body, Request, Response, Server};
#[macro_use]
extern crate lazy_static;
lazy_static! {
static ref NON_PARAM_ROUTE: Vec<String> = {
vec![
"".to_string(),
"/recv".to_string(),
"/send".to_string(),
"/peer".to_string(),
"/connection".to_string(),
]
};
static ref PARAM_ROUTE: Vec<String> = vec!["/peer/_".to_string(), "/protocol?protocol_id=_".to_string()];
}
/// Response body: `message` carries error info when the status code isn't 200.
#[derive(Serialize, Deserialize)]
struct ResponseBody {
status: i64,
message: String,
result: Vec<String>,
}
/// Tide server
pub struct InfoServer {
monitor: Server<Control>,
// map: HashMap<String, IRouteHandler>,
}
/// Save package count&size
#[derive(Serialize, Deserialize)]
struct PackageInfo {
package_count: usize,
package_bytes: usize,
}
/// Save package count&size by peer_id or protocol_id
#[derive(Serialize, Deserialize)]
struct SpecInfo {
package_in: usize,
package_out: usize,
}
/// A struct that deserializes the protocol_id query parameter.
#[derive(Serialize, Deserialize, Debug)]
struct Protocol {
protocol_id: String,
}
/// A struct that serializes the peer count.
#[derive(Serialize, Deserialize, Debug)]
struct Peer {
count: usize,
}
/// Save data from network_info.
#[derive(Serialize, Deserialize, Debug)]
struct NetworkConnectionStatus {
/// The total number of connections, both established and pending.
num_connections: usize,
/// The total number of pending connections, both incoming and outgoing.
num_connections_pending: usize,
/// The total number of established connections.
num_connections_established: usize,
/// The total number of active sub streams.
num_active_streams: usize,
/// The information of all established connections.
connection_info: Vec<NetworkConnectionInfo>,
}
/// A struct that saves connection info.
#[derive(Serialize, Deserialize, Debug)]
struct NetworkConnectionInfo {
la: Vec<u8>,
ra: Vec<u8>,
local_peer_id: String,
remote_peer_id: String,
num_inbound_streams: usize,
num_outbound_streams: usize,
}
impl InfoServer {
pub fn new(control: Control) -> Self {
let mut monitor = tide::with_state(control);
monitor.at("").get(get_all);
monitor.at("/recv").get(get_recv_pkg);
monitor.at("/send").get(get_sent_pkg);
monitor.at("/protocol").get(get_protocol_info);
monitor.at("/peer").get(get_peer_count).at("/:peer_id").get(get_peer_info);
monitor.at("/connection").get(get_connection_info);
InfoServer { monitor }
}
pub fn start(self, addr: String) {
task::spawn(async move {
let r = self.monitor.listen(addr).await;
log::info!("Info server started result={:?}", r);
});
}
}
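// Usage sketch (the swarm `Control` handle is assumed to come from elsewhere):
// let server = InfoServer::new(control);
// server.start("127.0.0.1:8080".to_string());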
/// Return route list
async fn | (req: Request<Control>) -> tide::Result {
let addr = req.local_addr().unwrap();
let mut available = "<h3>Available Endpoints:</h3></br>".to_string();
for item in NON_PARAM_ROUTE.iter() {
let route = addr.to_owned() + item;
available = available + &format!("<a href=//{}>{}</a></br>", route, route);
}
let mut argument = "<h3>Endpoints that require arguments:</h3></br>".to_string();
for item in PARAM_ROUTE.iter() {
let route = addr.to_owned() + item;
argument += &format!("<a href=//{}>{}</a></br>", route, route);
}
let res_body =
"<head><link rel=\"icon\" href=\"data:;base64,=\"></head>".to_string() + "<body>" + &available + &argument + "</body>";
let response = Response::builder(200).content_type(mime::HTML).body(res_body).build();
Ok(response)
}
/// Get peer count
async fn get_peer_count(req: Request<Control>) -> tide::Result {
let mut control = req.state().clone();
let network_info = control.retrieve_networkinfo().await.map_err(|e| {
log::error!("{:?}", e);
tide::Error::new(500, e)
})?;
let peer = serde_json::to_string(&Peer {
count: network_info.num_peers,
})
.unwrap();
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![peer],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
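// Illustrative /peer response body (values made up):
// {"status":0,"message":"","result":["{\"count\":3}"]}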
/// Get connection info
async fn get_connection_info(req: Request<Control>) -> tide::Result {
let mut control = req.state().clone();
let network_info = control.retrieve_networkinfo().await.map_err(|e| {
log::error!("{:?}", e);
tide::Error::new(500, e)
})?;
let cis = control.dump_connections(None).await.map_err(|e| {
log::error!("{:?}", e);
tide::Error::new(500, e)
})?;
let mut connection_info = Vec::new();
for item in cis {
let info = NetworkConnectionInfo {
la: item.info.la.to_vec(),
ra: item.info.ra.to_vec(),
local_peer_id: item.info.local_peer_id.to_string(),
remote_peer_id: item.info.remote_peer_id.to_string(),
num_inbound_streams: item.info.num_inbound_streams,
num_outbound_streams: item.info.num_outbound_streams,
};
connection_info.push(info);
}
let network_connection_status = NetworkConnectionStatus {
num_connections: network_info.num_connections,
num_connections_pending: network_info.num_connections_pending,
num_connections_established: network_info.num_connections_established,
num_active_streams: network_info.num_active_streams,
connection_info,
};
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![serde_json::to_string(&network_connection_status).unwrap()],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get received package counts and bytes
async fn get_recv_pkg(req: Request<Control>) -> tide::Result {
let (package_count, package_bytes) = req.state().get_recv_count_and_size();
let package = PackageInfo {
package_count,
package_bytes,
};
let result_body = Body::from_json(&package)?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get sent package counts and bytes
async fn get_sent_pkg(req: Request<Control>) -> tide::Result {
let (package_count, package_bytes) = req.state().get_sent_count_and_size();
let package = PackageInfo {
package_count,
package_bytes,
};
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![serde_json::to_string(&package).unwrap()],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get sent&received package bytes by protocol_id
async fn get_protocol_info(req: Request<Control>) -> tide::Result {
let protocol: Protocol = req.query()?;
let (receive, send) = req.state().get_protocol_in_and_out(&protocol.protocol_id);
let mut spec_info = SpecInfo {
package_in: 0,
package_out: 0,
};
if let Some(value) = receive {
spec_info.package_in = value
}
if let Some(value) = send {
spec_info.package_out = value
}
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![serde_json::to_string(&spec_info).unwrap()],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get sent&received package bytes by peer_id
async fn get_peer_info(req: Request<Control>) -> tide::Result {
let peer = req.param("peer_id")?;
let peer_id = match PeerId::from_str(peer) {
Ok(info) => info,
Err(e) => {
let err_body = Body::from_json(&ResponseBody {
status: 1,
message: format!("Cannot parse : {:?}", e),
result: vec![],
})?;
return Ok(Response::builder(400).body(err_body).build());
}
};
let (receive, send) = req.state().get_peer_in_and_out(&peer_id);
let mut spec_info = SpecInfo {
package_in: 0,
package_out: 0,
};
if let Some(value) = receive {
spec_info.package_in = value
}
if let Some(value) = send {
spec_info.package_out = value
}
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![serde_json::to_string(&spec_info).unwrap()],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
| get_all | identifier_name |
lib.rs | // Copyright 2020 Netwarps Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use libp2prs_core::PeerId;
use libp2prs_runtime::task;
use libp2prs_swarm::Control;
use serde::{Deserialize, Serialize};
use std::str::FromStr;
use tide::http::mime;
use tide::{Body, Request, Response, Server};
#[macro_use]
extern crate lazy_static;
lazy_static! {
static ref NON_PARAM_ROUTE: Vec<String> = {
vec![
"".to_string(),
"/recv".to_string(),
"/send".to_string(),
"/peer".to_string(),
"/connection".to_string(),
]
};
static ref PARAM_ROUTE: Vec<String> = vec!["/peer/_".to_string(), "/protocol?protocol_id=_".to_string()];
}
/// Response body: `message` carries error info when the status code isn't 200.
#[derive(Serialize, Deserialize)]
struct ResponseBody {
status: i64,
message: String,
result: Vec<String>,
}
/// Tide server
pub struct InfoServer {
monitor: Server<Control>,
// map: HashMap<String, IRouteHandler>,
}
/// Save package count&size
#[derive(Serialize, Deserialize)]
struct PackageInfo {
package_count: usize,
package_bytes: usize,
}
/// Save package count&size by peer_id or protocol_id
#[derive(Serialize, Deserialize)]
struct SpecInfo {
package_in: usize,
package_out: usize,
}
/// A struct that deserializes the protocol_id.
#[derive(Serialize, Deserialize, Debug)]
struct Protocol {
protocol_id: String,
}
/// A struct that serializes the peer count.
#[derive(Serialize, Deserialize, Debug)]
struct Peer {
count: usize,
}
/// Save data from network_info.
#[derive(Serialize, Deserialize, Debug)]
struct NetworkConnectionStatus {
/// The total number of connections, both established and pending.
num_connections: usize,
/// The total number of pending connections, both incoming and outgoing.
num_connections_pending: usize,
/// The total number of established connections.
num_connections_established: usize,
/// The total number of active sub streams.
num_active_streams: usize,
/// The information of all established connections.
connection_info: Vec<NetworkConnectionInfo>,
}
/// A struct that saves connection info.
#[derive(Serialize, Deserialize, Debug)]
struct NetworkConnectionInfo {
la: Vec<u8>,
ra: Vec<u8>,
local_peer_id: String,
remote_peer_id: String,
num_inbound_streams: usize,
num_outbound_streams: usize,
}
impl InfoServer {
pub fn new(control: Control) -> Self {
let mut monitor = tide::with_state(control);
monitor.at("").get(get_all);
monitor.at("/recv").get(get_recv_pkg);
monitor.at("/send").get(get_sent_pkg);
monitor.at("/protocol").get(get_protocol_info);
monitor.at("/peer").get(get_peer_count).at("/:peer_id").get(get_peer_info);
monitor.at("/connection").get(get_connection_info);
InfoServer { monitor }
}
pub fn start(self, addr: String) {
task::spawn(async move {
let r = self.monitor.listen(addr).await;
log::info!("Info server started result={:?}", r);
});
}
}
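// A minimal sketch (not part of the original file) showing how the server is
// meant to be wired up: build it from a swarm `Control` and spawn it on an
// address. The listen address below is an arbitrary assumption.
#[allow(dead_code)]
fn example_spawn_info_server(control: Control) {
    let server = InfoServer::new(control);
    server.start("127.0.0.1:8080".to_string());
}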
/// Return route list
async fn get_all(req: Request<Control>) -> tide::Result {
let addr = req.local_addr().unwrap();
let mut available = "<h3>Available Endpoints:</h3></br>".to_string();
for item in NON_PARAM_ROUTE.iter() {
let route = addr.to_owned() + item;
available = available + &format!("<a href=//{}>{}</a></br>", route, route);
}
let mut argument = "<h3>Endpoints that require arguments:</h3></br>".to_string();
for item in PARAM_ROUTE.iter() {
let route = addr.to_owned() + item;
argument += &format!("<a href=//{}>{}</a></br>", route, route);
}
let res_body =
"<head><link rel=\"icon\" href=\"data:;base64,=\"></head>".to_string() + "<body>" + &available + &argument + "</body>";
let response = Response::builder(200).content_type(mime::HTML).body(res_body).build();
Ok(response)
}
/// Get peer count
async fn get_peer_count(req: Request<Control>) -> tide::Result {
let mut control = req.state().clone();
let network_info = control.retrieve_networkinfo().await.map_err(|e| {
log::error!("{:?}", e);
tide::Error::new(500, e)
})?;
let peer = serde_json::to_string(&Peer {
count: network_info.num_peers,
})
.unwrap();
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![peer],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get connection info
async fn get_connection_info(req: Request<Control>) -> tide::Result {
let mut control = req.state().clone();
let network_info = control.retrieve_networkinfo().await.map_err(|e| {
log::error!("{:?}", e);
tide::Error::new(500, e)
})?;
let cis = control.dump_connections(None).await.map_err(|e| {
log::error!("{:?}", e);
tide::Error::new(500, e)
})?;
let mut connection_info = Vec::new();
for item in cis {
let info = NetworkConnectionInfo {
la: item.info.la.to_vec(),
ra: item.info.ra.to_vec(),
local_peer_id: item.info.local_peer_id.to_string(),
remote_peer_id: item.info.remote_peer_id.to_string(),
num_inbound_streams: item.info.num_inbound_streams,
num_outbound_streams: item.info.num_outbound_streams,
};
connection_info.push(info);
}
let network_connection_status = NetworkConnectionStatus {
num_connections: network_info.num_connections,
num_connections_pending: network_info.num_connections_pending,
num_connections_established: network_info.num_connections_established,
num_active_streams: network_info.num_active_streams,
connection_info,
};
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![serde_json::to_string(&network_connection_status).unwrap()],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get received package counts and bytes
async fn get_recv_pkg(req: Request<Control>) -> tide::Result {
let (package_count, package_bytes) = req.state().get_recv_count_and_size();
let package = PackageInfo {
package_count,
package_bytes,
};
let result_body = Body::from_json(&package)?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get sent package counts and bytes
async fn get_sent_pkg(req: Request<Control>) -> tide::Result {
let (package_count, package_bytes) = req.state().get_sent_count_and_size();
let package = PackageInfo {
package_count,
package_bytes,
};
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![serde_json::to_string(&package).unwrap()],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get sent&received package bytes by protocol_id
async fn get_protocol_info(req: Request<Control>) -> tide::Result {
let protocol: Protocol = req.query()?;
let (receive, send) = req.state().get_protocol_in_and_out(&protocol.protocol_id);
let mut spec_info = SpecInfo {
package_in: 0,
package_out: 0,
};
if let Some(value) = receive {
spec_info.package_in = value
}
if let Some(value) = send |
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![serde_json::to_string(&spec_info).unwrap()],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get sent&received package bytes by peer_id
async fn get_peer_info(req: Request<Control>) -> tide::Result {
let peer = req.param("peer_id")?;
let peer_id = match PeerId::from_str(peer) {
Ok(info) => info,
Err(e) => {
let err_body = Body::from_json(&ResponseBody {
status: 1,
message: format!("Cannot parse : {:?}", e),
result: vec![],
})?;
return Ok(Response::builder(400).body(err_body).build());
}
};
let (receive, send) = req.state().get_peer_in_and_out(&peer_id);
let mut spec_info = SpecInfo {
package_in: 0,
package_out: 0,
};
if let Some(value) = receive {
spec_info.package_in = value
}
if let Some(value) = send {
spec_info.package_out = value
}
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![serde_json::to_string(&spec_info).unwrap()],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
| {
spec_info.package_out = value
} | conditional_block |
lib.rs | // Copyright 2020 Netwarps Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use libp2prs_core::PeerId;
use libp2prs_runtime::task;
use libp2prs_swarm::Control;
use serde::{Deserialize, Serialize}; | extern crate lazy_static;
lazy_static! {
static ref NON_PARAM_ROUTE: Vec<String> = {
vec![
"".to_string(),
"/recv".to_string(),
"/send".to_string(),
"/peer".to_string(),
"/connection".to_string(),
]
};
static ref PARAM_ROUTE: Vec<String> = vec!["/peer/_".to_string(), "/protocol?protocol_id=_".to_string()];
}
/// Response body; `message` contains error info if the status code isn't 200.
#[derive(Serialize, Deserialize)]
struct ResponseBody {
status: i64,
message: String,
result: Vec<String>,
}
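// Illustrative only: with the fields above, a successful response serializes
// to JSON along the lines of
//
//     {"status":0,"message":"","result":["{\"package_in\":1,\"package_out\":2}"]}
//
// i.e. `result` carries JSON-encoded payloads as plain strings.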
/// Tide server
pub struct InfoServer {
monitor: Server<Control>,
// map: HashMap<String, IRouteHandler>,
}
/// Save package count&size
#[derive(Serialize, Deserialize)]
struct PackageInfo {
package_count: usize,
package_bytes: usize,
}
/// Save package count&size by peer_id or protocol_id
#[derive(Serialize, Deserialize)]
struct SpecInfo {
package_in: usize,
package_out: usize,
}
/// A struct that deserializes the protocol_id.
#[derive(Serialize, Deserialize, Debug)]
struct Protocol {
protocol_id: String,
}
/// A struct that serializes the peer count.
#[derive(Serialize, Deserialize, Debug)]
struct Peer {
count: usize,
}
/// Save data from network_info.
#[derive(Serialize, Deserialize, Debug)]
struct NetworkConnectionStatus {
/// The total number of connections, both established and pending.
num_connections: usize,
/// The total number of pending connections, both incoming and outgoing.
num_connections_pending: usize,
/// The total number of established connections.
num_connections_established: usize,
/// The total number of active sub streams.
num_active_streams: usize,
/// The information of all established connections.
connection_info: Vec<NetworkConnectionInfo>,
}
/// A struct that saves connection info.
#[derive(Serialize, Deserialize, Debug)]
struct NetworkConnectionInfo {
la: Vec<u8>,
ra: Vec<u8>,
local_peer_id: String,
remote_peer_id: String,
num_inbound_streams: usize,
num_outbound_streams: usize,
}
impl InfoServer {
pub fn new(control: Control) -> Self {
let mut monitor = tide::with_state(control);
monitor.at("").get(get_all);
monitor.at("/recv").get(get_recv_pkg);
monitor.at("/send").get(get_sent_pkg);
monitor.at("/protocol").get(get_protocol_info);
monitor.at("/peer").get(get_peer_count).at("/:peer_id").get(get_peer_info);
monitor.at("/connection").get(get_connection_info);
InfoServer { monitor }
}
pub fn start(self, addr: String) {
task::spawn(async move {
let r = self.monitor.listen(addr).await;
log::info!("Info server started result={:?}", r);
});
}
}
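// Usage sketch (address and protocol id are assumptions, not part of the
// original file): once `start` is called, the endpoints registered in `new`
// can be queried over plain HTTP, e.g. with curl:
//
//     curl http://127.0.0.1:8080/recv
//     curl http://127.0.0.1:8080/connection
//     curl "http://127.0.0.1:8080/protocol?protocol_id=/my/proto/1.0.0"
//     curl http://127.0.0.1:8080/peer/<peer_id>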
/// Return route list
async fn get_all(req: Request<Control>) -> tide::Result {
let addr = req.local_addr().unwrap();
let mut available = "<h3>Available Endpoints:</h3></br>".to_string();
for item in NON_PARAM_ROUTE.iter() {
let route = addr.to_owned() + item;
available = available + &format!("<a href=//{}>{}</a></br>", route, route);
}
let mut argument = "<h3>Endpoints that require arguments:</h3></br>".to_string();
for item in PARAM_ROUTE.iter() {
let route = addr.to_owned() + item;
argument += &format!("<a href=//{}>{}</a></br>", route, route);
}
let res_body =
"<head><link rel=\"icon\" href=\"data:;base64,=\"></head>".to_string() + "<body>" + &available + &argument + "</body>";
let response = Response::builder(200).content_type(mime::HTML).body(res_body).build();
Ok(response)
}
/// Get peer count
async fn get_peer_count(req: Request<Control>) -> tide::Result {
let mut control = req.state().clone();
let network_info = control.retrieve_networkinfo().await.map_err(|e| {
log::error!("{:?}", e);
tide::Error::new(500, e)
})?;
let peer = serde_json::to_string(&Peer {
count: network_info.num_peers,
})
.unwrap();
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![peer],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get connection info
async fn get_connection_info(req: Request<Control>) -> tide::Result {
let mut control = req.state().clone();
let network_info = control.retrieve_networkinfo().await.map_err(|e| {
log::error!("{:?}", e);
tide::Error::new(500, e)
})?;
let cis = control.dump_connections(None).await.map_err(|e| {
log::error!("{:?}", e);
tide::Error::new(500, e)
})?;
let mut connection_info = Vec::new();
for item in cis {
let info = NetworkConnectionInfo {
la: item.info.la.to_vec(),
ra: item.info.ra.to_vec(),
local_peer_id: item.info.local_peer_id.to_string(),
remote_peer_id: item.info.remote_peer_id.to_string(),
num_inbound_streams: item.info.num_inbound_streams,
num_outbound_streams: item.info.num_outbound_streams,
};
connection_info.push(info);
}
let network_connection_status = NetworkConnectionStatus {
num_connections: network_info.num_connections,
num_connections_pending: network_info.num_connections_pending,
num_connections_established: network_info.num_connections_established,
num_active_streams: network_info.num_active_streams,
connection_info,
};
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![serde_json::to_string(&network_connection_status).unwrap()],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get received package counts and bytes
async fn get_recv_pkg(req: Request<Control>) -> tide::Result {
let (package_count, package_bytes) = req.state().get_recv_count_and_size();
let package = PackageInfo {
package_count,
package_bytes,
};
let result_body = Body::from_json(&package)?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get sent package counts and bytes
async fn get_sent_pkg(req: Request<Control>) -> tide::Result {
let (package_count, package_bytes) = req.state().get_sent_count_and_size();
let package = PackageInfo {
package_count,
package_bytes,
};
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![serde_json::to_string(&package).unwrap()],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get sent&received package bytes by protocol_id
async fn get_protocol_info(req: Request<Control>) -> tide::Result {
let protocol: Protocol = req.query()?;
let (receive, send) = req.state().get_protocol_in_and_out(&protocol.protocol_id);
let mut spec_info = SpecInfo {
package_in: 0,
package_out: 0,
};
if let Some(value) = receive {
spec_info.package_in = value
}
if let Some(value) = send {
spec_info.package_out = value
}
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![serde_json::to_string(&spec_info).unwrap()],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
}
/// Get sent&received package bytes by peer_id
async fn get_peer_info(req: Request<Control>) -> tide::Result {
let peer = req.param("peer_id")?;
let peer_id = match PeerId::from_str(peer) {
Ok(info) => info,
Err(e) => {
let err_body = Body::from_json(&ResponseBody {
status: 1,
message: format!("Cannot parse : {:?}", e),
result: vec![],
})?;
return Ok(Response::builder(400).body(err_body).build());
}
};
let (receive, send) = req.state().get_peer_in_and_out(&peer_id);
let mut spec_info = SpecInfo {
package_in: 0,
package_out: 0,
};
if let Some(value) = receive {
spec_info.package_in = value
}
if let Some(value) = send {
spec_info.package_out = value
}
let result_body = Body::from_json(&ResponseBody {
status: 0,
message: "".to_string(),
result: vec![serde_json::to_string(&spec_info).unwrap()],
})?;
let response = Response::builder(200).body(result_body).build();
Ok(response)
} | use std::str::FromStr;
use tide::http::mime;
use tide::{Body, Request, Response, Server};
#[macro_use] | random_line_split |
linux.rs | use std::ffi::OsStr;
use std::io;
use std::os::unix::io::AsRawFd;
use std::path::Path;
use nix::sys::signal::{Signal};
use crate::ffi_util::ToCString;
use crate::{Command, Namespace};
use crate::idmap::{UidMap, GidMap};
use crate::stdio::dup_file_cloexec;
use crate::namespace::to_clone_flag;
use crate::caps::Capability;
impl Command {
/// Allow the child process to daemonize. By default we run the equivalent of
/// `set_parent_death_signal(SIGKILL)`. See the `set_parent_death_signal`
/// for better explanation.
pub fn allow_daemonize(&mut self) -> &mut Command {
self.config.death_sig = None;
self
}
/// Set a signal that is sent to the process when its parent dies.
/// This is set to `SIGKILL` by default, and you should keep it that way
/// unless you know what you are doing.
///
/// Particularly you should consider the following choices:
///
/// 1. Instead of setting ``PDEATHSIG`` to some other signal, send signal
/// yourself and wait until child gracefully finishes.
///
/// 2. Instead of daemonizing use ``systemd``/``upstart``/whatever system
/// init script to run your service
///
/// Another issue with this option is that it works only with the immediate
/// child. To better control all descendant processes you may need the
/// following:
///
/// 1. The `prctl(PR_SET_CHILD_SUBREAPER..)` in parent which allows to
/// "catch" descendant processes.
///
/// 2. The pid namespaces
///
/// The former is out of scope of this library. The latter works by
/// ``cmd.unshare(&[Namespace::Pid])``, but you may need to set up mount points
/// and other important things (which are out of scope too).
///
/// To reset this behavior use ``allow_daemonize()``.
///
pub fn set_parent_death_signal(&mut self, sig: Signal) -> &mut Command {
self.config.death_sig = Some(sig);
self
}
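// A hedged usage sketch (hypothetical, not part of the original file):
//
//     use nix::sys::signal::Signal;
//     let mut cmd = unshare::Command::new("/bin/sleep");
//     cmd.set_parent_death_signal(Signal::SIGTERM);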
/// Set the chroot dir. Only absolute paths are supported
///
/// This method has a non-standard security feature: even if current_dir
/// is unspecified we set it to the directory inside the new root dir.
/// see more details in the description of `Command::current_dir`.
///
/// Note that if both chroot dir and pivot_root are specified, the chroot dir
/// is applied after pivot root. If the chroot dir is relative, it's relative
/// to either the suffix of the current directory with the pivot dir stripped
/// off, or the pivot dir itself (if the old workdir is not prefixed by the
/// pivot dir)
///
/// # Panics
///
/// If directory is not absolute
pub fn chroot_dir<P: AsRef<Path>>(&mut self, dir: P) -> &mut Command
{
let dir = dir.as_ref();
if !dir.is_absolute() {
panic!("Chroot dir must be absolute");
}
self.chroot_dir = Some(dir.to_path_buf());
self
}
/// Moves the root of the file system to the directory `put_old` and
/// makes `new_root` the new root file system. It also optionally
/// unmounts the `new_root` mount point after moving the root (but it must
/// exist anyway).
///
/// The documentation says that `put_old` must be underneath the
/// `new_root`. Currently we have a restriction that both must be absolute
/// and `new_root` must be a prefix of `put_old`, but we may lift it later.
///
/// **Warning** if you don't unshare the mount namespace you will get
/// moved filesystem root for *all processes running in that namespace*
/// including parent (currently running) process itself. If you don't
/// run equivalent to ``mount --make-private`` for the old root filesystem
/// and set ``unmount`` to true, you may get unmounted filesystem for
/// running processes too.
///
/// See `man 2 pivot_root` for further details
///
/// Note that if both chroot dir and pivot_root are specified, the chroot dir
/// is applied after pivot root.
///
/// # Panics
///
/// Panics if either path is not absolute or new_root is not a prefix of
/// put_old.
pub fn pivot_root<A: AsRef<Path>, B:AsRef<Path>>(&mut self,
new_root: A, put_old: B, unmount: bool)
-> &mut Command
{
let new_root = new_root.as_ref();
let put_old = put_old.as_ref();
if !new_root.is_absolute() {
panic!("New root must be absolute");
}
if !put_old.is_absolute() {
panic!("The `put_old` dir must be absolute");
}
let mut old_cmp = put_old.components();
for (n, o) in new_root.components().zip(old_cmp.by_ref()) {
if n != o {
panic!("The new_root is not a prefix of put old");
}
}
self.pivot_root = Some((new_root.to_path_buf(), put_old.to_path_buf(),
unmount));
self
}
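// Usage sketch: a typical call unshares the mount namespace first so the
// pivot only affects the child. Both paths are illustrative assumptions:
//
//     cmd.unshare(&[Namespace::Mount]);
//     cmd.pivot_root("/srv/rootfs", "/srv/rootfs/old_root", true);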
/// Unshare given namespaces
///
/// Note: each namespace has some consequences for how the new process will
/// work; some of them are described in the `Namespace` type documentation.
pub fn unshare<'x>(&mut self, iter: impl IntoIterator<Item=&'x Namespace>)
-> &mut Command
|
/// Reassociate child process with a namespace specified by a file
/// descriptor
///
/// `file` argument is an open file referring to a namespace
///
/// 'ns' is a namespace type
///
/// See `man 2 setns` for further details
///
/// Note: using `unshare` and `setns` for the same namespace is meaningless.
pub fn set_namespace<F: AsRawFd>(&mut self, file: &F, ns: Namespace)
-> io::Result<&mut Command>
{
let fd = dup_file_cloexec(file)?;
self.config.setns_namespaces.insert(ns, fd);
Ok(self)
}
/// Sets user id and group id mappings for new process
///
/// This automatically enables `User` namespace. You should also set `uid`
/// and `gid` with respective methods for the new process.
///
/// Note there are basically two ways to enable id maps:
///
/// 1. Write them directly
/// 2. Invoke a `newuidmap`, `newgidmap` commands
///
/// First option works either if current process is root or if resulting
/// map only contains current user in the mapping.
///
/// The library will not try to guess the behavior. By default it will
/// write directly. You need to call the `set_id_map_commands` when you
/// want non-default behavior.
///
/// See `man 7 user_namespaces` for more info
pub fn set_id_maps(&mut self, uid_map: Vec<UidMap>, gid_map: Vec<GidMap>)
-> &mut Command
{
self.unshare(&[Namespace::User]);
self.config.id_maps = Some((uid_map, gid_map));
self
}
/// Set path to command-line utilities for writing uid/gid maps
///
/// The utilities provided must obey the same interface as `newuidmap` and
/// `newgidmap` from the `shadow` (or sometimes `uidmap`) package. To get it
/// working you usually need to setup `/etc/subuid` and `/etc/subgid`
/// files.
///
/// See `man 1 newuidmap`, `man 1 newgidmap` for details
///
/// This method is a no-op unless `set_id_maps` is called.
pub fn set_id_map_commands<A: AsRef<Path>, B: AsRef<Path>>(&mut self,
newuidmap: A, newgidmap: B)
-> &mut Command
{
self.id_map_commands = Some((
newuidmap.as_ref().to_path_buf(),
newgidmap.as_ref().to_path_buf()));
self
}
/// Keep the signal mask intact after executing the child; ignored signals
/// are kept as well
///
/// By default signal mask is empty and all signals are reset to the
/// `SIG_DFL` value right before `execve()` syscall.
///
/// This is only useful if the started process is aware of the issue and sets
/// sigmasks to some reasonable value. When used wisely it may avoid some
/// race conditions when a signal is sent after the child is cloned but before
/// the child has been able to establish its state.
pub fn keep_sigmask(&mut self) -> &mut Command {
self.config.restore_sigmask = false;
self
}
/// Set the argument zero for the process
///
/// By default argument zero is the same as the path to the program to run. You
/// may set it to a short name of the command or to something else to
/// pretend there is a symlink to a program (for example to run `gzip` as
/// `gunzip`).
pub fn arg0<S: AsRef<OsStr>>(&mut self, arg: S) -> &mut Command {
self.args[0] = arg.to_cstring();
self
}
/// Makes child process a group leader
///
/// If the child process is being launched as a foreground job,
/// the child process group needs to be put into the foreground on
/// the controlling terminal using `tcsetpgrp`. To request status
/// information from stopped child process you should call `waitpid` with
/// `WUNTRACED` flag. And then check status with `WIFSTOPPED` macro.
/// After giving child process group access to the controlling terminal
/// you should send the SIGCONT signal to the child process group.
pub fn make_group_leader(&mut self, make_group_leader: bool) -> &mut Command {
self.config.make_group_leader = make_group_leader;
self
}
/// Inserts a magic environment variable that will contain the pid of the spawned
/// process
///
/// This is usually needed to avoid accidental propagation of the
/// environment variables targeted only at this specific process.
///
/// # Example
///
/// This is how you can encode [systemd activation] protocol:
///
/// ```rust,ignore
/// cmd.env_var_with_pid("LISTEN_PID");
/// cmd.env("LISTEN_FDS", "1");
/// ```
///
/// [systemd activation]: https://www.freedesktop.org/software/systemd/man/sd_listen_fds.html
pub fn env_var_with_pid<K>(&mut self, key: K) -> &mut Command
where K: AsRef<OsStr>,
{
self.init_env_map();
self.environ.as_mut().unwrap().remove(key.as_ref());
self.pid_env_vars.insert(key.as_ref().to_os_string());
self
}
/// Drop all capabilities, but keep only the ones set by this setter
///
/// This method sets three or four sets of capabilities:
/// * Permitted
/// * Inherited
/// * Effective
/// * Ambient (if supported)
///
/// This works both when the uid changes (from 0 to another uid) and when it
/// isn't changed, but requires process to have all capabilities
/// granted by this method.
///
/// This method replaces whole capability mask on each invocation
pub fn keep_caps<'x>(&mut self,
caps: impl IntoIterator<Item=&'x Capability>)
{
let mut buf = [0u32; 2];
for item in caps {
let item = *item as u32;
buf[(item >> 5) as usize] |= 1 << (item & 31);
}
self.keep_caps = Some(buf);
}
}
| {
for ns in iter {
self.config.namespaces |= to_clone_flag(*ns);
}
self
} | identifier_body |
linux.rs | use std::ffi::OsStr;
use std::io;
use std::os::unix::io::AsRawFd;
use std::path::Path;
use nix::sys::signal::{Signal};
use crate::ffi_util::ToCString;
use crate::{Command, Namespace};
use crate::idmap::{UidMap, GidMap};
use crate::stdio::dup_file_cloexec;
use crate::namespace::to_clone_flag;
use crate::caps::Capability;
impl Command {
/// Allow the child process to daemonize. By default we run the equivalent of
/// `set_parent_death_signal(SIGKILL)`. See the `set_parent_death_signal`
/// for better explanation.
pub fn allow_daemonize(&mut self) -> &mut Command {
self.config.death_sig = None;
self
}
/// Set a signal that is sent to the process when its parent dies.
/// This is set to `SIGKILL` by default, and you should keep it that way
/// unless you know what you are doing.
///
/// Particularly you should consider the following choices:
///
/// 1. Instead of setting ``PDEATHSIG`` to some other signal, send signal
/// yourself and wait until child gracefully finishes.
///
/// 2. Instead of daemonizing use ``systemd``/``upstart``/whatever system
/// init script to run your service
///
/// Another issue with this option is that it works only with the immediate
/// child. To better control all descendant processes you may need the
/// following:
///
/// 1. The `prctl(PR_SET_CHILD_SUBREAPER..)` in parent which allows to
/// "catch" descendant processes.
///
/// 2. The pid namespaces
///
/// The former is out of scope of this library. The latter works by
/// ``cmd.unshare(&[Namespace::Pid])``, but you may need to set up mount points
/// and other important things (which are out of scope too).
///
/// To reset this behavior use ``allow_daemonize()``.
///
pub fn set_parent_death_signal(&mut self, sig: Signal) -> &mut Command {
self.config.death_sig = Some(sig);
self
}
/// Set the chroot dir. Only absolute paths are supported
///
/// This method has a non-standard security feature: even if current_dir
/// is unspecified we set it to the directory inside the new root dir.
/// see more details in the description of `Command::current_dir`.
///
/// Note that if both chroot dir and pivot_root are specified, the chroot dir
/// is applied after pivot root. If the chroot dir is relative, it's relative
/// to either the suffix of the current directory with the pivot dir stripped
/// off, or the pivot dir itself (if the old workdir is not prefixed by the
/// pivot dir)
///
/// # Panics
///
/// If directory is not absolute
pub fn chroot_dir<P: AsRef<Path>>(&mut self, dir: P) -> &mut Command
{
let dir = dir.as_ref();
if !dir.is_absolute() {
panic!("Chroot dir must be absolute");
}
self.chroot_dir = Some(dir.to_path_buf());
self
}
/// Moves the root of the file system to the directory `put_old` and
/// makes `new_root` the new root file system. It also optionally
/// unmounts the `new_root` mount point after moving the root (but it must
/// exist anyway).
///
/// The documentation says that `put_old` must be underneath the
/// `new_root`. Currently we have a restriction that both must be absolute
/// and `new_root` must be a prefix of `put_old`, but we may lift it later.
///
/// **Warning** if you don't unshare the mount namespace you will get
/// moved filesystem root for *all processes running in that namespace*
/// including parent (currently running) process itself. If you don't
/// run equivalent to ``mount --make-private`` for the old root filesystem
/// and set ``unmount`` to true, you may get unmounted filesystem for
/// running processes too.
///
/// See `man 2 pivot_root` for further details
///
/// Note that if both chroot dir and pivot_root are specified, the chroot dir
/// is applied after pivot root.
/// | pub fn pivot_root<A: AsRef<Path>, B:AsRef<Path>>(&mut self,
new_root: A, put_old: B, unmount: bool)
-> &mut Command
{
let new_root = new_root.as_ref();
let put_old = put_old.as_ref();
if !new_root.is_absolute() {
panic!("New root must be absolute");
}
if !put_old.is_absolute() {
panic!("The `put_old` dir must be absolute");
}
let mut old_cmp = put_old.components();
for (n, o) in new_root.components().zip(old_cmp.by_ref()) {
if n != o {
panic!("The new_root is not a prefix of put old");
}
}
self.pivot_root = Some((new_root.to_path_buf(), put_old.to_path_buf(),
unmount));
self
}
/// Unshare given namespaces
///
/// Note: each namespace has some consequences for how the new process will
/// work; some of them are described in the `Namespace` type documentation.
pub fn unshare<'x>(&mut self, iter: impl IntoIterator<Item=&'x Namespace>)
-> &mut Command
{
for ns in iter {
self.config.namespaces |= to_clone_flag(*ns);
}
self
}
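// Usage sketch: unshare the pid and mount namespaces together. Note that a
// new pid namespace usually also needs /proc remounted inside the child,
// which is out of scope here:
//
//     cmd.unshare(&[Namespace::Pid, Namespace::Mount]);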
/// Reassociate child process with a namespace specified by a file
/// descriptor
///
/// `file` argument is an open file referring to a namespace
///
/// 'ns' is a namespace type
///
/// See `man 2 setns` for further details
///
/// Note: using `unshare` and `setns` for the same namespace is meaningless.
pub fn set_namespace<F: AsRawFd>(&mut self, file: &F, ns: Namespace)
-> io::Result<&mut Command>
{
let fd = dup_file_cloexec(file)?;
self.config.setns_namespaces.insert(ns, fd);
Ok(self)
}
/// Sets user id and group id mappings for new process
///
/// This automatically enables `User` namespace. You should also set `uid`
/// and `gid` with respective methods for the new process.
///
/// Note there are basically two ways to enable id maps:
///
/// 1. Write them directly
/// 2. Invoke a `newuidmap`, `newgidmap` commands
///
/// First option works either if current process is root or if resulting
/// map only contains current user in the mapping.
///
/// The library will not try to guess the behavior. By default it will
/// write directly. You need to call the `set_id_map_commands` when you
/// want non-default behavior.
///
/// See `man 7 user_namespaces` for more info
pub fn set_id_maps(&mut self, uid_map: Vec<UidMap>, gid_map: Vec<GidMap>)
-> &mut Command
{
self.unshare(&[Namespace::User]);
self.config.id_maps = Some((uid_map, gid_map));
self
}
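// Usage sketch (the uid/gid values and the exact `UidMap`/`GidMap` field
// names are assumptions for illustration): map the current user to root
// inside the new user namespace, then switch to it:
//
//     cmd.set_id_maps(
//         vec![UidMap { inside_uid: 0, outside_uid: 1000, count: 1 }],
//         vec![GidMap { inside_gid: 0, outside_gid: 1000, count: 1 }],
//     );
//     cmd.uid(0);
//     cmd.gid(0);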
/// Set path to command-line utilities for writing uid/gid maps
///
/// The utilities provided must obey the same interface as `newuidmap` and
/// `newgidmap` from the `shadow` (or sometimes `uidmap`) package. To get it
/// working you usually need to setup `/etc/subuid` and `/etc/subgid`
/// files.
///
/// See `man 1 newuidmap`, `man 1 newgidmap` for details
///
/// This method is a no-op unless `set_id_maps` is called.
pub fn set_id_map_commands<A: AsRef<Path>, B: AsRef<Path>>(&mut self,
newuidmap: A, newgidmap: B)
-> &mut Command
{
self.id_map_commands = Some((
newuidmap.as_ref().to_path_buf(),
newgidmap.as_ref().to_path_buf()));
self
}
/// Keep the signal mask intact after executing the child; ignored signals
/// are kept as well
///
/// By default signal mask is empty and all signals are reset to the
/// `SIG_DFL` value right before `execve()` syscall.
///
/// This is only useful if the started process is aware of the issue and sets
/// sigmasks to some reasonable value. When used wisely it may avoid some
/// race conditions when a signal is sent after the child is cloned but before
/// the child has been able to establish its state.
pub fn keep_sigmask(&mut self) -> &mut Command {
self.config.restore_sigmask = false;
self
}
/// Set the argument zero for the process
///
/// By default argument zero is the same as the path to the program to run. You
/// may set it to a short name of the command or to something else to
/// pretend there is a symlink to a program (for example to run `gzip` as
/// `gunzip`).
pub fn arg0<S: AsRef<OsStr>>(&mut self, arg: S) -> &mut Command {
self.args[0] = arg.to_cstring();
self
}
/// Makes child process a group leader
///
/// If the child process is being launched as a foreground job,
/// the child process group needs to be put into the foreground on
/// the controlling terminal using `tcsetpgrp`. To request status
/// information from stopped child process you should call `waitpid` with
/// `WUNTRACED` flag. And then check status with `WIFSTOPPED` macro.
/// After giving child process group access to the controlling terminal
/// you should send the SIGCONT signal to the child process group.
pub fn make_group_leader(&mut self, make_group_leader: bool) -> &mut Command {
self.config.make_group_leader = make_group_leader;
self
}
/// Inserts a magic environment variable that will contain the pid of the spawned
/// process
///
/// This is usually needed to avoid accidental propagation of the
/// environment variables targeted only at this specific process.
///
/// # Example
///
/// This is how you can encode [systemd activation] protocol:
///
/// ```rust,ignore
/// cmd.env_var_with_pid("LISTEN_PID");
/// cmd.env("LISTEN_FDS", "1");
/// ```
///
/// [systemd activation]: https://www.freedesktop.org/software/systemd/man/sd_listen_fds.html
pub fn env_var_with_pid<K>(&mut self, key: K) -> &mut Command
where K: AsRef<OsStr>,
{
self.init_env_map();
self.environ.as_mut().unwrap().remove(key.as_ref());
self.pid_env_vars.insert(key.as_ref().to_os_string());
self
}
/// Drop all capabilities, but keep only the ones set by this setter
///
/// This method sets three or four sets of capabilities:
/// * Permitted
/// * Inherited
/// * Effective
/// * Ambient (if supported)
///
/// This works both when the uid changes (from 0 to another uid) and when it
/// isn't changed, but requires process to have all capabilities
/// granted by this method.
///
/// This method replaces whole capability mask on each invocation
pub fn keep_caps<'x>(&mut self,
caps: impl IntoIterator<Item=&'x Capability>)
{
let mut buf = [0u32; 2];
for item in caps {
let item = *item as u32;
buf[(item >> 5) as usize] |= 1 << (item & 31);
}
self.keep_caps = Some(buf);
}
} | /// # Panics
///
/// Panics if either path is not absolute or new_root is not a prefix of
/// put_old. | random_line_split |
linux.rs | use std::ffi::OsStr;
use std::io;
use std::os::unix::io::AsRawFd;
use std::path::Path;
use nix::sys::signal::{Signal};
use crate::ffi_util::ToCString;
use crate::{Command, Namespace};
use crate::idmap::{UidMap, GidMap};
use crate::stdio::dup_file_cloexec;
use crate::namespace::to_clone_flag;
use crate::caps::Capability;
impl Command {
/// Allow the child process to daemonize. By default we run the equivalent of
/// `set_parent_death_signal(SIGKILL)`. See the `set_parent_death_signal`
/// for better explanation.
pub fn allow_daemonize(&mut self) -> &mut Command {
self.config.death_sig = None;
self
}
/// Set a signal that is sent to the process when its parent dies.
/// This is set to `SIGKILL` by default, and you should keep it that way
/// unless you know what you are doing.
///
/// Particularly you should consider the following choices:
///
/// 1. Instead of setting ``PDEATHSIG`` to some other signal, send signal
/// yourself and wait until child gracefully finishes.
///
/// 2. Instead of daemonizing use ``systemd``/``upstart``/whatever system
/// init script to run your service
///
/// Another issue with this option is that it works only with the immediate
/// child. To better control all descendant processes you may need the
/// following:
///
/// 1. The `prctl(PR_SET_CHILD_SUBREAPER..)` in parent which allows to
/// "catch" descendant processes.
///
/// 2. The pid namespaces
///
/// The former is out of scope of this library. The latter works by
/// ``cmd.unshare(&[Namespace::Pid])``, but you may need to set up mount points
/// and other important things (which are out of scope too).
///
/// To reset this behavior use ``allow_daemonize()``.
///
pub fn set_parent_death_signal(&mut self, sig: Signal) -> &mut Command {
self.config.death_sig = Some(sig);
self
}
/// Set the chroot dir. Only absolute paths are supported
///
/// This method has a non-standard security feature: even if current_dir
/// is unspecified we set it to the directory inside the new root dir.
/// see more details in the description of `Command::current_dir`.
///
/// Note that if both chroot dir and pivot_root are specified, the chroot dir
/// is applied after pivot root. If the chroot dir is relative, it's relative
/// to either the suffix of the current directory with the pivot dir stripped
/// off, or the pivot dir itself (if the old workdir is not prefixed by the
/// pivot dir)
///
/// # Panics
///
/// If directory is not absolute
pub fn | <P: AsRef<Path>>(&mut self, dir: P) -> &mut Command
{
let dir = dir.as_ref();
if !dir.is_absolute() {
panic!("Chroot dir must be absolute");
}
self.chroot_dir = Some(dir.to_path_buf());
self
}
/// Moves the root of the file system to the directory `put_old` and
/// makes `new_root` the new root file system. It also optionally
/// unmounts the `new_root` mount point after moving the root (but it must
/// exist anyway).
///
/// The documentation says that `put_old` must be underneath the
/// `new_root`. Currently we have a restriction that both must be absolute
/// and `new_root` must be a prefix of `put_old`, but we may lift it later.
///
/// **Warning** if you don't unshare the mount namespace you will get
/// moved filesystem root for *all processes running in that namespace*
/// including parent (currently running) process itself. If you don't
/// run equivalent to ``mount --make-private`` for the old root filesystem
/// and set ``unmount`` to true, you may get unmounted filesystem for
/// running processes too.
///
/// See `man 2 pivot_root` for further details
///
/// Note that if both chroot dir and pivot_root are specified, the chroot dir
/// is applied after pivot root.
///
/// # Panics
///
/// Panics if either path is not absolute or new_root is not a prefix of
/// put_old.
pub fn pivot_root<A: AsRef<Path>, B:AsRef<Path>>(&mut self,
new_root: A, put_old: B, unmount: bool)
-> &mut Command
{
let new_root = new_root.as_ref();
let put_old = put_old.as_ref();
if !new_root.is_absolute() {
panic!("New root must be absolute");
}
if !put_old.is_absolute() {
panic!("The `put_old` dir must be absolute");
}
let mut old_cmp = put_old.components();
for (n, o) in new_root.components().zip(old_cmp.by_ref()) {
if n != o {
panic!("The new_root is not a prefix of put old");
}
}
self.pivot_root = Some((new_root.to_path_buf(), put_old.to_path_buf(),
unmount));
self
}
/// Unshare given namespaces
///
/// Note: each namespace has some consequences for how the new process will
/// work; some of them are described in the `Namespace` type documentation.
pub fn unshare<'x>(&mut self, iter: impl IntoIterator<Item=&'x Namespace>)
-> &mut Command
{
for ns in iter {
self.config.namespaces |= to_clone_flag(*ns);
}
self
}
/// Reassociate child process with a namespace specified by a file
/// descriptor
///
/// `file` argument is an open file referring to a namespace
///
/// 'ns' is a namespace type
///
/// See `man 2 setns` for further details
///
/// Note: using `unshare` and `setns` for the same namespace is meaningless.
pub fn set_namespace<F: AsRawFd>(&mut self, file: &F, ns: Namespace)
-> io::Result<&mut Command>
{
let fd = dup_file_cloexec(file)?;
self.config.setns_namespaces.insert(ns, fd);
Ok(self)
}
/// Sets user id and group id mappings for new process
///
/// This automatically enables `User` namespace. You should also set `uid`
/// and `gid` with respective methods for the new process.
///
/// Note there are basically two ways to enable id maps:
///
/// 1. Write them directly
/// 2. Invoke a `newuidmap`, `newgidmap` commands
///
/// First option works either if current process is root or if resulting
/// map only contains current user in the mapping.
///
/// The library will not try to guess the behavior. By default it will
/// write directly. You need to call the `set_id_map_commands` when you
/// want non-default behavior.
///
/// See `man 7 user_namespaces` for more info
pub fn set_id_maps(&mut self, uid_map: Vec<UidMap>, gid_map: Vec<GidMap>)
-> &mut Command
{
self.unshare(&[Namespace::User]);
self.config.id_maps = Some((uid_map, gid_map));
self
}
/// Set path to command-line utilities for writing uid/gid maps
///
/// The utilities provided must obey the same interface as `newuidmap` and
/// `newgidmap` from the `shadow` (or sometimes `uidmap`) package. To get it
/// working you usually need to setup `/etc/subuid` and `/etc/subgid`
/// files.
///
/// See `man 1 newuidmap`, `man 1 newgidmap` for details
///
/// This method is a no-op unless `set_id_maps` is called.
pub fn set_id_map_commands<A: AsRef<Path>, B: AsRef<Path>>(&mut self,
newuidmap: A, newgidmap: B)
-> &mut Command
{
self.id_map_commands = Some((
newuidmap.as_ref().to_path_buf(),
newgidmap.as_ref().to_path_buf()));
self
}
/// Keep the signal mask intact after executing the child; ignored signals
/// are kept as well
///
/// By default signal mask is empty and all signals are reset to the
/// `SIG_DFL` value right before `execve()` syscall.
///
/// This is only useful if the started process is aware of the issue and sets
/// sigmasks to some reasonable value. When used wisely it may avoid some
/// race conditions when a signal is sent after the child is cloned but before
/// the child has been able to establish its state.
pub fn keep_sigmask(&mut self) -> &mut Command {
self.config.restore_sigmask = false;
self
}
/// Set the argument zero for the process
///
/// By default argument zero is the same as the path to the program to run. You
/// may set it to a short name of the command or to something else to
/// pretend there is a symlink to a program (for example to run `gzip` as
/// `gunzip`).
pub fn arg0<S: AsRef<OsStr>>(&mut self, arg: S) -> &mut Command {
self.args[0] = arg.to_cstring();
self
}
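// Usage sketch: run `gzip` as if it had been invoked through a `gunzip`
// symlink (paths illustrative):
//
//     let mut cmd = Command::new("/bin/gzip");
//     cmd.arg0("gunzip");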
/// Makes child process a group leader
///
/// If the child process is being launched as a foreground job,
/// the child process group needs to be put into the foreground on
/// the controlling terminal using `tcsetpgrp`. To request status
/// information from stopped child process you should call `waitpid` with
/// `WUNTRACED` flag. And then check status with `WIFSTOPPED` macro.
/// After giving child process group access to the controlling terminal
/// you should send the SIGCONT signal to the child process group.
pub fn make_group_leader(&mut self, make_group_leader: bool) -> &mut Command {
self.config.make_group_leader = make_group_leader;
self
}
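// A rough sketch of the sequence described above; the fd/signal plumbing is
// illustrative pseudo-code, not part of the original file:
//
//     cmd.make_group_leader(true);
//     let child = cmd.spawn()?;
//     // tcsetpgrp(stdin_fd, child_pid); // put the group in the foreground
//     // kill(child_pid, SIGCONT);       // resume it if it was stopped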
/// Inserts a magic environment variable that will contain the pid of the spawned
/// process
///
/// This is usually needed to avoid accidental propagation of the
/// environment variables targeted only at this specific process.
///
/// # Example
///
/// This is how you can encode [systemd activation] protocol:
///
/// ```rust,ignore
/// cmd.env_var_with_pid("LISTEN_PID");
/// cmd.env("LISTEN_FDS", "1");
/// ```
///
/// [systemd activation]: https://www.freedesktop.org/software/systemd/man/sd_listen_fds.html
pub fn env_var_with_pid<K>(&mut self, key: K) -> &mut Command
where K: AsRef<OsStr>,
{
self.init_env_map();
self.environ.as_mut().unwrap().remove(key.as_ref());
self.pid_env_vars.insert(key.as_ref().to_os_string());
self
}
/// Drop all capabilities, but keep only the ones set by this setter
///
/// This method sets three or four sets of capabilities:
/// * Permitted
/// * Inherited
/// * Effective
/// * Ambient (if supported)
///
/// This works both when the uid changes (from 0 to another uid) and when it
/// isn't changed, but requires process to have all capabilities
/// granted by this method.
///
/// This method replaces whole capability mask on each invocation
pub fn keep_caps<'x>(&mut self,
caps: impl IntoIterator<Item=&'x Capability>)
{
let mut buf = [0u32; 2];
for item in caps {
let item = *item as u32;
buf[(item >> 5) as usize] |= 1 << (item & 31);
}
self.keep_caps = Some(buf);
}
}
| chroot_dir | identifier_name |
linux.rs | use std::ffi::OsStr;
use std::io;
use std::os::unix::io::AsRawFd;
use std::path::Path;
use nix::sys::signal::{Signal};
use crate::ffi_util::ToCString;
use crate::{Command, Namespace};
use crate::idmap::{UidMap, GidMap};
use crate::stdio::dup_file_cloexec;
use crate::namespace::to_clone_flag;
use crate::caps::Capability;
impl Command {
/// Allow the child process to daemonize. By default we run the equivalent of
/// `set_parent_death_signal(SIGKILL)`. See the `set_parent_death_signal`
/// for better explanation.
pub fn allow_daemonize(&mut self) -> &mut Command {
self.config.death_sig = None;
self
}
/// Set a signal that is sent to the process when its parent dies.
/// This is set to `SIGKILL` by default, and you should keep it that way
/// unless you know what you are doing.
///
/// Particularly you should consider the following choices:
///
/// 1. Instead of setting ``PDEATHSIG`` to some other signal, send signal
/// yourself and wait until child gracefully finishes.
///
/// 2. Instead of daemonizing use ``systemd``/``upstart``/whatever system
/// init script to run your service
///
/// Another issue with this option is that it works only with the immediate
/// child. To better control all descendant processes you may need the
/// following:
///
/// 1. The `prctl(PR_SET_CHILD_SUBREAPER..)` in parent which allows to
/// "catch" descendant processes.
///
/// 2. The pid namespaces
///
/// The former is out of scope of this library. The latter works by
/// ``cmd.unshare(&[Namespace::Pid])``, but you may need to set up mount points
/// and other important things (which are out of scope too).
///
/// To reset this behavior use ``allow_daemonize()``.
///
pub fn set_parent_death_signal(&mut self, sig: Signal) -> &mut Command {
self.config.death_sig = Some(sig);
self
}
/// Set the chroot dir. Only absolute paths are supported
///
/// This method has a non-standard security feature: even if current_dir
/// is unspecified we set it to the directory inside the new root dir.
/// see more details in the description of `Command::current_dir`.
///
/// Note that if both chroot dir and pivot_root are specified, the chroot dir
/// is applied after pivot root. If the chroot dir is relative, it's relative
/// to either the suffix of the current directory with the pivot dir stripped
/// off, or the pivot dir itself (if the old workdir is not prefixed by the
/// pivot dir)
///
/// # Panics
///
/// If directory is not absolute
pub fn chroot_dir<P: AsRef<Path>>(&mut self, dir: P) -> &mut Command
{
let dir = dir.as_ref();
if !dir.is_absolute() {
panic!("Chroot dir must be absolute");
}
self.chroot_dir = Some(dir.to_path_buf());
self
}
/// Moves the root of the file system to the directory `put_old` and
/// makes `new_root` the new root file system. It also optionally
/// unmounts the `new_root` mount point after moving the root (but it must
/// exist anyway).
///
/// The documentation says that `put_old` must be underneath the
/// `new_root`. Currently we have a restriction that both must be absolute
/// and `new_root` must be a prefix of `put_old`, but we may lift it later.
///
/// **Warning** if you don't unshare the mount namespace you will get
/// moved filesystem root for *all processes running in that namespace*
/// including parent (currently running) process itself. If you don't
/// run equivalent to ``mount --make-private`` for the old root filesystem
/// and set ``unmount`` to true, you may get unmounted filesystem for
/// running processes too.
///
/// See `man 2 pivot_root` for further details
///
/// Note that if both chroot dir and pivot_root are specified, the chroot dir
/// is applied after pivot root.
///
/// # Panics
///
/// Panics if either path is not absolute or new_root is not a prefix of
/// put_old.
pub fn pivot_root<A: AsRef<Path>, B:AsRef<Path>>(&mut self,
new_root: A, put_old: B, unmount: bool)
-> &mut Command
{
let new_root = new_root.as_ref();
let put_old = put_old.as_ref();
if !new_root.is_absolute() {
panic!("New root must be absolute");
}
if !put_old.is_absolute() |
let mut old_cmp = put_old.components();
for (n, o) in new_root.components().zip(old_cmp.by_ref()) {
if n != o {
panic!("The new_root is not a prefix of put old");
}
}
self.pivot_root = Some((new_root.to_path_buf(), put_old.to_path_buf(),
unmount));
self
}
/// Unshare given namespaces
///
/// Note: each namespace has some consequences for how the new process will
/// work; some of them are described in the `Namespace` type documentation.
pub fn unshare<'x>(&mut self, iter: impl IntoIterator<Item=&'x Namespace>)
-> &mut Command
{
for ns in iter {
self.config.namespaces |= to_clone_flag(*ns);
}
self
}
/// Reassociate child process with a namespace specified by a file
/// descriptor
///
/// `file` argument is an open file referring to a namespace
///
/// 'ns' is a namespace type
///
/// See `man 2 setns` for further details
///
/// Note: using `unshare` and `setns` for the same namespace is meaningless.
pub fn set_namespace<F: AsRawFd>(&mut self, file: &F, ns: Namespace)
-> io::Result<&mut Command>
{
let fd = dup_file_cloexec(file)?;
self.config.setns_namespaces.insert(ns, fd);
Ok(self)
}
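// Usage sketch: join the network namespace of an existing process. The
// /proc path and pid are illustrative assumptions:
//
//     let file = std::fs::File::open("/proc/1234/ns/net")?;
//     cmd.set_namespace(&file, Namespace::Net)?;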
/// Sets user id and group id mappings for new process
///
/// This automatically enables `User` namespace. You should also set `uid`
/// and `gid` with respective methods for the new process.
///
/// Note there are basically two ways to enable id maps:
///
/// 1. Write them directly
/// 2. Invoke a `newuidmap`, `newgidmap` commands
///
/// First option works either if current process is root or if resulting
/// map only contains current user in the mapping.
///
/// The library will not try to guess the behavior. By default it will
/// write directly. You need to call the `set_id_map_commands` when you
/// want non-default behavior.
///
/// See `man 7 user_namespaces` for more info
pub fn set_id_maps(&mut self, uid_map: Vec<UidMap>, gid_map: Vec<GidMap>)
-> &mut Command
{
self.unshare(&[Namespace::User]);
self.config.id_maps = Some((uid_map, gid_map));
self
}
/// Set path to command-line utilities for writing uid/gid maps
///
/// The utilities provided must obey the same interface as `newuidmap` and
/// `newgidmap` from the `shadow` (or sometimes `uidmap`) package. To get it
/// working you usually need to setup `/etc/subuid` and `/etc/subgid`
/// files.
///
/// See `man 1 newuidmap`, `man 1 newgidmap` for details
///
/// This method is a no-op unless `set_id_maps` is called.
pub fn set_id_map_commands<A: AsRef<Path>, B: AsRef<Path>>(&mut self,
newuidmap: A, newgidmap: B)
-> &mut Command
{
self.id_map_commands = Some((
newuidmap.as_ref().to_path_buf(),
newgidmap.as_ref().to_path_buf()));
self
}
/// Keep the signal mask intact after executing the child; ignored signals
/// are kept as well
///
/// By default signal mask is empty and all signals are reset to the
/// `SIG_DFL` value right before `execve()` syscall.
///
/// This is only useful if the started process is aware of the issue and sets
/// sigmasks to some reasonable value. When used wisely it may avoid some
/// race conditions when a signal is sent after the child is cloned but before
/// the child has been able to establish its state.
pub fn keep_sigmask(&mut self) -> &mut Command {
self.config.restore_sigmask = false;
self
}
/// Set the argument zero for the process
///
/// By default argument zero is the same as the path to the program to run. You
/// may set it to a short name of the command or to something else to
/// pretend there is a symlink to a program (for example to run `gzip` as
/// `gunzip`).
pub fn arg0<S: AsRef<OsStr>>(&mut self, arg: S) -> &mut Command {
self.args[0] = arg.to_cstring();
self
}
/// Makes child process a group leader
///
/// If the child process is being launched as a foreground job,
/// the child process group needs to be put into the foreground on
/// the controlling terminal using `tcsetpgrp`. To request status
/// information from stopped child process you should call `waitpid` with
/// `WUNTRACED` flag. And then check status with `WIFSTOPPED` macro.
/// After giving child process group access to the controlling terminal
/// you should send the SIGCONT signal to the child process group.
pub fn make_group_leader(&mut self, make_group_leader: bool) -> &mut Command {
self.config.make_group_leader = make_group_leader;
self
}
/// Inserts a magic environment variable that will contain the pid of the spawned
/// process
///
/// This is usually needed to avoid accidental propagation of the
/// environment variables targeted only at this specific process.
///
/// # Example
///
/// This is how you can encode [systemd activation] protocol:
///
/// ```rust,ignore
/// cmd.env_var_with_pid("LISTEN_PID");
/// cmd.env("LISTEN_FDS", "1");
/// ```
///
/// [systemd activation]: https://www.freedesktop.org/software/systemd/man/sd_listen_fds.html
pub fn env_var_with_pid<K>(&mut self, key: K) -> &mut Command
where K: AsRef<OsStr>,
{
self.init_env_map();
self.environ.as_mut().unwrap().remove(key.as_ref());
self.pid_env_vars.insert(key.as_ref().to_os_string());
self
}
/// Drop all capabilities, but keep only the ones set by this setter
///
/// This method sets three or four sets of capabilities:
/// * Permitted
/// * Inherited
/// * Effective
/// * Ambient (if supported)
///
/// This works both when the uid changes (from 0 to another uid) and when it
/// isn't changed, but requires process to have all capabilities
/// granted by this method.
///
/// This method replaces whole capability mask on each invocation
pub fn keep_caps<'x>(&mut self,
caps: impl IntoIterator<Item=&'x Capability>)
{
let mut buf = [0u32; 2];
for item in caps {
let item = *item as u32;
buf[(item >> 5) as usize] |= 1 << (item & 31);
}
self.keep_caps = Some(buf);
}
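// Usage sketch (the variant name is an assumption based on the imported
// `Capability` enum): keep only the capability needed to bind privileged
// ports:
//
//     cmd.keep_caps(&[Capability::CAP_NET_BIND_SERVICE]);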
}
| {
panic!("The `put_old` dir must be absolute");
} | conditional_block |
lib.rs | // Copyright 2018-2022 argmin developers
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.
//! argmin-math provides mathematics-related abstractions needed in argmin. It supports
//! implementations of these abstractions for basic `Vec`s and for `ndarray` and `nalgebra`.
//! The traits can of course also be implemented for your own types to make them compatible with
//! argmin.
//!
//! For an introduction on how to use argmin, please also have a look at the
//! [book](https://www.argmin-rs.org/book/).
//!
//! # Usage
//!
//! Add the following line to your dependencies list:
//!
//! ```toml
//! [dependencies]
#![doc = concat!("argmin-math = \"", env!("CARGO_PKG_VERSION"), "\"")]
//! ```
//!
//! This will activate the `primitives` and `vec` features. For other backends see the section
//! below.
//!
//! ## Features
//!
//! Support for the various backends can be switched on via features. Please read this section
//! carefully to the end before choosing a backend.
//!
//! ### Default features
//!
//! | Feature | Default | Comment |
//! |------------------------|---------|-------------------------------------------------------|
//! | `primitives` | yes | basic integer and floating point types |
//! | `vec` | yes | `Vec`s (basic functionality) |
//!
//! ### `ndarray`
//!
//! | Feature | Default | Comment |
//! |---------------------------------|---------|--------------------------------------------------------------------|
//! | `ndarray_latest` | no | latest supported version |
//! | `ndarray_latest-nolinalg` | no | latest supported version without `ndarray-linalg` |
//! | `ndarray_latest-serde` | no | latest supported version + serde support |
//! | `ndarray_latest-nolinalg-serde` | no | latest supported version without `ndarray-linalg` + serde support |
//! | `ndarray_v0_15` | no | version 0.15 with ndarray-linalg 0.16 |
//! | `ndarray_v0_15-nolinalg` | no | version 0.15 without `ndarray-linalg` |
//! | `ndarray_v0_15-serde` | no | version 0.15 with ndarray-linalg 0.16 + serde support |
//! | `ndarray_v0_15-nolinalg-serde` | no | version 0.15 without `ndarray-linalg` + serde support |
//! | `ndarray_v0_14` | no | version 0.14 with ndarray-linalg 0.13 |
//! | `ndarray_v0_14-nolinalg` | no | version 0.14 without `ndarray-linalg` |
//! | `ndarray_v0_14-serde` | no | version 0.14 with ndarray-linalg 0.13 + serde support |
//! | `ndarray_v0_14-nolinalg-serde` | no | version 0.14 without `ndarray-linalg` + serde support |
//! | `ndarray_v0_13` | no | version 0.13 with ndarray-linalg 0.12 |
//! | `ndarray_v0_13-nolinalg` | no | version 0.13 without `ndarray-linalg` |
//! | `ndarray_v0_13-serde` | no | version 0.13 with ndarray-linalg 0.12 + serde support |
//! | `ndarray_v0_13-nolinalg-serde` | no | version 0.13 without `ndarray-linalg` + serde support |
//!
//! Note that the `*-nolinalg*` features do NOT pull in `ndarray-linalg` as a dependency. This
//! avoids linking against a BLAS library. This will however disable the implementation of
//! `ArgminInv`, meaning that any solver which requires the matrix inverse will not work with the
//! `ndarray` backend. It is recommended to use the `*-nolinalg*` options if the matrix inverse is
//! not needed in order to keep the compilation times low and avoid problems when linking against a
//! BLAS library.
//!
//! Using the `ndarray_*` features with `ndarray-linalg` support may require to explicitly choose
//! the `ndarray-linalg` BLAS backend in your `Cargo.toml` (see the [`ndarray-linalg` documentation
//! for details](https://github.com/rust-ndarray/ndarray-linalg)):
//!
//! ```toml
//! ndarray-linalg = { version = "<appropriate_version>", features = ["<linalg_backend>"] }
//! ```
//!
//! ### `nalgebra`
//!
//! | Feature | Default | Comment |
//! |------------------------|---------|------------------------------------------|
//! | `nalgebra_latest` | no | latest supported version |
//! | `nalgebra_latest-serde`| no | latest supported version + serde support |
//! | `nalgebra_v0_32` | no | version 0.32 |
//! | `nalgebra_v0_32-serde` | no | version 0.32 + serde support |
//! | `nalgebra_v0_31` | no | version 0.31 |
//! | `nalgebra_v0_31-serde` | no | version 0.31 + serde support |
//! | `nalgebra_v0_30` | no | version 0.30 |
//! | `nalgebra_v0_30-serde` | no | version 0.30 + serde support |
//! | `nalgebra_v0_29` | no | version 0.29 |
//! | `nalgebra_v0_29-serde` | no | version 0.29 + serde support |
//!
//!
//! ## Choosing a backend
//!
//! It is not possible to activate two versions of the same backend.
//!
//! The features labeled `*latest*` are an alias for the most recent supported version of the
//! respective backend. It is however recommended to explicitly specify the desired version instead
//! of using any of the `*latest*` features (see section about semantic versioning below).
//!
//! Note that `argmin` by default compiles with `serde` support. Therefore, unless `serde` is
//! deliberately turned off in `argmin`, it is necessary to activate the `serde` support in
//! `argmin-math` as well.
//!
//! The default features `primitives` and `vec` can be turned off in order to only compile the
//! trait definitions. If another backend is chosen, `primitives` will automatically be turned on
//! again.
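//!
//! For example, to compile only the trait definitions (illustrative snippet;
//! substitute the `argmin-math` version you actually depend on):
//!
//! ```toml
//! [dependencies]
//! argmin-math = { version = "<version>", default-features = false }
//! ```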
//!
//! ### Example
//!
//! Activate support for the latest supported `ndarray` version:
//!
//! ```toml
//! [dependencies]
#![doc = concat!("argmin-math = { version = \"", env!("CARGO_PKG_VERSION"), "\", features = [\"ndarray_latest-serde\"] }")]
//! ```
//!
//! # Semantic versioning
//!
//! This crate follows semantic versioning. Adding a new backend or a new version of a backend is
//! not considered a breaking change. However, your code may still break if you use any of the
//! features containing `*latest*`. It is therefore recommended to specify the actual version of the
//! backend you are using.
//!
//! # Development
//!
//! For development and running the tests a backend for `ndarray-linalg` must be chosen. Normally
//! one would add those as dev dependencies (the features would then be unified with the regular
//! dependencies). However, linking somehow fails when the non-dev `ndarray-linalg` dependency is
//! missing (which is the case for the `*-nolinalg*` features of the ndarray backend). To fix that,
//! the `_dev_linalg_*` features were introduced. When testing and developing with one of the
//! ndarray features with linalg support on, the appropriate `_dev_linalg_*` feature must be turned
//! on as well. Note that the version number in `_dev_linalg_*` is the `ndarray-linalg`
//! version paired with the respective `ndarray` version (see the table below). For
//! instance, for ndarray 0.15, one would use the `_dev_linalg_0_16` feature.
//!
//! | Development Feature | Comment |
//! |-----------------------|----------------------------------------------|
//! | `_dev_linalg_latest` | latest `ndarray-linalg` for latest `ndarray` |
//! | `_dev_linalg_0_16` | `ndarray-linalg` v0.16 for `ndarray` v0.15 |
//! | `_dev_linalg_0_13` | `ndarray-linalg` v0.13 for `ndarray` v0.14 |
//! | `_dev_linalg_0_12` | `ndarray-linalg` v0.12 for `ndarray` v0.13 |
//!
//! # Contributing
//!
//! You found a bug? Your favorite backend is not supported? Feel free to open an issue or ideally
//! submit a PR.
//!
//! # License
//!
//! Licensed under either of
//!
//! * Apache License, Version 2.0,
//! ([LICENSE-APACHE](https://github.com/argmin-rs/argmin/blob/main/LICENSE-APACHE) or
//! <http://www.apache.org/licenses/LICENSE-2.0>)
//! * MIT License ([LICENSE-MIT](https://github.com/argmin-rs/argmin/blob/main/LICENSE-MIT) or
//! <http://opensource.org/licenses/MIT>)
//!
//! at your option.
//!
//! ## Contribution
//!
//! Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion
//! in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above,
//! without any additional terms or conditions.
#![warn(missing_docs)]
// Explicitly disallow EQ comparison of floats. (This clippy lint is denied by default; however,
// this is just to make sure that it will always stay this way.)
#![deny(clippy::float_cmp)]
cfg_if::cfg_if! {
if #[cfg(feature = "nalgebra_0_32")] {
extern crate nalgebra_0_32 as nalgebra;
} else if #[cfg(feature = "nalgebra_0_31")] {
extern crate nalgebra_0_31 as nalgebra;
} else if #[cfg(feature = "nalgebra_0_30")] {
extern crate nalgebra_0_30 as nalgebra;
} else if #[cfg(feature = "nalgebra_0_29")] {
extern crate nalgebra_0_29 as nalgebra;
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "ndarray_0_15")] {
extern crate ndarray_0_15 as ndarray;
} else if #[cfg(feature = "ndarray_0_14")] {
extern crate ndarray_0_14 as ndarray;
} else if #[cfg(feature = "ndarray_0_13")] {
extern crate ndarray_0_13 as ndarray;
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "ndarray-linalg_0_16")] {
extern crate ndarray_linalg_0_16 as ndarray_linalg;
} else if #[cfg(feature = "ndarray-linalg_0_13")] {
extern crate ndarray_linalg_0_13 as ndarray_linalg;
} else if #[cfg(feature = "ndarray-linalg_0_12")] {
extern crate ndarray_linalg_0_12 as ndarray_linalg;
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "num-complex_0_2")] {
extern crate num_complex_0_2 as num_complex;
} else if #[cfg(feature = "num-complex_0_3")] {
extern crate num_complex_0_3 as num_complex;
} else if #[cfg(feature = "num-complex_0_4")] {
extern crate num_complex_0_4 as num_complex;
}
}
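// The `extern crate <name>_x_y as <name>` renames above let the rest of the
// crate refer to whichever backend version was selected through a single,
// version-agnostic name (e.g. `ndarray::Array1` works for any activated
// `ndarray_*` feature).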
#[cfg(feature = "primitives")]
mod primitives;
#[cfg(feature = "primitives")]
pub use crate::primitives::*;
#[cfg(feature = "ndarray_all")]
mod ndarray_m;
#[cfg(feature = "ndarray_all")]
pub use crate::ndarray_m::*;
#[cfg(feature = "nalgebra_all")]
mod nalgebra_m;
#[cfg(feature = "nalgebra_all")]
pub use crate::nalgebra_m::*;
#[cfg(feature = "vec")]
mod vec;
#[cfg(feature = "vec")]
pub use crate::vec::*;
use anyhow::Error;
/// Dot/scalar product of `T` and `self`
pub trait ArgminDot<T, U> {
/// Dot/scalar product of `T` and `self`
fn dot(&self, other: &T) -> U;
}
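// A minimal sketch of implementing this trait for a user-defined type
// (illustrative only; the shipped `Vec` implementations live behind the `vec`
// feature and are more general than this):
//
//     struct MyVec(Vec<f64>);
//
//     impl ArgminDot<MyVec, f64> for MyVec {
//         fn dot(&self, other: &MyVec) -> f64 {
//             self.0.iter().zip(other.0.iter()).map(|(a, b)| a * b).sum()
//         }
//     }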
/// Dot/scalar product of `T` and `self` weighted by W (p^TWv)
pub trait ArgminWeightedDot<T, U, V> {
/// Dot/scalar product of `T` and `self`
fn weighted_dot(&self, w: &V, vec: &T) -> U;
}
/// Return param vector of all zeros (for now, this is a hack. It should be done better)
pub trait ArgminZero {
/// Return zero(s)
fn zero() -> Self;
}
/// Return the conjugate
pub trait ArgminConj {
/// Return conjugate
#[must_use]
fn conj(&self) -> Self;
}
/// Zero for dynamically sized objects
pub trait ArgminZeroLike {
/// Return zero(s)
#[must_use]
fn zero_like(&self) -> Self;
}
/// Identity matrix
pub trait ArgminEye {
/// Identity matrix of size `n`
fn eye(n: usize) -> Self;
/// Identity matrix of same size as `self`
#[must_use]
fn eye_like(&self) -> Self;
}
/// Add a `T` to `self`
pub trait ArgminAdd<T, U> {
/// Add a `T` to `self` | pub trait ArgminSub<T, U> {
/// Subtract a `T` from `self`
fn sub(&self, other: &T) -> U;
}
/// (Pointwise) Multiply a `T` with `self`
pub trait ArgminMul<T, U> {
/// (Pointwise) Multiply a `T` with `self`
fn mul(&self, other: &T) -> U;
}
/// (Pointwise) Divide a `T` by `self`
pub trait ArgminDiv<T, U> {
/// (Pointwise) Divide a `T` by `self`
fn div(&self, other: &T) -> U;
}
/// Add a `T` scaled by an `U` to `self`
pub trait ArgminScaledAdd<T, U, V> {
/// Add a `T` scaled by an `U` to `self`
fn scaled_add(&self, factor: &U, vec: &T) -> V;
}
/// Subtract a `T` scaled by an `U` from `self`
pub trait ArgminScaledSub<T, U, V> {
/// Subtract a `T` scaled by an `U` from `self`
fn scaled_sub(&self, factor: &U, vec: &T) -> V;
}
/// Compute the l1-norm (`U`) of `self`
pub trait ArgminL1Norm<U> {
/// Compute the l1-norm (`U`) of `self`
fn l1_norm(&self) -> U;
}
/// Compute the l2-norm (`U`) of `self`
pub trait ArgminL2Norm<U> {
/// Compute the l2-norm (`U`) of `self`
fn l2_norm(&self) -> U;
}
// Sub-optimal: self is moved. ndarray however offers array views...
/// Return the transpose (`U`) of `self`
pub trait ArgminTranspose<U> {
/// Transpose
fn t(self) -> U;
}
/// Compute the inverse (`T`) of `self`
pub trait ArgminInv<T> {
/// Compute the inverse
fn inv(&self) -> Result<T, Error>;
}
/// Create a random number
pub trait ArgminRandom {
/// Get a random element between min and max,
fn rand_from_range(min: &Self, max: &Self) -> Self;
}
/// Minimum and Maximum of type `T`
pub trait ArgminMinMax {
/// Select piecewise minimum
fn min(x: &Self, y: &Self) -> Self;
/// Select piecewise maximum
fn max(x: &Self, y: &Self) -> Self;
}
/// Returns a number that represents the sign of `self`.
pub trait ArgminSignum {
/// Returns a number that represents the sign of `self`.
fn signum(self) -> Self;
} | fn add(&self, other: &T) -> U;
}
/// Subtract a `T` from `self` | random_line_split |
main.rs | #![allow(dead_code)]
extern crate doom_iow;
use doom_iow::*;
use std::f32;
use std::process::Command;
use minifb;
// returns the RGBA color corresponding to UV(hitx, hity) on tex_walls
fn wall_x_texcoord(hitx: f32, hity: f32, tex_walls: &Texture) -> i32 {
let x = hitx - f32::floor(hitx + 0.5);
let y = hity - f32::floor(hity + 0.5);
let x_texcoord: i32 = if f32::abs(y) > f32::abs(x) {
(y * tex_walls.size as f32) as i32
} else {
(x * tex_walls.size as f32) as i32
};
let x_texcoord = if x_texcoord < 0 {
x_texcoord + tex_walls.size as i32
} else {
x_texcoord
};
assert!(x_texcoord >= 0 && x_texcoord < tex_walls.size as i32);
x_texcoord
}
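// Worked example (illustrative numbers): with tex_walls.size == 64 and a hit at
// (hitx, hity) == (3.25, 7.99), x == 0.25 and y == -0.01; since |y| < |x| the
// ray struck a vertical face, so x_texcoord == (0.25 * 64.0) as i32 == 16.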
fn draw_sprite(
sprite: &Sprite,
depth_buffer: &[f32],
fb: &mut Framebuffer,
player: &Player,
tex_sprites: &Texture,
) -> Result<(), FrameError> {
//absolute direction from player to sprite (rads)
let mut sprite_dir = f32::atan2(sprite.y - player.y, sprite.x - player.x);
//remap to range [-pi, pi]
while sprite_dir > f32::consts::PI { sprite_dir -= 2.0 * f32::consts::PI; }
while sprite_dir < -f32::consts::PI { sprite_dir += 2.0 * f32::consts::PI; }
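    // e.g. a sprite_dir of 1.5 * PI is wrapped to -0.5 * PI after one iteration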
//distance from player to sprite
// let sprite_dist =
// f32::sqrt(f32::powi(player.x - sprite.x, 2) + f32::powi(player.y - sprite.y, 2));
// let sprite_screen_size = f32::min(2000.0, fb.h as f32 / sprite_dist) as i32;
let sprite_screen_size = f32::min(1000.0, fb.h as f32/sprite.player_dist) as i32;
let screen_size = fb.w as i32 / 2;
let h_offset: i32 = ((sprite_dir - player.get_a()) * (fb.w as f32/2.0)/(player.fov) +
(fb.w as f32/2.0)/2.0 - (sprite_screen_size as f32)/2.0) as i32;
let v_offset: i32 = (fb.h as i32/2 - sprite_screen_size/2) as i32;
// println!("h_offset = {} = ({} - {}) * {}/2/{} + {}/2/2 - {}/2", h_offset, sprite_dir, player.a, fb.w, player.fov, fb.w, sprite_screen_size);
for i in 0..sprite_screen_size {
if h_offset+i<0 || h_offset+i >= screen_size { continue; }
if depth_buffer[(h_offset+i) as usize] < sprite.player_dist { continue; }
for j in 0..sprite_screen_size {
if v_offset+j<0 || v_offset+j >= fb.h as i32 { continue; }
let color = tex_sprites.get(i as u32*tex_sprites.size/sprite_screen_size as u32,
j as u32*tex_sprites.size/sprite_screen_size as u32, sprite.tex_id)
.unwrap();
let (r, g, b, a) = utils::unpack_color(color);
if a > 128 {
let color = utils::pack_color_bgra(b, g, r, a);
fb.set_pixel(fb.w/2 + (h_offset+i) as usize, (v_offset+j) as usize, color)?;
}
}
}
Ok(())
}
fn map_show_sprite(sprite: &Sprite, fb: &mut Framebuffer, map: &Map) -> Result<(), FrameError> {
//(rect_w, rect_h) == size of one map tile
let rect_w = (fb.w / (map.w as usize * 2)) as f32;
let rect_h = (fb.h / map.h as usize) as f32;
fb.draw_rectangle(
(sprite.x * rect_w - 3.0) as usize,
(sprite.y * rect_h - 3.0) as usize,
6,
6,
// utils::pack_color_rgb(255, 0, 0),
utils::pack_color_bgra(0, 0, 255, 255),
)
}
fn render(
fb: &mut Framebuffer,
map: &Map,
player: &Player,
sprites: &mut Vec<Sprite>, // will change order of sprites according to distance from player
tex_walls: &Texture,
tex_monsters: &Texture,
) -> Result<(), FrameError> {
// fb.clear(utils::pack_color_rgb(249, 209, 152));
fb.clear(utils::pack_color_bgra(152, 209, 249, 255));
let rect_w = fb.w / (map.w as usize * 2); //size of one map cell on the screen
let rect_h = fb.h / map.h as usize;
// draw overhead map
for j in 0..map.h {
for i in 0..map.w {
if map.is_empty(i, j) {
continue; //skip empty spaces
}
let rect_x = i as usize * rect_w;
let rect_y = j as usize * rect_h;
let texid = map.get(i, j).expect("i, j not in map range");
fb.draw_rectangle(
rect_x,
rect_y,
rect_w,
rect_h,
tex_walls.get(0, 0, texid).expect("no texture at texid"),
)?;
}
}
let mut depth_buffer = vec![1e3; (fb.w/2) as usize];
for i in 0..fb.w / 2 {
//cast field of vision on map AND generate 3D view
let angle: f32 = player.get_a() - player.fov / 2. + player.fov * i as f32 / (fb.w / 2) as f32;
for t in 0..2000 {
//since Rust doesn't allow step by float, remap so step==1
let t = t as f32 / 100.; //then transform back to original range
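            // i.e. t sweeps 0.00, 0.01, ..., 19.99 along the ray direction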
let x = player.x + t * angle.cos();
let y = player.y + t * angle.sin();
// draw the visibility cone on the map
fb.set_pixel(
(x * rect_w as f32) as usize,
(y * rect_h as f32) as usize,
// utils::pack_color_rgb(160, 160, 160),
utils::pack_color_bgra(160, 160, 160, 255),
)
.expect("Could not set pixel");
            // if this map tile is empty, keep marching; a non-empty tile means the ray hit a wall
if map.is_empty(x as u32, y as u32) {
continue;
}
// hit a wall
let texid = map
.get(x as u32, y as u32)
.expect("Cannot index this map tile");
assert!(texid < tex_walls.count);
let distance = t * f32::cos(angle - player.get_a());
depth_buffer[i as usize] = distance;
let column_height = (fb.h as f32 / distance) as u32;
let x_texcoord = wall_x_texcoord(x, y, tex_walls);
let column = tex_walls
.get_scaled_column(texid, x_texcoord as u32, column_height)
.expect("Cannot retrieve scaled column");
let pix_x = i + fb.w / 2;
for j in 0..column_height {
let pix_y = j as usize + fb.h / 2 - column_height as usize / 2;
if pix_y < fb.h {
fb.set_pixel(pix_x, pix_y, column[j as usize])
.expect("Could not set pixel");
}
}
break;
}
}
// update distances from sprites to player
for sprite in sprites.iter_mut() {
sprite.player_dist = f32::sqrt(f32::powi(player.x - sprite.x, 2) + f32::powi(player.y - sprite.y, 2));
}
// sort sprites in reverse order of distance to player
sprites.sort_unstable_by(|lhs, rhs| rhs.player_dist.partial_cmp(&lhs.player_dist).unwrap());
// render sprites on map
    for sprite in sprites.iter() {
map_show_sprite(sprite, fb, &map)?;
draw_sprite(sprite, &depth_buffer, fb, &player, &tex_monsters)?;
}
Ok(())
}
fn main() -> std::io::Result<()> {
// TODO: unfuck colors
// TODO: create variable color schemes (RGBA vs BGRA)
// TODO: cleanup code
let mut fb = Framebuffer::new(1024, 512);
let mut window = minifb::Window::new("doom-iow", fb.w, fb.h, minifb::WindowOptions::default()).unwrap();
let mut player = Player::new (
3.456,
2.345,
1.523,
std::f32::consts::PI / 3.,
);
let map = match Map::init(16, 16) {
Ok(m) => m,
Err(_) => {
panic!("Could not open map");
}
};
let tex_walls = Texture::new("./walltex.png").expect("Could not open wall texture");
let tex_monsters = Texture::new("./monsters.png").expect("Could not open monster texture");
let mut sprites = vec![
Sprite::new(3.523, 3.812, 2, 0.0),
Sprite::new(1.834, 8.765, 0, 0.0),
Sprite::new(5.323, 5.365, 1, 0.0),
Sprite::new(4.123, 10.265, 1, 0.0),
];
make_gif(&mut player, &mut fb, &map, &mut sprites, &tex_walls, &tex_monsters).unwrap();
    while window.is_open() && !window.is_key_down(minifb::Key::Escape) {
render(&mut fb, &map, &player, &mut sprites, &tex_walls, &tex_monsters).unwrap();
player.set_a(player.get_a() - 0.1 * (2. * std::f32::consts::PI / 360.));
window.update_with_buffer(fb.img.as_slice()).unwrap();
}
Ok(())
}
fn make_gif(player: &mut Player, fb: &mut Framebuffer, map: &Map, sprites: &mut Vec<Sprite>,
tex_walls: &Texture, tex_monsters: &Texture) -> Result<(), std::io::Error> {
//clear the /out folder
Command::new("rm")
.arg("-rf")
.arg("out/")
.output()
.expect("failed to clear out directory");
//create new /out folder
Command::new("mkdir") |
for frame in 0..360 {
// for frame in 0..5 {
// for frame in 0..1 {
let output_path = "./out/";
let ss = format!("{}{:05}.ppm", output_path, frame);
// player.a -= 2. * std::f32::consts::PI / 360.;
player.set_a( player.get_a() - (2. * std::f32::consts::PI / 360.) );
render(fb, &map, &player, sprites, &tex_walls, &tex_monsters).expect("Could not render image");
utils::drop_ppm_image(ss.as_str(), &fb.img, fb.w as usize, fb.h as usize)
.expect("Could not drop image");
}
println!("Rendered all frames, collecting into gif...");
let output = Command::new("convert")
.args(&["-delay", "10", "-loop", "0", "*.ppm", "rendered.gif"])
.current_dir("out/")
.output()
.expect("Could not start process");
println!("Status: {}", output.status);
println!("Stdout: {}", String::from_utf8_lossy(&output.stdout));
println!("Stderr: {}", String::from_utf8_lossy(&output.stderr));
println!("done");
//open results in Finder
Command::new("open")
.arg("out/")
.output()
.expect("Could not open folder");
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn packs_ints() {
let r = 2;
let g = 4;
let b = 8;
let a = 16;
let packed = utils::pack_color_rgba(r, g, b, a);
assert_eq!(packed, 0b0001_0000_0000_1000_0000_0100_0000_0010);
}
#[test]
fn unpacks_ints() {
let packed = 0b0001_0000_0000_1000_0000_0100_0000_0010;
let (r, g, b, a) = utils::unpack_color(packed);
assert_eq!(vec![2, 4, 8, 16], vec![r, g, b, a]);
}
#[test]
fn packs_ints_idempotently() {
let r = 2;
let g = 4;
let b = 8;
let a = 255;
let color = utils::pack_color_rgba(r, g, b, a);
let (rc, gc, bc, ac) = utils::unpack_color(color);
assert_eq!(vec![r, g, b, a], vec![rc, gc, bc, ac]);
}
} | .arg("out")
.output()
.expect("failed to create directory"); | random_line_split |
main.rs | #![allow(dead_code)]
extern crate doom_iow;
use doom_iow::*;
use std::f32;
use std::process::Command;
use minifb;
// returns the RGBA color corresponding to UV(hitx, hity) on tex_walls
fn wall_x_texcoord(hitx: f32, hity: f32, tex_walls: &Texture) -> i32 {
let x = hitx - f32::floor(hitx + 0.5);
let y = hity - f32::floor(hity + 0.5);
let x_texcoord: i32 = if f32::abs(y) > f32::abs(x) {
(y * tex_walls.size as f32) as i32
} else {
(x * tex_walls.size as f32) as i32
};
let x_texcoord = if x_texcoord < 0 {
x_texcoord + tex_walls.size as i32
} else {
x_texcoord
};
assert!(x_texcoord >= 0 && x_texcoord < tex_walls.size as i32);
x_texcoord
}
fn draw_sprite(
sprite: &Sprite,
depth_buffer: &[f32],
fb: &mut Framebuffer,
player: &Player,
tex_sprites: &Texture,
) -> Result<(), FrameError> {
//absolute direction from player to sprite (rads)
let mut sprite_dir = f32::atan2(sprite.y - player.y, sprite.x - player.x);
//remap to range [-pi, pi]
while sprite_dir > f32::consts::PI { sprite_dir -= 2.0 * f32::consts::PI; }
while sprite_dir < -f32::consts::PI { sprite_dir += 2.0 * f32::consts::PI; }
//distance from player to sprite
// let sprite_dist =
// f32::sqrt(f32::powi(player.x - sprite.x, 2) + f32::powi(player.y - sprite.y, 2));
// let sprite_screen_size = f32::min(2000.0, fb.h as f32 / sprite_dist) as i32;
let sprite_screen_size = f32::min(1000.0, fb.h as f32/sprite.player_dist) as i32;
let screen_size = fb.w as i32 / 2;
let h_offset: i32 = ((sprite_dir - player.get_a()) * (fb.w as f32/2.0)/(player.fov) +
(fb.w as f32/2.0)/2.0 - (sprite_screen_size as f32)/2.0) as i32;
let v_offset: i32 = (fb.h as i32/2 - sprite_screen_size/2) as i32;
// println!("h_offset = {} = ({} - {}) * {}/2/{} + {}/2/2 - {}/2", h_offset, sprite_dir, player.a, fb.w, player.fov, fb.w, sprite_screen_size);
for i in 0..sprite_screen_size {
if h_offset+i<0 || h_offset+i >= screen_size { continue; }
if depth_buffer[(h_offset+i) as usize] < sprite.player_dist { continue; }
for j in 0..sprite_screen_size {
if v_offset+j<0 || v_offset+j >= fb.h as i32 { continue; }
let color = tex_sprites.get(i as u32*tex_sprites.size/sprite_screen_size as u32,
j as u32*tex_sprites.size/sprite_screen_size as u32, sprite.tex_id)
.unwrap();
let (r, g, b, a) = utils::unpack_color(color);
if a > 128 {
let color = utils::pack_color_bgra(b, g, r, a);
fb.set_pixel(fb.w/2 + (h_offset+i) as usize, (v_offset+j) as usize, color)?;
}
}
}
Ok(())
}
fn map_show_sprite(sprite: &Sprite, fb: &mut Framebuffer, map: &Map) -> Result<(), FrameError> {
//(rect_w, rect_h) == size of one map tile
let rect_w = (fb.w / (map.w as usize * 2)) as f32;
let rect_h = (fb.h / map.h as usize) as f32;
fb.draw_rectangle(
(sprite.x * rect_w - 3.0) as usize,
(sprite.y * rect_h - 3.0) as usize,
6,
6,
// utils::pack_color_rgb(255, 0, 0),
utils::pack_color_bgra(0, 0, 255, 255),
)
}
fn render(
fb: &mut Framebuffer,
map: &Map,
player: &Player,
sprites: &mut Vec<Sprite>, // will change order of sprites according to distance from player
tex_walls: &Texture,
tex_monsters: &Texture,
) -> Result<(), FrameError> {
// fb.clear(utils::pack_color_rgb(249, 209, 152));
fb.clear(utils::pack_color_bgra(152, 209, 249, 255));
let rect_w = fb.w / (map.w as usize * 2); //size of one map cell on the screen
let rect_h = fb.h / map.h as usize;
// draw overhead map
for j in 0..map.h {
for i in 0..map.w {
if map.is_empty(i, j) {
continue; //skip empty spaces
}
let rect_x = i as usize * rect_w;
let rect_y = j as usize * rect_h;
let texid = map.get(i, j).expect("i, j not in map range");
fb.draw_rectangle(
rect_x,
rect_y,
rect_w,
rect_h,
tex_walls.get(0, 0, texid).expect("no texture at texid"),
)?;
}
}
let mut depth_buffer = vec![1e3; (fb.w/2) as usize];
for i in 0..fb.w / 2 {
//cast field of vision on map AND generate 3D view
let angle: f32 = player.get_a() - player.fov / 2. + player.fov * i as f32 / (fb.w / 2) as f32;
for t in 0..2000 {
//since Rust doesn't allow step by float, remap so step==1
let t = t as f32 / 100.; //then transform back to original range
let x = player.x + t * angle.cos();
let y = player.y + t * angle.sin();
// draw the visibility cone on the map
fb.set_pixel(
(x * rect_w as f32) as usize,
(y * rect_h as f32) as usize,
// utils::pack_color_rgb(160, 160, 160),
utils::pack_color_bgra(160, 160, 160, 255),
)
.expect("Could not set pixel");
            // if this map tile is empty, keep marching; a non-empty tile means the ray hit a wall
if map.is_empty(x as u32, y as u32) {
continue;
}
// hit a wall
let texid = map
.get(x as u32, y as u32)
.expect("Cannot index this map tile");
assert!(texid < tex_walls.count);
let distance = t * f32::cos(angle - player.get_a());
depth_buffer[i as usize] = distance;
let column_height = (fb.h as f32 / distance) as u32;
let x_texcoord = wall_x_texcoord(x, y, tex_walls);
let column = tex_walls
.get_scaled_column(texid, x_texcoord as u32, column_height)
.expect("Cannot retrieve scaled column");
let pix_x = i + fb.w / 2;
for j in 0..column_height {
let pix_y = j as usize + fb.h / 2 - column_height as usize / 2;
if pix_y < fb.h {
fb.set_pixel(pix_x, pix_y, column[j as usize])
.expect("Could not set pixel");
}
}
break;
}
}
// update distances from sprites to player
for sprite in sprites.iter_mut() {
sprite.player_dist = f32::sqrt(f32::powi(player.x - sprite.x, 2) + f32::powi(player.y - sprite.y, 2));
}
// sort sprites in reverse order of distance to player
sprites.sort_unstable_by(|lhs, rhs| rhs.player_dist.partial_cmp(&lhs.player_dist).unwrap());
// render sprites on map
    for sprite in sprites.iter() {
map_show_sprite(sprite, fb, &map)?;
draw_sprite(sprite, &depth_buffer, fb, &player, &tex_monsters)?;
}
Ok(())
}
fn main() -> std::io::Result<()> {
// TODO: unfuck colors
// TODO: create variable color schemes (RGBA vs BGRA)
// TODO: cleanup code
let mut fb = Framebuffer::new(1024, 512);
let mut window = minifb::Window::new("doom-iow", fb.w, fb.h, minifb::WindowOptions::default()).unwrap();
let mut player = Player::new (
3.456,
2.345,
1.523,
std::f32::consts::PI / 3.,
);
let map = match Map::init(16, 16) {
Ok(m) => m,
Err(_) => {
panic!("Could not open map");
}
};
let tex_walls = Texture::new("./walltex.png").expect("Could not open wall texture");
let tex_monsters = Texture::new("./monsters.png").expect("Could not open monster texture");
let mut sprites = vec![
Sprite::new(3.523, 3.812, 2, 0.0),
Sprite::new(1.834, 8.765, 0, 0.0),
Sprite::new(5.323, 5.365, 1, 0.0),
Sprite::new(4.123, 10.265, 1, 0.0),
];
make_gif(&mut player, &mut fb, &map, &mut sprites, &tex_walls, &tex_monsters).unwrap();
    while window.is_open() && !window.is_key_down(minifb::Key::Escape) {
render(&mut fb, &map, &player, &mut sprites, &tex_walls, &tex_monsters).unwrap();
player.set_a(player.get_a() - 0.1 * (2. * std::f32::consts::PI / 360.));
window.update_with_buffer(fb.img.as_slice()).unwrap();
}
Ok(())
}
fn make_gif(player: &mut Player, fb: &mut Framebuffer, map: &Map, sprites: &mut Vec<Sprite>,
tex_walls: &Texture, tex_monsters: &Texture) -> Result<(), std::io::Error> {
//clear the /out folder
Command::new("rm")
.arg("-rf")
.arg("out/")
.output()
.expect("failed to clear out directory");
//create new /out folder
Command::new("mkdir")
.arg("out")
.output()
.expect("failed to create directory");
for frame in 0..360 {
// for frame in 0..5 {
// for frame in 0..1 {
let output_path = "./out/";
let ss = format!("{}{:05}.ppm", output_path, frame);
// player.a -= 2. * std::f32::consts::PI / 360.;
player.set_a( player.get_a() - (2. * std::f32::consts::PI / 360.) );
render(fb, &map, &player, sprites, &tex_walls, &tex_monsters).expect("Could not render image");
utils::drop_ppm_image(ss.as_str(), &fb.img, fb.w as usize, fb.h as usize)
.expect("Could not drop image");
}
println!("Rendered all frames, collecting into gif...");
let output = Command::new("convert")
.args(&["-delay", "10", "-loop", "0", "*.ppm", "rendered.gif"])
.current_dir("out/")
.output()
.expect("Could not start process");
println!("Status: {}", output.status);
println!("Stdout: {}", String::from_utf8_lossy(&output.stdout));
println!("Stderr: {}", String::from_utf8_lossy(&output.stderr));
println!("done");
//open results in Finder
Command::new("open")
.arg("out/")
.output()
.expect("Could not open folder");
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn packs_ints() {
let r = 2;
let g = 4;
let b = 8;
let a = 16;
let packed = utils::pack_color_rgba(r, g, b, a);
assert_eq!(packed, 0b0001_0000_0000_1000_0000_0100_0000_0010);
}
#[test]
fn unpacks_ints() {
let packed = 0b0001_0000_0000_1000_0000_0100_0000_0010;
let (r, g, b, a) = utils::unpack_color(packed);
assert_eq!(vec![2, 4, 8, 16], vec![r, g, b, a]);
}
#[test]
fn packs_ints_idempotently() |
}
| {
let r = 2;
let g = 4;
let b = 8;
let a = 255;
let color = utils::pack_color_rgba(r, g, b, a);
let (rc, gc, bc, ac) = utils::unpack_color(color);
assert_eq!(vec![r, g, b, a], vec![rc, gc, bc, ac]);
} | identifier_body |
main.rs | #![allow(dead_code)]
extern crate doom_iow;
use doom_iow::*;
use std::f32;
use std::process::Command;
use minifb;
// returns the RGBA color corresponding to UV(hitx, hity) on tex_walls
fn wall_x_texcoord(hitx: f32, hity: f32, tex_walls: &Texture) -> i32 {
let x = hitx - f32::floor(hitx + 0.5);
let y = hity - f32::floor(hity + 0.5);
let x_texcoord: i32 = if f32::abs(y) > f32::abs(x) {
(y * tex_walls.size as f32) as i32
} else {
(x * tex_walls.size as f32) as i32
};
let x_texcoord = if x_texcoord < 0 {
x_texcoord + tex_walls.size as i32
} else {
x_texcoord
};
assert!(x_texcoord >= 0 && x_texcoord < tex_walls.size as i32);
x_texcoord
}
fn draw_sprite(
sprite: &Sprite,
depth_buffer: &[f32],
fb: &mut Framebuffer,
player: &Player,
tex_sprites: &Texture,
) -> Result<(), FrameError> {
//absolute direction from player to sprite (rads)
let mut sprite_dir = f32::atan2(sprite.y - player.y, sprite.x - player.x);
//remap to range [-pi, pi]
while sprite_dir > f32::consts::PI { sprite_dir -= 2.0 * f32::consts::PI; }
while sprite_dir < -f32::consts::PI { sprite_dir += 2.0 * f32::consts::PI; }
//distance from player to sprite
// let sprite_dist =
// f32::sqrt(f32::powi(player.x - sprite.x, 2) + f32::powi(player.y - sprite.y, 2));
// let sprite_screen_size = f32::min(2000.0, fb.h as f32 / sprite_dist) as i32;
let sprite_screen_size = f32::min(1000.0, fb.h as f32/sprite.player_dist) as i32;
let screen_size = fb.w as i32 / 2;
let h_offset: i32 = ((sprite_dir - player.get_a()) * (fb.w as f32/2.0)/(player.fov) +
(fb.w as f32/2.0)/2.0 - (sprite_screen_size as f32)/2.0) as i32;
let v_offset: i32 = (fb.h as i32/2 - sprite_screen_size/2) as i32;
// println!("h_offset = {} = ({} - {}) * {}/2/{} + {}/2/2 - {}/2", h_offset, sprite_dir, player.a, fb.w, player.fov, fb.w, sprite_screen_size);
for i in 0..sprite_screen_size {
if h_offset+i<0 || h_offset+i >= screen_size { continue; }
if depth_buffer[(h_offset+i) as usize] < sprite.player_dist { continue; }
for j in 0..sprite_screen_size {
if v_offset+j<0 || v_offset+j >= fb.h as i32 { continue; }
let color = tex_sprites.get(i as u32*tex_sprites.size/sprite_screen_size as u32,
j as u32*tex_sprites.size/sprite_screen_size as u32, sprite.tex_id)
.unwrap();
let (r, g, b, a) = utils::unpack_color(color);
if a > 128 {
let color = utils::pack_color_bgra(b, g, r, a);
fb.set_pixel(fb.w/2 + (h_offset+i) as usize, (v_offset+j) as usize, color)?;
}
}
}
Ok(())
}
fn map_show_sprite(sprite: &Sprite, fb: &mut Framebuffer, map: &Map) -> Result<(), FrameError> {
//(rect_w, rect_h) == size of one map tile
let rect_w = (fb.w / (map.w as usize * 2)) as f32;
let rect_h = (fb.h / map.h as usize) as f32;
fb.draw_rectangle(
(sprite.x * rect_w - 3.0) as usize,
(sprite.y * rect_h - 3.0) as usize,
6,
6,
// utils::pack_color_rgb(255, 0, 0),
utils::pack_color_bgra(0, 0, 255, 255),
)
}
fn render(
fb: &mut Framebuffer,
map: &Map,
player: &Player,
sprites: &mut Vec<Sprite>, // will change order of sprites according to distance from player
tex_walls: &Texture,
tex_monsters: &Texture,
) -> Result<(), FrameError> {
// fb.clear(utils::pack_color_rgb(249, 209, 152));
fb.clear(utils::pack_color_bgra(152, 209, 249, 255));
let rect_w = fb.w / (map.w as usize * 2); //size of one map cell on the screen
let rect_h = fb.h / map.h as usize;
// draw overhead map
for j in 0..map.h {
for i in 0..map.w {
if map.is_empty(i, j) |
let rect_x = i as usize * rect_w;
let rect_y = j as usize * rect_h;
let texid = map.get(i, j).expect("i, j not in map range");
fb.draw_rectangle(
rect_x,
rect_y,
rect_w,
rect_h,
tex_walls.get(0, 0, texid).expect("no texture at texid"),
)?;
}
}
let mut depth_buffer = vec![1e3; (fb.w/2) as usize];
for i in 0..fb.w / 2 {
//cast field of vision on map AND generate 3D view
let angle: f32 = player.get_a() - player.fov / 2. + player.fov * i as f32 / (fb.w / 2) as f32;
for t in 0..2000 {
//since Rust doesn't allow step by float, remap so step==1
let t = t as f32 / 100.; //then transform back to original range
let x = player.x + t * angle.cos();
let y = player.y + t * angle.sin();
// draw the visibility cone on the map
fb.set_pixel(
(x * rect_w as f32) as usize,
(y * rect_h as f32) as usize,
// utils::pack_color_rgb(160, 160, 160),
utils::pack_color_bgra(160, 160, 160, 255),
)
.expect("Could not set pixel");
            // if this map tile is empty, keep marching; a non-empty tile means the ray hit a wall
if map.is_empty(x as u32, y as u32) {
continue;
}
// hit a wall
let texid = map
.get(x as u32, y as u32)
.expect("Cannot index this map tile");
assert!(texid < tex_walls.count);
let distance = t * f32::cos(angle - player.get_a());
depth_buffer[i as usize] = distance;
let column_height = (fb.h as f32 / distance) as u32;
let x_texcoord = wall_x_texcoord(x, y, tex_walls);
let column = tex_walls
.get_scaled_column(texid, x_texcoord as u32, column_height)
.expect("Cannot retrieve scaled column");
let pix_x = i + fb.w / 2;
for j in 0..column_height {
let pix_y = j as usize + fb.h / 2 - column_height as usize / 2;
if pix_y < fb.h {
fb.set_pixel(pix_x, pix_y, column[j as usize])
.expect("Could not set pixel");
}
}
break;
}
}
// update distances from sprites to player
for sprite in sprites.iter_mut() {
sprite.player_dist = f32::sqrt(f32::powi(player.x - sprite.x, 2) + f32::powi(player.y - sprite.y, 2));
}
// sort sprites in reverse order of distance to player
sprites.sort_unstable_by(|lhs, rhs| rhs.player_dist.partial_cmp(&lhs.player_dist).unwrap());
// render sprites on map
    for sprite in sprites.iter() {
map_show_sprite(sprite, fb, &map)?;
draw_sprite(sprite, &depth_buffer, fb, &player, &tex_monsters)?;
}
Ok(())
}
fn main() -> std::io::Result<()> {
// TODO: unfuck colors
// TODO: create variable color schemes (RGBA vs BGRA)
// TODO: cleanup code
let mut fb = Framebuffer::new(1024, 512);
let mut window = minifb::Window::new("doom-iow", fb.w, fb.h, minifb::WindowOptions::default()).unwrap();
let mut player = Player::new (
3.456,
2.345,
1.523,
std::f32::consts::PI / 3.,
);
let map = match Map::init(16, 16) {
Ok(m) => m,
Err(_) => {
panic!("Could not open map");
}
};
let tex_walls = Texture::new("./walltex.png").expect("Could not open wall texture");
let tex_monsters = Texture::new("./monsters.png").expect("Could not open monster texture");
let mut sprites = vec![
Sprite::new(3.523, 3.812, 2, 0.0),
Sprite::new(1.834, 8.765, 0, 0.0),
Sprite::new(5.323, 5.365, 1, 0.0),
Sprite::new(4.123, 10.265, 1, 0.0),
];
make_gif(&mut player, &mut fb, &map, &mut sprites, &tex_walls, &tex_monsters).unwrap();
    while window.is_open() && !window.is_key_down(minifb::Key::Escape) {
render(&mut fb, &map, &player, &mut sprites, &tex_walls, &tex_monsters).unwrap();
player.set_a(player.get_a() - 0.1 * (2. * std::f32::consts::PI / 360.));
window.update_with_buffer(fb.img.as_slice()).unwrap();
}
Ok(())
}
fn make_gif(player: &mut Player, fb: &mut Framebuffer, map: &Map, sprites: &mut Vec<Sprite>,
tex_walls: &Texture, tex_monsters: &Texture) -> Result<(), std::io::Error> {
//clear the /out folder
Command::new("rm")
.arg("-rf")
.arg("out/")
.output()
.expect("failed to clear out directory");
//create new /out folder
Command::new("mkdir")
.arg("out")
.output()
.expect("failed to create directory");
for frame in 0..360 {
// for frame in 0..5 {
// for frame in 0..1 {
let output_path = "./out/";
let ss = format!("{}{:05}.ppm", output_path, frame);
// player.a -= 2. * std::f32::consts::PI / 360.;
player.set_a( player.get_a() - (2. * std::f32::consts::PI / 360.) );
render(fb, &map, &player, sprites, &tex_walls, &tex_monsters).expect("Could not render image");
utils::drop_ppm_image(ss.as_str(), &fb.img, fb.w as usize, fb.h as usize)
.expect("Could not drop image");
}
println!("Rendered all frames, collecting into gif...");
let output = Command::new("convert")
.args(&["-delay", "10", "-loop", "0", "*.ppm", "rendered.gif"])
.current_dir("out/")
.output()
.expect("Could not start process");
println!("Status: {}", output.status);
println!("Stdout: {}", String::from_utf8_lossy(&output.stdout));
println!("Stderr: {}", String::from_utf8_lossy(&output.stderr));
println!("done");
//open results in Finder
Command::new("open")
.arg("out/")
.output()
.expect("Could not open folder");
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn packs_ints() {
let r = 2;
let g = 4;
let b = 8;
let a = 16;
let packed = utils::pack_color_rgba(r, g, b, a);
assert_eq!(packed, 0b0001_0000_0000_1000_0000_0100_0000_0010);
}
#[test]
fn unpacks_ints() {
let packed = 0b0001_0000_0000_1000_0000_0100_0000_0010;
let (r, g, b, a) = utils::unpack_color(packed);
assert_eq!(vec![2, 4, 8, 16], vec![r, g, b, a]);
}
#[test]
fn packs_ints_idempotently() {
let r = 2;
let g = 4;
let b = 8;
let a = 255;
let color = utils::pack_color_rgba(r, g, b, a);
let (rc, gc, bc, ac) = utils::unpack_color(color);
assert_eq!(vec![r, g, b, a], vec![rc, gc, bc, ac]);
}
}
| {
continue; //skip empty spaces
} | conditional_block |
main.rs | #![allow(dead_code)]
extern crate doom_iow;
use doom_iow::*;
use std::f32;
use std::process::Command;
use minifb;
// returns the RGBA color corresponding to UV(hitx, hity) on tex_walls
fn wall_x_texcoord(hitx: f32, hity: f32, tex_walls: &Texture) -> i32 {
let x = hitx - f32::floor(hitx + 0.5);
let y = hity - f32::floor(hity + 0.5);
let x_texcoord: i32 = if f32::abs(y) > f32::abs(x) {
(y * tex_walls.size as f32) as i32
} else {
(x * tex_walls.size as f32) as i32
};
let x_texcoord = if x_texcoord < 0 {
x_texcoord + tex_walls.size as i32
} else {
x_texcoord
};
assert!(x_texcoord >= 0 && x_texcoord < tex_walls.size as i32);
x_texcoord
}
fn draw_sprite(
sprite: &Sprite,
depth_buffer: &[f32],
fb: &mut Framebuffer,
player: &Player,
tex_sprites: &Texture,
) -> Result<(), FrameError> {
//absolute direction from player to sprite (rads)
let mut sprite_dir = f32::atan2(sprite.y - player.y, sprite.x - player.x);
//remap to range [-pi, pi]
while sprite_dir > f32::consts::PI { sprite_dir -= 2.0 * f32::consts::PI; }
while sprite_dir < -f32::consts::PI { sprite_dir += 2.0 * f32::consts::PI; }
//distance from player to sprite
// let sprite_dist =
// f32::sqrt(f32::powi(player.x - sprite.x, 2) + f32::powi(player.y - sprite.y, 2));
// let sprite_screen_size = f32::min(2000.0, fb.h as f32 / sprite_dist) as i32;
let sprite_screen_size = f32::min(1000.0, fb.h as f32/sprite.player_dist) as i32;
let screen_size = fb.w as i32 / 2;
let h_offset: i32 = ((sprite_dir - player.get_a()) * (fb.w as f32/2.0)/(player.fov) +
(fb.w as f32/2.0)/2.0 - (sprite_screen_size as f32)/2.0) as i32;
let v_offset: i32 = (fb.h as i32/2 - sprite_screen_size/2) as i32;
// println!("h_offset = {} = ({} - {}) * {}/2/{} + {}/2/2 - {}/2", h_offset, sprite_dir, player.a, fb.w, player.fov, fb.w, sprite_screen_size);
for i in 0..sprite_screen_size {
if h_offset+i<0 || h_offset+i >= screen_size { continue; }
if depth_buffer[(h_offset+i) as usize] < sprite.player_dist { continue; }
for j in 0..sprite_screen_size {
if v_offset+j<0 || v_offset+j >= fb.h as i32 { continue; }
let color = tex_sprites.get(i as u32*tex_sprites.size/sprite_screen_size as u32,
j as u32*tex_sprites.size/sprite_screen_size as u32, sprite.tex_id)
.unwrap();
let (r, g, b, a) = utils::unpack_color(color);
if a > 128 {
let color = utils::pack_color_bgra(b, g, r, a);
fb.set_pixel(fb.w/2 + (h_offset+i) as usize, (v_offset+j) as usize, color)?;
}
}
}
Ok(())
}
fn map_show_sprite(sprite: &Sprite, fb: &mut Framebuffer, map: &Map) -> Result<(), FrameError> {
//(rect_w, rect_h) == size of one map tile
let rect_w = (fb.w / (map.w as usize * 2)) as f32;
let rect_h = (fb.h / map.h as usize) as f32;
fb.draw_rectangle(
(sprite.x * rect_w - 3.0) as usize,
(sprite.y * rect_h - 3.0) as usize,
6,
6,
// utils::pack_color_rgb(255, 0, 0),
utils::pack_color_bgra(0, 0, 255, 255),
)
}
fn render(
fb: &mut Framebuffer,
map: &Map,
player: &Player,
sprites: &mut Vec<Sprite>, // will change order of sprites according to distance from player
tex_walls: &Texture,
tex_monsters: &Texture,
) -> Result<(), FrameError> {
// fb.clear(utils::pack_color_rgb(249, 209, 152));
fb.clear(utils::pack_color_bgra(152, 209, 249, 255));
let rect_w = fb.w / (map.w as usize * 2); //size of one map cell on the screen
let rect_h = fb.h / map.h as usize;
// draw overhead map
for j in 0..map.h {
for i in 0..map.w {
if map.is_empty(i, j) {
continue; //skip empty spaces
}
let rect_x = i as usize * rect_w;
let rect_y = j as usize * rect_h;
let texid = map.get(i, j).expect("i, j not in map range");
fb.draw_rectangle(
rect_x,
rect_y,
rect_w,
rect_h,
tex_walls.get(0, 0, texid).expect("no texture at texid"),
)?;
}
}
let mut depth_buffer = vec![1e3; (fb.w/2) as usize];
for i in 0..fb.w / 2 {
//cast field of vision on map AND generate 3D view
let angle: f32 = player.get_a() - player.fov / 2. + player.fov * i as f32 / (fb.w / 2) as f32;
for t in 0..2000 {
//since Rust doesn't allow step by float, remap so step==1
let t = t as f32 / 100.; //then transform back to original range
let x = player.x + t * angle.cos();
let y = player.y + t * angle.sin();
// draw the visibility cone on the map
fb.set_pixel(
(x * rect_w as f32) as usize,
(y * rect_h as f32) as usize,
// utils::pack_color_rgb(160, 160, 160),
utils::pack_color_bgra(160, 160, 160, 255),
)
.expect("Could not set pixel");
            // if this map tile is empty, keep marching; a non-empty tile means the ray hit a wall
if map.is_empty(x as u32, y as u32) {
continue;
}
// hit a wall
let texid = map
.get(x as u32, y as u32)
.expect("Cannot index this map tile");
assert!(texid < tex_walls.count);
let distance = t * f32::cos(angle - player.get_a());
depth_buffer[i as usize] = distance;
let column_height = (fb.h as f32 / distance) as u32;
let x_texcoord = wall_x_texcoord(x, y, tex_walls);
let column = tex_walls
.get_scaled_column(texid, x_texcoord as u32, column_height)
.expect("Cannot retrieve scaled column");
let pix_x = i + fb.w / 2;
for j in 0..column_height {
let pix_y = j as usize + fb.h / 2 - column_height as usize / 2;
if pix_y < fb.h {
fb.set_pixel(pix_x, pix_y, column[j as usize])
.expect("Could not set pixel");
}
}
break;
}
}
// update distances from sprites to player
for sprite in sprites.iter_mut() {
sprite.player_dist = f32::sqrt(f32::powi(player.x - sprite.x, 2) + f32::powi(player.y - sprite.y, 2));
}
// sort sprites in reverse order of distance to player
sprites.sort_unstable_by(|lhs, rhs| rhs.player_dist.partial_cmp(&lhs.player_dist).unwrap());
// render sprites on map
    for sprite in sprites.iter() {
map_show_sprite(sprite, fb, &map)?;
draw_sprite(sprite, &depth_buffer, fb, &player, &tex_monsters)?;
}
Ok(())
}
fn main() -> std::io::Result<()> {
// TODO: unfuck colors
// TODO: create variable color schemes (RGBA vs BGRA)
// TODO: cleanup code
let mut fb = Framebuffer::new(1024, 512);
let mut window = minifb::Window::new("doom-iow", fb.w, fb.h, minifb::WindowOptions::default()).unwrap();
let mut player = Player::new (
3.456,
2.345,
1.523,
std::f32::consts::PI / 3.,
);
let map = match Map::init(16, 16) {
Ok(m) => m,
Err(_) => {
panic!("Could not open map");
}
};
let tex_walls = Texture::new("./walltex.png").expect("Could not open wall texture");
let tex_monsters = Texture::new("./monsters.png").expect("Could not open monster texture");
let mut sprites = vec![
Sprite::new(3.523, 3.812, 2, 0.0),
Sprite::new(1.834, 8.765, 0, 0.0),
Sprite::new(5.323, 5.365, 1, 0.0),
Sprite::new(4.123, 10.265, 1, 0.0),
];
make_gif(&mut player, &mut fb, &map, &mut sprites, &tex_walls, &tex_monsters).unwrap();
    while window.is_open() && !window.is_key_down(minifb::Key::Escape) {
render(&mut fb, &map, &player, &mut sprites, &tex_walls, &tex_monsters).unwrap();
player.set_a(player.get_a() - 0.1 * (2. * std::f32::consts::PI / 360.));
window.update_with_buffer(fb.img.as_slice()).unwrap();
}
Ok(())
}
fn make_gif(player: &mut Player, fb: &mut Framebuffer, map: &Map, sprites: &mut Vec<Sprite>,
tex_walls: &Texture, tex_monsters: &Texture) -> Result<(), std::io::Error> {
//clear the /out folder
Command::new("rm")
.arg("-rf")
.arg("out/")
.output()
.expect("failed to clear out directory");
//create new /out folder
Command::new("mkdir")
.arg("out")
.output()
.expect("failed to create directory");
for frame in 0..360 {
// for frame in 0..5 {
// for frame in 0..1 {
let output_path = "./out/";
let ss = format!("{}{:05}.ppm", output_path, frame);
// player.a -= 2. * std::f32::consts::PI / 360.;
player.set_a( player.get_a() - (2. * std::f32::consts::PI / 360.) );
render(fb, &map, &player, sprites, &tex_walls, &tex_monsters).expect("Could not render image");
utils::drop_ppm_image(ss.as_str(), &fb.img, fb.w as usize, fb.h as usize)
.expect("Could not drop image");
}
println!("Rendered all frames, collecting into gif...");
let output = Command::new("convert")
.args(&["-delay", "10", "-loop", "0", "*.ppm", "rendered.gif"])
.current_dir("out/")
.output()
.expect("Could not start process");
println!("Status: {}", output.status);
println!("Stdout: {}", String::from_utf8_lossy(&output.stdout));
println!("Stderr: {}", String::from_utf8_lossy(&output.stderr));
println!("done");
//open results in Finder
Command::new("open")
.arg("out/")
.output()
.expect("Could not open folder");
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn packs_ints() {
let r = 2;
let g = 4;
let b = 8;
let a = 16;
let packed = utils::pack_color_rgba(r, g, b, a);
assert_eq!(packed, 0b0001_0000_0000_1000_0000_0100_0000_0010);
}
#[test]
fn | () {
let packed = 0b0001_0000_0000_1000_0000_0100_0000_0010;
let (r, g, b, a) = utils::unpack_color(packed);
assert_eq!(vec![2, 4, 8, 16], vec![r, g, b, a]);
}
#[test]
fn packs_ints_idempotently() {
let r = 2;
let g = 4;
let b = 8;
let a = 255;
let color = utils::pack_color_rgba(r, g, b, a);
let (rc, gc, bc, ac) = utils::unpack_color(color);
assert_eq!(vec![r, g, b, a], vec![rc, gc, bc, ac]);
}
}
| unpacks_ints | identifier_name |
container.rs | // Copyright 2021-2022 Sony Group Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
use crate::cgroup::{freeze, remove_cgroup_dir};
use crate::status::{self, get_current_container_state, Status};
use anyhow::{anyhow, Result};
use cgroups;
use cgroups::freezer::FreezerState;
use cgroups::hierarchies::is_cgroup2_unified_mode;
use nix::sys::signal::kill;
use nix::{
sys::signal::Signal,
sys::signal::SIGKILL,
unistd::{chdir, unlink, Pid},
};
use oci::{ContainerState, State as OCIState};
use procfs;
use rustjail::cgroups::fs::Manager as CgroupManager;
use rustjail::{
container::{BaseContainer, LinuxContainer, EXEC_FIFO_FILENAME},
process::{Process, ProcessOperations},
specconv::CreateOpts,
};
use scopeguard::defer;
use slog::{debug, info, Logger};
use std::{
env::current_dir,
fs,
path::{Path, PathBuf},
};
use kata_sys_util::hooks::HookStates;
pub const CONFIG_FILE_NAME: &str = "config.json";
#[derive(Debug, Copy, Clone, PartialEq)]
pub enum ContainerAction {
Create,
Start,
Run,
}
#[derive(Debug)]
pub struct Container {
pub status: Status,
pub state: ContainerState,
pub cgroup: cgroups::Cgroup,
}
// Container represents a container that is created by the container runtime.
impl Container {
pub fn load(state_root: &Path, id: &str) -> Result<Self> {
let status = Status::load(state_root, id)?;
let spec = status
.config
.spec
.as_ref()
.ok_or_else(|| anyhow!("spec config was not present"))?;
let linux = spec
.linux
.as_ref()
.ok_or_else(|| anyhow!("linux config was not present"))?;
let cpath = if linux.cgroups_path.is_empty() {
id.to_string()
} else {
linux
.cgroups_path
.clone()
.trim_start_matches('/')
.to_string()
};
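        // e.g. a spec cgroups_path of "/kata/<id>" (hypothetical value) becomes
        // "kata/<id>", the relative form expected by cgroups::Cgroup::load below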
let cgroup = cgroups::Cgroup::load(cgroups::hierarchies::auto(), cpath);
let state = get_current_container_state(&status, &cgroup)?;
Ok(Self {
status,
state,
cgroup,
})
}
pub fn processes(&self) -> Result<Vec<Pid>> {
let pids = self.cgroup.tasks();
let result = pids.iter().map(|x| Pid::from_raw(x.pid as i32)).collect();
Ok(result)
}
pub fn kill(&self, signal: Signal, all: bool) -> Result<()> {
if all {
let pids = self.processes()?;
for pid in pids {
                if !status::is_process_running(pid)? {
continue;
}
kill(pid, signal)?;
}
} else {
// If --all option is not specified and the container is stopped,
// kill operation generates an error in accordance with the OCI runtime spec.
if self.state == ContainerState::Stopped {
return Err(anyhow!(
"container {} can't be killed because it is {:?}",
self.status.id,
self.state
));
}
let pid = Pid::from_raw(self.status.pid);
if status::is_process_running(pid)? {
kill(pid, signal)?;
}
}
// For cgroup v1, killing a process in a frozen cgroup does nothing until it's thawed.
// Only thaw the cgroup for SIGKILL.
// Ref: https://github.com/opencontainers/runc/pull/3217
        if !is_cgroup2_unified_mode() && self.state == ContainerState::Paused && signal == SIGKILL {
freeze(&self.cgroup, FreezerState::Thawed)?;
}
Ok(())
}
pub async fn delete(&self, force: bool, logger: &Logger) -> Result<()> {
let status = &self.status;
let spec = status
.config
.spec
.as_ref()
.ok_or_else(|| anyhow!("spec config was not present in the status"))?;
let oci_state = OCIState {
version: status.oci_version.clone(),
id: status.id.clone(),
status: self.state,
pid: status.pid,
bundle: status
.bundle
.to_str()
.ok_or_else(|| anyhow!("invalid bundle path"))?
.to_string(),
annotations: spec.annotations.clone(),
};
if let Some(hooks) = spec.hooks.as_ref() {
info!(&logger, "Poststop Hooks");
let mut poststop_hookstates = HookStates::new();
poststop_hookstates.execute_hooks(&hooks.poststop, Some(oci_state.clone()))?;
}
match oci_state.status {
ContainerState::Stopped => {
self.destroy()?;
}
ContainerState::Created => {
// Kill an init process
self.kill(SIGKILL, false)?;
self.destroy()?;
}
_ => {
if force {
self.kill(SIGKILL, true)?;
self.destroy()?;
} else {
return Err(anyhow!(
"cannot delete container {} that is not stopped",
&status.id
));
}
}
}
Ok(())
}
pub fn pause(&self) -> Result<()> {
        if self.state != ContainerState::Running && self.state != ContainerState::Created {
return Err(anyhow!(
"failed to pause container: current status is: {:?}",
self.state
));
}
freeze(&self.cgroup, FreezerState::Frozen)?;
Ok(())
}
pub fn resume(&self) -> Result<()> {
        if self.state != ContainerState::Paused {
return Err(anyhow!(
"failed to resume container: current status is: {:?}",
self.state
));
}
freeze(&self.cgroup, FreezerState::Thawed)?;
Ok(())
}
pub fn destroy(&self) -> Result<()> {
remove_cgroup_dir(&self.cgroup)?;
self.status.remove_dir()
}
}
/// Used to run a process. If init is set, it will create a container and run the process in it.
/// If init is not set, it will run the process in an existing container.
#[derive(Debug)]
pub struct ContainerLauncher {
pub id: String,
pub bundle: PathBuf,
pub state_root: PathBuf,
pub init: bool,
pub runner: LinuxContainer,
pub pid_file: Option<PathBuf>,
}
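// Hypothetical usage sketch (argument values assumed, not taken from the
// original call sites): build the runner with `create_linux_container` defined
// below, then launch the init process.
//
//     let runner = create_linux_container(&id, &root, opts, None, &logger)?;
//     let mut launcher =
//         ContainerLauncher::new(&id, &bundle, &root, true, runner, None);
//     launcher.launch(ContainerAction::Run, &logger).await?;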
impl ContainerLauncher {
pub fn new(
id: &str,
bundle: &Path,
state_root: &Path,
init: bool,
runner: LinuxContainer,
pid_file: Option<PathBuf>,
) -> Self {
ContainerLauncher {
id: id.to_string(),
bundle: bundle.to_path_buf(),
state_root: state_root.to_path_buf(),
init,
runner,
pid_file,
}
}
/// Launch a process. For init containers, we will create a container. For non-init, it will join an existing container.
pub async fn launch(&mut self, action: ContainerAction, logger: &Logger) -> Result<()> {
if self.init {
self.spawn_container(action, logger).await?;
} else {
if action == ContainerAction::Create {
return Err(anyhow!(
"ContainerAction::Create is used for init-container only"
));
}
self.spawn_process(action, logger).await?;
}
if let Some(pid_file) = self.pid_file.as_ref() {
fs::write(
pid_file,
format!("{}", self.runner.get_process(self.id.as_str())?.pid()),
)?;
}
Ok(())
}
/// Create the container by invoking runner to spawn the first process and save status.
async fn spawn_container(&mut self, action: ContainerAction, logger: &Logger) -> Result<()> {
// State root path root/id has been created in LinuxContainer::new(),
// so we don't have to create it again.
// Spawn a new process in the container by using the agent's codes.
self.spawn_process(action, logger).await?;
let status = self.get_status()?;
status.save()?;
debug!(logger, "saved status is {:?}", status);
        // Clean up the fifo file created by LinuxContainer, which is used to block the created process.
if action == ContainerAction::Run || action == ContainerAction::Start {
let fifo_path = get_fifo_path(&status);
if fifo_path.exists() {
unlink(&fifo_path)?;
}
}
Ok(())
}
/// Generate rustjail::Process from OCI::Process
fn get_process(&self, logger: &Logger) -> Result<Process> {
let spec = self.runner.config.spec.as_ref().unwrap();
if spec.process.is_some() {
Ok(Process::new(
logger,
spec.process
.as_ref()
.ok_or_else(|| anyhow!("process config was not present in the spec file"))?,
// rustjail::LinuxContainer use the exec_id to identify processes in a container,
// so we can get the spawned process by ctr.get_process(exec_id) later.
// Since LinuxContainer is temporarily created to spawn one process in each runk invocation,
// we can use arbitrary string as the exec_id. Here we choose the container id.
&self.id,
self.init,
0,
)?)
} else {
Err(anyhow!("no process configuration"))
}
}
/// Spawn a new process in the container by invoking runner.
async fn spawn_process(&mut self, action: ContainerAction, logger: &Logger) -> Result<()> {
        // The agent chdirs to the bundle path before creating a LinuxContainer; do the same here.
let current_dir = current_dir()?;
chdir(&self.bundle)?;
defer! {
chdir(¤t_dir).unwrap();
}
let process = self.get_process(logger)?;
match action {
ContainerAction::Create => {
self.runner.start(process).await?;
}
ContainerAction::Start => {
self.runner.exec().await?;
}
ContainerAction::Run => {
self.runner.run(process).await?;
}
}
Ok(())
}
/// Generate runk specified Status
fn get_status(&self) -> Result<Status> {
let oci_state = self.runner.oci_state()?;
// read start time from /proc/<pid>/stat
let proc = procfs::process::Process::new(self.runner.init_process_pid)?;
let process_start_time = proc.stat()?.starttime;
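        // starttime is field 22 of /proc/<pid>/stat, measured in clock ticks
        // since boot; persisting it alongside the pid is a common guard against
        // confusing a recycled PID with the original init process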
Status::new(
&self.state_root,
&self.bundle,
oci_state,
process_start_time,
self.runner.created,
self.runner
.cgroup_manager
.as_ref()
.as_any()?
.downcast_ref::<CgroupManager>()
.unwrap()
.clone(),
self.runner.config.clone(),
)
}
}
pub fn create_linux_container(
id: &str,
root: &Path,
config: CreateOpts,
console_socket: Option<PathBuf>,
logger: &Logger,
) -> Result<LinuxContainer> {
let mut container = LinuxContainer::new(
id,
root.to_str()
.map(|s| s.to_string())
.ok_or_else(|| anyhow!("failed to convert bundle path"))?
.as_str(),
config,
logger,
)?;
if let Some(socket_path) = console_socket.as_ref() {
container.set_console_socket(socket_path)?;
}
Ok(container)
}
// Load rustjail's Linux container.
// "uid_map_path" and "gid_map_path" are always empty, so they are not set.
pub fn load_linux_container(
status: &Status,
console_socket: Option<PathBuf>,
logger: &Logger,
) -> Result<LinuxContainer> {
let mut container = LinuxContainer::new(
&status.id,
&status
.root
.to_str()
.map(|s| s.to_string())
.ok_or_else(|| anyhow!("failed to convert a root path"))?,
status.config.clone(),
logger,
)?;
if let Some(socket_path) = console_socket.as_ref() {
container.set_console_socket(socket_path)?;
}
container.init_process_pid = status.pid;
container.init_process_start_time = status.process_start_time;
container.created = status.created.into();
Ok(container)
}
pub fn get_config_path<P: AsRef<Path>>(bundle: P) -> PathBuf {
bundle.as_ref().join(CONFIG_FILE_NAME)
}
pub fn get_fifo_path(status: &Status) -> PathBuf {
status.root.join(&status.id).join(EXEC_FIFO_FILENAME)
} | use rustjail::container::EXEC_FIFO_FILENAME;
use std::path::PathBuf;
#[test]
fn test_get_config_path() {
let test_data = PathBuf::from(TEST_BUNDLE_PATH).join(CONFIG_FILE_NAME);
assert_eq!(get_config_path(TEST_BUNDLE_PATH), test_data);
}
#[test]
fn test_get_fifo_path() {
let test_data = PathBuf::from(TEST_STATE_ROOT_PATH)
.join(TEST_CONTAINER_ID)
.join(EXEC_FIFO_FILENAME);
let status = create_dummy_status();
assert_eq!(get_fifo_path(&status), test_data);
}
} |
#[cfg(test)]
mod tests {
use super::*;
use crate::utils::test_utils::*; | random_line_split |
event_queue.rs
use {Implementable, Proxy};
use std::any::Any;
use std::io::{Error as IoError, Result as IoResult};
use std::io::Write;
use std::ops::{Deref, DerefMut};
use std::os::raw::{c_int, c_void};
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, AtomicPtr};
pub use token_store::{Store as State, StoreProxy as StateProxy, Token as StateToken};
use wayland_sys::RUST_MANAGED;
use wayland_sys::client::*;
use wayland_sys::common::*;
type ProxyUserData = (
*mut EventQueueHandle,
Option<Box<Any>>,
Arc<(AtomicBool, AtomicPtr<()>)>,
);
/// Status of a registration attempt of a proxy.
pub enum RegisterStatus {
/// The proxy was properly registered to this event queue & handler.
Registered,
/// The proxy was not registered because it is not managed by `wayland-client`.
Unmanaged,
/// The proxy was not registered because it is already destroyed.
Dead,
}
/// Handle to an event queue
///
/// This handle gives you access to methods on an event queue
/// that are safe to do from within a callback.
///
/// They are also available on an `EventQueue` object via `Deref`.
pub struct EventQueueHandle {
state: State,
wlevq: Option<*mut wl_event_queue>,
}
impl EventQueueHandle {
/// Register a proxy to this event queue.
///
/// You are required to provide a valid implementation for this proxy
/// as well as some associated implementation data. This implementation
/// is expected to be a struct holding the various relevant
/// function pointers.
///
    /// This implementation data can typically contain indexes to state values
    /// that the implementation will need to work on.
    ///
    /// This overwrites any previously set implementation for this proxy.
    ///
    /// Returns appropriately and does nothing if this proxy is dead or already managed by
    /// something other than this library.
pub fn register<P, ID>(&mut self, proxy: &P, implementation: P::Implementation, idata: ID)
-> RegisterStatus
where
P: Proxy + Implementable<ID>,
        ID: 'static,
{
match proxy.status() {
::Liveness::Dead => return RegisterStatus::Dead,
::Liveness::Unmanaged => return RegisterStatus::Unmanaged,
::Liveness::Alive => { /* ok, we can continue */ }
}
unsafe {
let data: *mut ProxyUserData =
ffi_dispatch!(WAYLAND_CLIENT_HANDLE, wl_proxy_get_user_data, proxy.ptr()) as *mut _;
// This cast from *const to *mut is legit because we enforce that a proxy
// can only be assigned to a single EventQueue.
// (this is actually the whole point of the design of this lib)
(&mut *data).0 = self as *const _ as *mut _;
(&mut *data).1 = Some(Box::new((implementation, idata)) as Box<Any>);
// even if this call fails, we updated the user_data, so the new implementation is in place.
ffi_dispatch!(
WAYLAND_CLIENT_HANDLE,
wl_proxy_add_dispatcher,
proxy.ptr(),
dispatch_func::<P, ID>,
&RUST_MANAGED as *const _ as *const _,
data as *mut c_void
);
ffi_dispatch!(
WAYLAND_CLIENT_HANDLE,
wl_proxy_set_queue,
proxy.ptr(),
match self.wlevq {
Some(ptr) => ptr,
None => ::std::ptr::null_mut(),
}
);
}
RegisterStatus::Registered
}
/// Get a handle to the internal state
///
/// The returned guard object allows you to get references
/// to the handler objects you previously inserted in this
/// event queue.
pub fn state(&mut self) -> &mut State {
&mut self.state
}
}
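// Hedged usage sketch (added for exposition, not from the original source):
// registering a proxy ties it to an implementation plus implementation data,
// typically a state token. `wl_pointer` and `pointer_impl` below are assumed
// stand-ins for a concrete protocol object and its implementation struct, and
// the `insert()` call assumes the token-store API described above.
//
//     let token: StateToken<u32> = event_queue.state().insert(0u32);
//     match event_queue.register(&wl_pointer, pointer_impl, token) {
//         RegisterStatus::Registered => (),
//         RegisterStatus::Unmanaged | RegisterStatus::Dead => { /* events cannot be handled */ }
//     }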
/// An event queue managing wayland events
///
/// Each wayland object can receive events from the server. To handle these events
/// you must associate to these objects an implementation: a struct defined in their
/// respective module, in which you provide a set of functions that will handle each event.
///
/// Your implementation can also access a shared state managed by the event queue. See
/// the `State` struct and the `state()` method on `EventQueueHandle`. If you need this,
/// the way to do it is:
///
/// - insert your state value in the event queue state store; you are then provided with a
///   token to access it
/// - provide this token (you can clone it) as implementation data to any wayland object
///   that needs to access this state in its event callbacks.
///
/// The event queue also gives you control over the flow of the program, via the `dispatch()` and
/// `dispatch_pending()` methods.
pub struct EventQueue {
handle: Box<EventQueueHandle>,
display: *mut wl_display,
}
impl EventQueue {
/// Dispatches events from the internal buffer.
///
/// Dispatches all events to their appropriate handlers.
    /// If no events were in the internal buffer, it will block until
    /// some events are read, then dispatch them.
/// This process can insert events in the internal buffers of
/// other event queues.
///
/// If an error is returned, your connection with the wayland
/// compositor is probably lost.
pub fn dispatch(&mut self) -> IoResult<u32> {
let ret = match self.handle.wlevq {
Some(evq) => unsafe {
ffi_dispatch!(
WAYLAND_CLIENT_HANDLE,
wl_display_dispatch_queue,
self.display,
evq
)
},
None => unsafe { ffi_dispatch!(WAYLAND_CLIENT_HANDLE, wl_display_dispatch, self.display) },
};
if ret >= 0 {
Ok(ret as u32)
} else {
Err(IoError::last_os_error())
}
}
/// Dispatches pending events from the internal buffer.
///
/// Dispatches all events to their appropriate handlers.
    /// Never blocks; if no events were pending, simply returns
/// `Ok(0)`.
///
/// If an error is returned, your connection with the wayland
/// compositor is probably lost.
pub fn dispatch_pending(&mut self) -> IoResult<u32> {
let ret = match self.handle.wlevq {
Some(evq) => unsafe {
ffi_dispatch!(
WAYLAND_CLIENT_HANDLE,
wl_display_dispatch_queue_pending,
self.display,
evq
)
},
None => unsafe {
ffi_dispatch!(
WAYLAND_CLIENT_HANDLE,
wl_display_dispatch_pending,
self.display
)
},
};
if ret >= 0 {
Ok(ret as u32)
} else {
Err(IoError::last_os_error())
}
}
/// Synchronous roundtrip
///
    /// This call will cause a synchronous roundtrip with the wayland server. It will block until all
    /// pending requests of this queue are sent to the server and it has processed all of them and
    /// sent the appropriate events.
///
/// Handlers are called as a consequence.
///
/// On success returns the number of dispatched events.
pub fn sync_roundtrip(&mut self) -> IoResult<i32> {
let ret = unsafe {
match self.handle.wlevq {
Some(evtq) => ffi_dispatch!(
WAYLAND_CLIENT_HANDLE,
wl_display_roundtrip_queue,
self.display,
evtq
),
None => ffi_dispatch!(WAYLAND_CLIENT_HANDLE, wl_display_roundtrip, self.display),
}
};
if ret >= 0 {
Ok(ret)
} else {
Err(IoError::last_os_error())
}
}
    /// Prepare a concurrent read
///
/// Will declare your intention to read events from the server socket.
///
    /// Will return `None` if there are still some events awaiting dispatch on this event queue.
    /// In this case, you need to call `dispatch_pending()` before calling this method again.
    ///
    /// As long as the returned guard is in scope, no events can be dispatched to any event queue.
///
/// The guard can then be destroyed by two means:
///
/// - Calling its `cancel()` method (or letting it go out of scope): the read intention will
/// be cancelled
/// - Calling its `read_events()` method: will block until all existing guards are destroyed
/// by one of these methods, then events will be read and all blocked `read_events()` calls
/// will return.
///
/// This call will otherwise not block on the server socket if it is empty, and return
/// an io error `WouldBlock` in such cases.
pub fn prepare_read(&self) -> Option<ReadEventsGuard> {
        let ret = unsafe {
            match self.handle.wlevq {
                Some(evtq) => ffi_dispatch!(
WAYLAND_CLIENT_HANDLE,
wl_display_prepare_read_queue,
self.display,
evtq
),
None => ffi_dispatch!(WAYLAND_CLIENT_HANDLE, wl_display_prepare_read, self.display),
}
};
if ret >= 0 {
Some(ReadEventsGuard {
display: self.display,
})
} else {
None
}
}
}
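// Minimal blocking event-loop sketch (added for exposition, not from the
// original source); `default_connect()` is assumed to be the crate-level
// helper that yields a display and its main event queue.
//
//     let (_display, mut event_queue) = default_connect().expect("no wayland server");
//     event_queue.sync_roundtrip().expect("initial roundtrip failed"); // wait for globals
//     loop {
//         event_queue.dispatch().expect("connection lost"); // blocks until events arrive
//     }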
impl Deref for EventQueue {
type Target = EventQueueHandle;
fn deref(&self) -> &EventQueueHandle {
&*self.handle
}
}
impl DerefMut for EventQueue {
fn deref_mut(&mut self) -> &mut EventQueueHandle {
&mut *self.handle
}
}
/// A guard over a read intention.
///
/// See `EventQueue::prepare_read()` for details about its use.
pub struct ReadEventsGuard {
display: *mut wl_display,
}
impl ReadEventsGuard {
/// Read events
///
    /// Reads events from the server socket. If other `ReadEventsGuard`s exist, this will block
/// until they are all consumed or destroyed.
pub fn read_events(self) -> IoResult<i32> {
let ret = unsafe { ffi_dispatch!(WAYLAND_CLIENT_HANDLE, wl_display_read_events, self.display) };
// Don't run destructor that would cancel the read intent
::std::mem::forget(self);
if ret >= 0 {
Ok(ret)
} else {
Err(IoError::last_os_error())
}
}
/// Cancel the read
///
/// Will cancel the read intention associated with this guard. Never blocks.
///
    /// Has the same effect as letting the guard go out of scope.
pub fn cancel(self) {
// just run the destructor
}
}
impl Drop for ReadEventsGuard {
fn drop(&mut self) {
unsafe { ffi_dispatch!(WAYLAND_CLIENT_HANDLE, wl_display_cancel_read, self.display) }
}
}
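// Sketch of the two-step read protocol documented on prepare_read() (added
// for exposition, not from the original source): drain the local queue, then
// declare the read intent and either consume or cancel it.
//
//     event_queue.dispatch_pending().expect("dispatch failed");
//     if let Some(guard) = event_queue.prepare_read() {
//         guard.read_events().expect("read failed"); // or guard.cancel();
//     }
//     event_queue.dispatch_pending().expect("dispatch failed");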
pub unsafe fn create_event_queue(display: *mut wl_display, evq: Option<*mut wl_event_queue>) -> EventQueue {
EventQueue {
display: display,
handle: Box::new(EventQueueHandle {
state: State::new(),
wlevq: evq,
}),
}
}
unsafe extern "C" fn dispatch_func<P, ID>(_impl: *const c_void, proxy: *mut c_void, opcode: u32,
_msg: *const wl_message, args: *const wl_argument)
-> c_int
where
P: Proxy + Implementable<ID>,
    ID: 'static,
{
// sanity check, if it triggers, it is a bug
    if _impl != &RUST_MANAGED as *const _ as *const _ {
let _ = write!(
::std::io::stderr(),
"[wayland-client error] Dispatcher got called for a message on a non-managed object."
);
::libc::abort();
}
// We don't need to worry about panic-safeness, because if there is a panic,
// we'll abort the process, so no access to corrupted data is possible.
let ret = ::std::panic::catch_unwind(move || {
let proxy = P::from_ptr_initialized(proxy as *mut wl_proxy);
proxy.__dispatch_msg(opcode, args)
});
match ret {
Ok(Ok(())) => return 0, // all went well
Ok(Err(())) => {
// an unknown opcode was dispatched, this is not normal
let _ = write!(
::std::io::stderr(),
"[wayland-client error] Attempted to dispatch unknown opcode {} for {}, aborting.",
opcode,
P::interface_name()
);
::libc::abort();
}
Err(_) => {
            // a panic occurred
let _ = write!(
::std::io::stderr(),
"[wayland-client error] A handler for {} panicked, aborting.",
P::interface_name()
);
::libc::abort();
}
}
}
main.rs
#![feature(core)]
extern crate sdl2;
use std::rc::Rc;
use std::hash::{Hash, Hasher, Writer};
use std::cell::RefCell;
use std::collections::HashMap;
use std::collections::HashSet;
use std::ops::Add;
use std::num::ToPrimitive;
use self::sdl2::render::RenderDrawer;
use self::sdl2::rect::{Rect, Point};
use self::sdl2::pixels::Color;
mod graphics;
mod game;
use game::SpaceEmpire;
#[derive(Debug, Clone, PartialEq, Eq)]
struct Resources {
food: i32,
technology: i32,
gold: i32
}
impl Add for Resources {
type Output = Resources;
fn add(self, other:Resources) -> Resources {
Resources {
food: self.food + other.food,
technology: self.technology + other.technology,
gold: self.gold + other.gold
}
}
}
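// Small added check (for exposition, not in the original source): the Add
// impl above combines resources component-wise.
#[test]
fn resources_add_componentwise() {
    let a = Resources { food: 1, technology: 2, gold: 3 };
    let b = Resources { food: 4, technology: 5, gold: 6 };
    assert_eq!(a + b, Resources { food: 5, technology: 7, gold: 9 });
}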
impl Resources {
fn new() -> Resources {
Resources{food: 0, technology: 0, gold: 0}
}
}
#[derive(Debug, Clone)]
enum BuildingClass {
Farm,
Laboratory,
GoldMine
}
#[derive(Debug, Clone)]
struct Building {
class: BuildingClass,
production: Resources
}
impl Building {
fn new(class: BuildingClass) -> Building {
let production = match class {
BuildingClass::Farm => Resources { food: 5, technology: 0, gold: 0 },
BuildingClass::Laboratory => Resources { food: 0, technology: 2, gold: 0 },
BuildingClass::GoldMine => Resources { food: 0, technology: 0, gold: 8 }
};
Building { class: class, production: production }
}
fn produce(&self) -> Resources {
self.production.clone()
}
}
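// Added check (for exposition, not in the original source): produce() simply
// hands back the fixed production table chosen in Building::new().
#[test]
fn farm_produces_only_food() {
    let farm = Building::new(BuildingClass::Farm);
    assert_eq!(farm.produce(), Resources { food: 5, technology: 0, gold: 0 });
}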
#[derive(Hash, Eq, PartialEq, Copy)]
enum ShipClass {
Colony,
Scout,
Fighter
}
struct Ship {
class: ShipClass,
health: u32,
speed: u32,
damage: u32
}
impl Ship {
fn new(class: ShipClass) -> Ship {
match class {
ShipClass::Colony => Ship { class: class, health: 100, speed: 10, damage: 10},
ShipClass::Scout => Ship { class: class, health: 50, speed: 30, damage: 5},
ShipClass::Fighter => Ship { class: class, health: 150, speed: 10, damage: 100}
}
}
}
enum FleetLocation {
Moving, // from -> to, turns/needed_turns
Somewhere // exact location
}
struct Fleet {
ships: HashMap<ShipClass, Vec<Ship>>,
location: FleetLocation,
}
impl Fleet{
fn new() -> Fleet {
Fleet { ships: HashMap::new(), location: FleetLocation::Somewhere }
}
fn add(&mut self, ship: Ship) {
match self.ships.get(&ship.class) {
None => { self.ships.insert(ship.class, Vec::new()); },
Some(_) => ()
}
self.ships.get_mut(&ship.class).unwrap().push(ship);
}
fn merge(&mut self, fleet: Box<Fleet>) {
for (ship_class, ships) in fleet.ships.into_iter() {
for ship in ships.into_iter() {
self.add(ship);
}
}
}
fn size(&self) -> u32 {
let mut count = 0u32;
for ships in self.ships.values() {
count += ships.len() as u32;
}
count
}
fn count(&self, class: ShipClass) -> u32 {
match self.ships.get(&class) {
Some(ships) => ships.len() as u32,
None => 0u32
}
}
fn move_to(
&mut self, fleet: &mut Fleet, number: u32, class: ShipClass
) -> Result<(), &'static str> {
if number > self.count(class) {
return Err("There are no enough ships");
}
let ships = match self.ships.get_mut(&class) {
Some(s) => s,
None => return Ok(())
};
for _ in (0..number) {
fleet.add(ships.pop().unwrap());
}
Ok(())
}
}
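// Added check (for exposition, not in the original source): count() is
// per-class while size() sums over every class.
#[test]
fn fleet_tracks_sizes_per_class() {
    let mut fleet = Fleet::new();
    fleet.add(Ship::new(ShipClass::Scout));
    fleet.add(Ship::new(ShipClass::Fighter));
    assert_eq!(fleet.count(ShipClass::Scout), 1);
    assert_eq!(fleet.count(ShipClass::Colony), 0);
    assert_eq!(fleet.size(), 2);
}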
#[derive(Eq, PartialEq, Hash, Copy)]
struct PlayerId(pub u32);
struct Player {
id: PlayerId,
resources: Resources
}
impl PartialEq for Player {
fn eq(&self, other : &Player) -> bool {
self.id == other.id
}
}
impl Player {
fn new(id: u32) -> Player {
Player {
id: PlayerId(id),
resources: Resources::new()
}
}
fn gather_resources(&mut self, stars: &Starmap) -> () {
let id = self.id;
let owned_systems = stars.systems.values().filter(|s| s.borrow().owner == Some(id));
let owned_buildings = owned_systems.filter_map(|s| s.borrow().building.clone());
let owned_production = owned_buildings.map(|b| b.produce());
self.resources = owned_production.fold(self.resources.clone(), |r, p| r + p );
}
fn create_players(num : u32) -> Vec<Player> {
(0..num).map(|i| Player::new(i)).collect()
}
}
#[derive(Eq, PartialEq, Hash, Copy, PartialOrd, Ord)]
struct SolarSystemId(pub u32);
struct SolarSystem {
id: SolarSystemId,
building: Option<Building>,
owner: Option<PlayerId>,
fleet: Option<Fleet>,
location: (i32, i32)
}
impl <H: Hasher + Writer> Hash<H> for SolarSystem {
fn hash(&self, state: &mut H) {
self.id.hash(state);
}
}
impl SolarSystem {
fn new(id: SolarSystemId) -> SolarSystem {
SolarSystem { id: id, building: None, owner: None, fleet: None, location: (0, 0) }
}
fn set_homeworld(&mut self, player: PlayerId) {
self.owner = Some(player);
self.build(BuildingClass::GoldMine);
}
fn build(&mut self, class: BuildingClass) {
let building = Building::new(class);
self.building = Some(building);
}
fn clear(&mut self) {
self.building = None;
self.owner = None;
self.fleet = None;
}
fn display(&self, drawer: &mut RenderDrawer) {
drawer.set_draw_color(Color::RGB(0, 0, 255));
let (x,y) = self.location;
let display_x = x.to_i32().unwrap()*80;
let display_y = y.to_i32().unwrap()*80;
drawer.draw_rect(&Rect::new(display_x, display_y, 50, 50));
}
fn display_location(&self) -> (i32, i32) {
let (x,y) = self.location;
(x*80, y*80)
}
fn center(&self) -> (i32, i32) {
let (x,y) = self.display_location();
(x+25, y+25)
}
}
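// Added check (for exposition, not in the original source): claiming a
// homeworld records the owner and seeds the system with a starting building.
#[test]
fn homeworld_is_seeded_with_a_building() {
    let mut system = SolarSystem::new(SolarSystemId(42));
    system.set_homeworld(PlayerId(1));
    assert!(system.owner == Some(PlayerId(1)));
    assert!(system.building.is_some());
}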
type SolarSystemCell = Rc<RefCell<SolarSystem>>;
pub struct Starmap {
systems: HashMap<SolarSystemId, SolarSystemCell>,
neighbours: HashSet<SystemsConnection>
}
impl Starmap {
fn new() -> Starmap {
Starmap { systems: HashMap::new(), neighbours: HashSet::new() }
}
fn generate_universe() -> Starmap {
// 0 - 1 - 2
// | / |
// 3 4 5
// | / |
// 6 - 7 - 8
let neighbours = [
(0,1), (1,2), (2,5),
(5,8), (7,8), (6,7),
(3,6), (0,3), (4,6),
(2,4)
];
let mut starmap = Starmap::new();
for n in 0..9 {
let system = Rc::new(RefCell::new(SolarSystem::new(SolarSystemId(n))));
system.borrow_mut().location = ((n % 3).to_i32().unwrap(), (n / 3).to_i32().unwrap());
starmap.systems.insert(SolarSystemId(n), system);
}
for neighbour in neighbours.iter() {
let system_a = starmap.systems[SolarSystemId(neighbour.0)].clone();
let system_b = starmap.systems[SolarSystemId(neighbour.1)].clone();
starmap.neighbours.insert(SystemsConnection::new(system_a, system_b));
}
starmap
}
fn set_homeworlds(&mut self, players: &[PlayerId]) -> Result<(), &'static str> {
        if players.len() != 2 {
return Err("Only two players are possible now!");
}
self.systems.get_mut(&SolarSystemId(0)).unwrap().borrow_mut().set_homeworld(players[0]);
self.systems.get_mut(&SolarSystemId(8)).unwrap().borrow_mut().set_homeworld(players[1]);
Ok(())
}
fn display(&self, drawer: &mut RenderDrawer) {
for system in self.systems.values() {
system.borrow().display(drawer);
}
for connection in self.neighbours.iter() {
connection.display(drawer);
}
}
}
struct SystemsConnection {
first: SolarSystemCell,
second: SolarSystemCell
}
impl <H: Hasher + Writer> Hash<H> for SystemsConnection {
fn hash(&self, state: &mut H) {
self.first.borrow().hash(state);
self.second.borrow().hash(state);
}
}
impl PartialEq for SystemsConnection {
fn eq(&self, other : &SystemsConnection) -> bool {
self.first.borrow().id == other.first.borrow().id &&
self.second.borrow().id == other.second.borrow().id
}
}
impl Eq for SystemsConnection {}
impl SystemsConnection {
fn new(system_a: SolarSystemCell, system_b: SolarSystemCell) -> SystemsConnection {
if system_a.borrow().id < system_b.borrow().id {
SystemsConnection{first: system_a, second: system_b}
} else {
SystemsConnection{first: system_b, second: system_a}
}
}
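// Note (added): ordering the endpoints by id makes the connection
// canonical, so (a, b) and (b, a) hash and compare as the same edge and the
// HashSet in Starmap stores each link only once.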
fn display(&self, drawer: &mut RenderDrawer) |
}
fn test_gathering_resources() {
let mut player = Player{ id: PlayerId(1), resources: Resources::new() };
let mut universe = Starmap::generate_universe();
player.gather_resources(&universe);
assert!(player.resources == Resources::new());
assert!(universe.set_homeworlds(&[PlayerId(1), PlayerId(2)]).is_ok());
player.gather_resources(&universe);
assert!(player.resources == Resources { gold: 8, food: 0, technology: 0 });
}
fn test_fleet_movement() {
// Fleets
let mut fleet1 = Fleet::new();
let mut fleet2 = Fleet::new();
fleet1.add(Ship::new(ShipClass::Fighter));
fleet1.add(Ship::new(ShipClass::Fighter));
fleet1.add(Ship::new(ShipClass::Scout));
fleet2.add(Ship::new(ShipClass::Fighter));
fleet2.add(Ship::new(ShipClass::Fighter));
fleet2.add(Ship::new(ShipClass::Colony));
fleet1.merge(Box::new(fleet2));
let mut fleet3 = Fleet::new();
assert!(fleet1.move_to(&mut fleet3, 3, ShipClass::Fighter).is_ok());
}
fn print_some_buildings_really_important_piece_of_code() {
// Buildings
let farm = Building::new(BuildingClass::Farm);
let lab = Building::new(BuildingClass::Laboratory);
let mine = Building::new(BuildingClass::GoldMine);
println!("{:?}", farm);
println!("{:?}", lab);
println!("{:?}", mine);
}
fn main() {
print_some_buildings_really_important_piece_of_code();
test_fleet_movement();
test_gathering_resources();
graphics::example(Box::new(SpaceEmpire::new()));
}
| {
let (x1, y1) = self.first.borrow().center();
let (x2, y2) = self.second.borrow().center();
drawer.draw_line(
Point{x: x1, y: y1},
Point{x: x2, y: y2});
} | identifier_body |
main.rs | #![feature(core)]
extern crate sdl2;
use std::rc::Rc;
use std::hash::{Hash, Hasher, Writer};
use std::cell::RefCell;
use std::collections::HashMap;
use std::collections::HashSet;
use std::ops::Add;
use std::num::ToPrimitive;
use self::sdl2::render::RenderDrawer;
use self::sdl2::rect::{Rect, Point};
use self::sdl2::pixels::Color;
mod graphics;
mod game;
use game::SpaceEmpire;
#[derive(Debug, Clone, PartialEq, Eq)]
struct Resources {
food: i32,
technology: i32,
gold: i32
}
impl Add for Resources {
type Output = Resources;
fn add(self, other:Resources) -> Resources {
Resources {
food: self.food + other.food,
technology: self.technology + other.technology,
gold: self.gold + other.gold
}
}
}
impl Resources {
fn new() -> Resources {
Resources{food: 0, technology: 0, gold: 0}
}
}
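// Illustration (added; not in the original): Resources adds field-wise, so
// production from several buildings can be folded into one total.
#[test]
fn resources_add_fieldwise() {
let a = Resources { food: 1, technology: 2, gold: 3 };
let b = Resources { food: 10, technology: 20, gold: 30 };
assert_eq!(a + b, Resources { food: 11, technology: 22, gold: 33 });
}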
#[derive(Debug, Clone)]
enum BuildingClass {
Farm,
Laboratory,
GoldMine
}
#[derive(Debug, Clone)]
struct Building {
class: BuildingClass,
production: Resources
}
impl Building {
fn new(class: BuildingClass) -> Building {
let production = match class {
BuildingClass::Farm => Resources { food: 5, technology: 0, gold: 0 },
BuildingClass::Laboratory => Resources { food: 0, technology: 2, gold: 0 },
BuildingClass::GoldMine => Resources { food: 0, technology: 0, gold: 8 }
};
Building { class: class, production: production }
}
fn produce(&self) -> Resources {
self.production.clone()
}
}
#[derive(Hash, Eq, PartialEq, Copy)]
enum ShipClass {
Colony,
Scout,
Fighter
}
struct Ship {
class: ShipClass,
health: u32,
speed: u32,
damage: u32
}
impl Ship {
fn new(class: ShipClass) -> Ship {
match class {
ShipClass::Colony => Ship { class: class, health: 100, speed: 10, damage: 10},
ShipClass::Scout => Ship { class: class, health: 50, speed: 30, damage: 5},
ShipClass::Fighter => Ship { class: class, health: 150, speed: 10, damage: 100}
}
}
}
enum FleetLocation {
Moving, // from -> to, turns/needed_turns
Somewhere // exact location
}
struct Fleet {
ships: HashMap<ShipClass, Vec<Ship>>,
location: FleetLocation,
}
impl Fleet{
fn new() -> Fleet {
Fleet { ships: HashMap::new(), location: FleetLocation::Somewhere }
} | }
self.ships.get_mut(&ship.class).unwrap().push(ship);
}
fn merge(&mut self, fleet: Box<Fleet>) {
for (ship_class, ships) in fleet.ships.into_iter() {
for ship in ships.into_iter() {
self.add(ship);
}
}
}
fn size(&self) -> u32 {
let mut count = 0u32;
for ships in self.ships.values() {
count += ships.len() as u32;
}
count
}
fn count(&self, class: ShipClass) -> u32 {
match self.ships.get(&class) {
Some(ships) => ships.len() as u32,
None => 0u32
}
}
fn move_to(
&mut self, fleet: &mut Fleet, number: u32, class: ShipClass
) -> Result<(), &'static str> {
if number > self.count(class) {
return Err("There are no enough ships");
}
let ships = match self.ships.get_mut(&class) {
Some(s) => s,
None => return Ok(())
};
for _ in 0..number {
fleet.add(ships.pop().unwrap());
}
Ok(())
}
}
#[derive(Eq, PartialEq, Hash, Copy)]
struct PlayerId(pub u32);
struct Player {
id: PlayerId,
resources: Resources
}
impl PartialEq for Player {
fn eq(&self, other : &Player) -> bool {
self.id == other.id
}
}
impl Player {
fn new(id: u32) -> Player {
Player {
id: PlayerId(id),
resources: Resources::new()
}
}
fn gather_resources(&mut self, stars: &Starmap) -> () {
let id = self.id;
let owned_systems = stars.systems.values().filter(|s| s.borrow().owner == Some(id));
let owned_buildings = owned_systems.filter_map(|s| s.borrow().building.clone());
let owned_production = owned_buildings.map(|b| b.produce());
self.resources = owned_production.fold(self.resources.clone(), |r, p| r + p );
}
fn create_players(num : u32) -> Vec<Player> {
(0..num).map(|i| Player::new(i)).collect()
}
}
#[derive(Eq, PartialEq, Hash, Copy, PartialOrd, Ord)]
struct SolarSystemId(pub u32);
struct SolarSystem {
id: SolarSystemId,
building: Option<Building>,
owner: Option<PlayerId>,
fleet: Option<Fleet>,
location: (i32, i32)
}
impl <H: Hasher + Writer> Hash<H> for SolarSystem {
fn hash(&self, state: &mut H) {
self.id.hash(state);
}
}
impl SolarSystem {
fn new(id: SolarSystemId) -> SolarSystem {
SolarSystem { id: id, building: None, owner: None, fleet: None, location: (0, 0) }
}
fn set_homeworld(&mut self, player: PlayerId) {
self.owner = Some(player);
self.build(BuildingClass::GoldMine);
}
fn build(&mut self, class: BuildingClass) {
let building = Building::new(class);
self.building = Some(building);
}
fn clear(&mut self) {
self.building = None;
self.owner = None;
self.fleet = None;
}
fn display(&self, drawer: &mut RenderDrawer) {
drawer.set_draw_color(Color::RGB(0, 0, 255));
let (x,y) = self.location;
let display_x = x.to_i32().unwrap()*80;
let display_y = y.to_i32().unwrap()*80;
drawer.draw_rect(&Rect::new(display_x, display_y, 50, 50));
}
fn display_location(&self) -> (i32, i32) {
let (x,y) = self.location;
(x*80, y*80)
}
fn center(&self) -> (i32, i32) {
let (x,y) = self.display_location();
(x+25, y+25)
}
}
type SolarSystemCell = Rc<RefCell<SolarSystem>>;
pub struct Starmap {
systems: HashMap<SolarSystemId, SolarSystemCell>,
neighbours: HashSet<SystemsConnection>
}
impl Starmap {
fn new() -> Starmap {
Starmap { systems: HashMap::new(), neighbours: HashSet::new() }
}
fn generate_universe() -> Starmap {
// 0 - 1 - 2
// | / |
// 3 4 5
// | / |
// 6 - 7 - 8
let neighbours = [
(0,1), (1,2), (2,5),
(5,8), (7,8), (6,7),
(3,6), (0,3), (4,6),
(2,4)
];
let mut starmap = Starmap::new();
for n in 0..9 {
let system = Rc::new(RefCell::new(SolarSystem::new(SolarSystemId(n))));
system.borrow_mut().location = ((n % 3).to_i32().unwrap(), (n / 3).to_i32().unwrap());
starmap.systems.insert(SolarSystemId(n), system);
}
for neighbour in neighbours.iter() {
let system_a = starmap.systems[SolarSystemId(neighbour.0)].clone();
let system_b = starmap.systems[SolarSystemId(neighbour.1)].clone();
starmap.neighbours.insert(SystemsConnection::new(system_a, system_b));
}
starmap
}
fn set_homeworlds(&mut self, players: &[PlayerId]) -> Result<(), &'static str> {
if players.len() != 2 {
return Err("Only two players are possible now!");
}
self.systems.get_mut(&SolarSystemId(0)).unwrap().borrow_mut().set_homeworld(players[0]);
self.systems.get_mut(&SolarSystemId(8)).unwrap().borrow_mut().set_homeworld(players[1]);
Ok(())
}
fn display(&self, drawer: &mut RenderDrawer) {
for system in self.systems.values() {
system.borrow().display(drawer);
}
for connection in self.neighbours.iter() {
connection.display(drawer);
}
}
}
struct SystemsConnection {
first: SolarSystemCell,
second: SolarSystemCell
}
impl <H: Hasher + Writer> Hash<H> for SystemsConnection {
fn hash(&self, state: &mut H) {
self.first.borrow().hash(state);
self.second.borrow().hash(state);
}
}
impl PartialEq for SystemsConnection {
fn eq(&self, other : &SystemsConnection) -> bool {
self.first.borrow().id == other.first.borrow().id &&
self.second.borrow().id == other.second.borrow().id
}
}
impl Eq for SystemsConnection {}
impl SystemsConnection {
fn new(system_a: SolarSystemCell, system_b: SolarSystemCell) -> SystemsConnection {
if system_a.borrow().id < system_b.borrow().id {
SystemsConnection{first: system_a, second: system_b}
} else {
SystemsConnection{first: system_b, second: system_a}
}
}
fn display(&self, drawer: &mut RenderDrawer) {
let (x1, y1) = self.first.borrow().center();
let (x2, y2) = self.second.borrow().center();
drawer.draw_line(
Point{x: x1, y: y1},
Point{x: x2, y: y2});
}
}
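// Sketch (added; not in the original): thanks to the id ordering in
// SystemsConnection::new, an edge compares equal regardless of the order
// its endpoints are passed in.
#[test]
fn connection_is_direction_independent() {
let a = Rc::new(RefCell::new(SolarSystem::new(SolarSystemId(1))));
let b = Rc::new(RefCell::new(SolarSystem::new(SolarSystemId(2))));
let ab = SystemsConnection::new(a.clone(), b.clone());
let ba = SystemsConnection::new(b, a);
assert!(ab == ba);
}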
fn test_gathering_resources() {
let mut player = Player{ id: PlayerId(1), resources: Resources::new() };
let mut universe = Starmap::generate_universe();
player.gather_resources(&universe);
assert!(player.resources == Resources::new());
assert!(universe.set_homeworlds(&[PlayerId(1), PlayerId(2)]).is_ok());
player.gather_resources(&universe);
assert!(player.resources == Resources { gold: 8, food: 0, technology: 0 });
}
fn test_fleet_movement() {
// Fleets
let mut fleet1 = Fleet::new();
let mut fleet2 = Fleet::new();
fleet1.add(Ship::new(ShipClass::Fighter));
fleet1.add(Ship::new(ShipClass::Fighter));
fleet1.add(Ship::new(ShipClass::Scout));
fleet2.add(Ship::new(ShipClass::Fighter));
fleet2.add(Ship::new(ShipClass::Fighter));
fleet2.add(Ship::new(ShipClass::Colony));
fleet1.merge(Box::new(fleet2));
let mut fleet3 = Fleet::new();
assert!(fleet1.move_to(&mut fleet3, 3, ShipClass::Fighter).is_ok());
}
fn print_some_buildings_really_important_piece_of_code() {
// Buildings
let farm = Building::new(BuildingClass::Farm);
let lab = Building::new(BuildingClass::Laboratory);
let mine = Building::new(BuildingClass::GoldMine);
println!("{:?}", farm);
println!("{:?}", lab);
println!("{:?}", mine);
}
fn main() {
print_some_buildings_really_important_piece_of_code();
test_fleet_movement();
test_gathering_resources();
graphics::example(Box::new(SpaceEmpire::new()));
} |
fn add(&mut self, ship: Ship) {
match self.ships.get(&ship.class) {
None => { self.ships.insert(ship.class, Vec::new()); },
Some(_) => () | random_line_split |
main.rs | #![feature(core)]
extern crate sdl2;
use std::rc::Rc;
use std::hash::{Hash, Hasher, Writer};
use std::cell::RefCell;
use std::collections::HashMap;
use std::collections::HashSet;
use std::ops::Add;
use std::num::ToPrimitive;
use self::sdl2::render::RenderDrawer;
use self::sdl2::rect::{Rect, Point};
use self::sdl2::pixels::Color;
mod graphics;
mod game;
use game::SpaceEmpire;
#[derive(Debug, Clone, PartialEq, Eq)]
struct Resources {
food: i32,
technology: i32,
gold: i32
}
impl Add for Resources {
type Output = Resources;
fn | (self, other:Resources) -> Resources {
Resources {
food: self.food + other.food,
technology: self.technology + other.technology,
gold: self.gold + other.gold
}
}
}
impl Resources {
fn new() -> Resources {
Resources{food: 0, technology: 0, gold: 0}
}
}
#[derive(Debug, Clone)]
enum BuildingClass {
Farm,
Laboratory,
GoldMine
}
#[derive(Debug, Clone)]
struct Building {
class: BuildingClass,
production: Resources
}
impl Building {
fn new(class: BuildingClass) -> Building {
let production = match class {
BuildingClass::Farm => Resources { food: 5, technology: 0, gold: 0 },
BuildingClass::Laboratory => Resources { food: 0, technology: 2, gold: 0 },
BuildingClass::GoldMine => Resources { food: 0, technology: 0, gold: 8 }
};
Building { class: class, production: production }
}
fn produce(&self) -> Resources {
self.production.clone()
}
}
#[derive(Hash, Eq, PartialEq, Copy)]
enum ShipClass {
Colony,
Scout,
Fighter
}
struct Ship {
class: ShipClass,
health: u32,
speed: u32,
damage: u32
}
impl Ship {
fn new(class: ShipClass) -> Ship {
match class {
ShipClass::Colony => Ship { class: class, health: 100, speed: 10, damage: 10},
ShipClass::Scout => Ship { class: class, health: 50, speed: 30, damage: 5},
ShipClass::Fighter => Ship { class: class, health: 150, speed: 10, damage: 100}
}
}
}
enum FleetLocation {
Moving, // from -> to, turns/needed_turns
Somewhere // exact location
}
struct Fleet {
ships: HashMap<ShipClass, Vec<Ship>>,
location: FleetLocation,
}
impl Fleet{
fn new() -> Fleet {
Fleet { ships: HashMap::new(), location: FleetLocation::Somewhere }
}
fn add(&mut self, ship: Ship) {
match self.ships.get(&ship.class) {
None => { self.ships.insert(ship.class, Vec::new()); },
Some(_) => ()
}
self.ships.get_mut(&ship.class).unwrap().push(ship);
}
fn merge(&mut self, fleet: Box<Fleet>) {
for (ship_class, ships) in fleet.ships.into_iter() {
for ship in ships.into_iter() {
self.add(ship);
}
}
}
fn size(&self) -> u32 {
let mut count = 0u32;
for ships in self.ships.values() {
count += ships.len() as u32;
}
count
}
fn count(&self, class: ShipClass) -> u32 {
match self.ships.get(&class) {
Some(ships) => ships.len() as u32,
None => 0u32
}
}
fn move_to(
&mut self, fleet: &mut Fleet, number: u32, class: ShipClass
) -> Result<(), &'static str> {
if number > self.count(class) {
return Err("There are no enough ships");
}
let ships = match self.ships.get_mut(&class) {
Some(s) => s,
None => return Ok(())
};
for _ in 0..number {
fleet.add(ships.pop().unwrap());
}
Ok(())
}
}
#[derive(Eq, PartialEq, Hash, Copy)]
struct PlayerId(pub u32);
struct Player {
id: PlayerId,
resources: Resources
}
impl PartialEq for Player {
fn eq(&self, other : &Player) -> bool {
self.id == other.id
}
}
impl Player {
fn new(id: u32) -> Player {
Player {
id: PlayerId(id),
resources: Resources::new()
}
}
fn gather_resources(&mut self, stars: &Starmap) -> () {
let id = self.id;
let owned_systems = stars.systems.values().filter(|s| s.borrow().owner == Some(id));
let owned_buildings = owned_systems.filter_map(|s| s.borrow().building.clone());
let owned_production = owned_buildings.map(|b| b.produce());
self.resources = owned_production.fold(self.resources.clone(), |r, p| r + p );
}
fn create_players(num : u32) -> Vec<Player> {
(0..num).map(|i| Player::new(i)).collect()
}
}
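// Sketch (added; not in the original): create_players hands out sequential
// ids starting at 0.
#[test]
fn create_players_assigns_sequential_ids() {
let players = Player::create_players(3);
assert_eq!(players.len(), 3);
assert!(players[0].id == PlayerId(0));
assert!(players[2].id == PlayerId(2));
}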
#[derive(Eq, PartialEq, Hash, Copy, PartialOrd, Ord)]
struct SolarSystemId(pub u32);
struct SolarSystem {
id: SolarSystemId,
building: Option<Building>,
owner: Option<PlayerId>,
fleet: Option<Fleet>,
location: (i32, i32)
}
impl <H: Hasher + Writer> Hash<H> for SolarSystem {
fn hash(&self, state: &mut H) {
self.id.hash(state);
}
}
impl SolarSystem {
fn new(id: SolarSystemId) -> SolarSystem {
SolarSystem { id: id, building: None, owner: None, fleet: None, location: (0, 0) }
}
fn set_homeworld(&mut self, player: PlayerId) {
self.owner = Some(player);
self.build(BuildingClass::GoldMine);
}
fn build(&mut self, class: BuildingClass) {
let building = Building::new(class);
self.building = Some(building);
}
fn clear(&mut self) {
self.building = None;
self.owner = None;
self.fleet = None;
}
fn display(&self, drawer: &mut RenderDrawer) {
drawer.set_draw_color(Color::RGB(0, 0, 255));
let (x,y) = self.location;
let display_x = x.to_i32().unwrap()*80;
let display_y = y.to_i32().unwrap()*80;
drawer.draw_rect(&Rect::new(display_x, display_y, 50, 50));
}
fn display_location(&self) -> (i32, i32) {
let (x,y) = self.location;
(x*80, y*80)
}
fn center(&self) -> (i32, i32) {
let (x,y) = self.display_location();
(x+25, y+25)
}
}
type SolarSystemCell = Rc<RefCell<SolarSystem>>;
pub struct Starmap {
systems: HashMap<SolarSystemId, SolarSystemCell>,
neighbours: HashSet<SystemsConnection>
}
impl Starmap {
fn new() -> Starmap {
Starmap { systems: HashMap::new(), neighbours: HashSet::new() }
}
fn generate_universe() -> Starmap {
// 0 - 1 - 2
// | / |
// 3 4 5
// | / |
// 6 - 7 - 8
let neighbours = [
(0,1), (1,2), (2,5),
(5,8), (7,8), (6,7),
(3,6), (0,3), (4,6),
(2,4)
];
let mut starmap = Starmap::new();
for n in 0..9 {
let system = Rc::new(RefCell::new(SolarSystem::new(SolarSystemId(n))));
system.borrow_mut().location = ((n % 3).to_i32().unwrap(), (n / 3).to_i32().unwrap());
starmap.systems.insert(SolarSystemId(n), system);
}
for neighbour in neighbours.iter() {
let system_a = starmap.systems[SolarSystemId(neighbour.0)].clone();
let system_b = starmap.systems[SolarSystemId(neighbour.1)].clone();
starmap.neighbours.insert(SystemsConnection::new(system_a, system_b));
}
starmap
}
fn set_homeworlds(&mut self, players: &[PlayerId]) -> Result<(), &'static str> {
if players.len() != 2 {
return Err("Only two players are possible now!");
}
self.systems.get_mut(&SolarSystemId(0)).unwrap().borrow_mut().set_homeworld(players[0]);
self.systems.get_mut(&SolarSystemId(8)).unwrap().borrow_mut().set_homeworld(players[1]);
Ok(())
}
fn display(&self, drawer: &mut RenderDrawer) {
for system in self.systems.values() {
system.borrow().display(drawer);
}
for connection in self.neighbours.iter() {
connection.display(drawer);
}
}
}
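// Sketch (added; not in the original): generate_universe builds the 3x3
// grid sketched above; all ten listed links are distinct after
// canonicalization, so the connection set holds exactly ten edges.
#[test]
fn universe_has_nine_systems_and_ten_links() {
let universe = Starmap::generate_universe();
assert_eq!(universe.systems.len(), 9);
assert_eq!(universe.neighbours.len(), 10);
}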
struct SystemsConnection {
first: SolarSystemCell,
second: SolarSystemCell
}
impl <H: Hasher + Writer> Hash<H> for SystemsConnection {
fn hash(&self, state: &mut H) {
self.first.borrow().hash(state);
self.second.borrow().hash(state);
}
}
impl PartialEq for SystemsConnection {
fn eq(&self, other : &SystemsConnection) -> bool {
self.first.borrow().id == other.first.borrow().id &&
self.second.borrow().id == other.second.borrow().id
}
}
impl Eq for SystemsConnection {}
impl SystemsConnection {
fn new(system_a: SolarSystemCell, system_b: SolarSystemCell) -> SystemsConnection {
if system_a.borrow().id < system_b.borrow().id {
SystemsConnection{first: system_a, second: system_b}
} else {
SystemsConnection{first: system_b, second: system_a}
}
}
fn display(&self, drawer: &mut RenderDrawer) {
let (x1, y1) = self.first.borrow().center();
let (x2, y2) = self.second.borrow().center();
drawer.draw_line(
Point{x: x1, y: y1},
Point{x: x2, y: y2});
}
}
fn test_gathering_resources() {
let mut player = Player{ id: PlayerId(1), resources: Resources::new() };
let mut universe = Starmap::generate_universe();
player.gather_resources(&universe);
assert!(player.resources == Resources::new());
assert!(universe.set_homeworlds(&[PlayerId(1), PlayerId(2)]).is_ok());
player.gather_resources(&universe);
assert!(player.resources == Resources { gold: 8, food: 0, technology: 0 });
}
fn test_fleet_movement() {
// Fleets
let mut fleet1 = Fleet::new();
let mut fleet2 = Fleet::new();
fleet1.add(Ship::new(ShipClass::Fighter));
fleet1.add(Ship::new(ShipClass::Fighter));
fleet1.add(Ship::new(ShipClass::Scout));
fleet2.add(Ship::new(ShipClass::Fighter));
fleet2.add(Ship::new(ShipClass::Fighter));
fleet2.add(Ship::new(ShipClass::Colony));
fleet1.merge(Box::new(fleet2));
let mut fleet3 = Fleet::new();
assert!(fleet1.move_to(&mut fleet3, 3, ShipClass::Fighter).is_ok());
}
fn print_some_buildings_really_important_piece_of_code() {
// Buildings
let farm = Building::new(BuildingClass::Farm);
let lab = Building::new(BuildingClass::Laboratory);
let mine = Building::new(BuildingClass::GoldMine);
println!("{:?}", farm);
println!("{:?}", lab);
println!("{:?}", mine);
}
fn main() {
print_some_buildings_really_important_piece_of_code();
test_fleet_movement();
test_gathering_resources();
graphics::example(Box::new(SpaceEmpire::new()));
}
| add | identifier_name |
value.rs | use crate::css::CallArgs;
use crate::error::Error;
use crate::ordermap::OrderMap;
use crate::output::{Format, Formatted};
use crate::sass::Function;
use crate::value::{Color, ListSeparator, Number, Numeric, Operator, Quotes};
use std::convert::TryFrom;
/// A css value.
#[derive(Clone, Debug, Eq, PartialOrd)]
pub enum Value {
/// A special kind of escape. Only really used for !important.
Bang(String),
/// A function call that was not evaluated.
Call(String, CallArgs),
/// A (callable?) function.
Function(String, Option<Function>),
/// A string literal.
Literal(String, Quotes),
/// A comma- or space separated list of values, with or without brackets.
List(Vec<Value>, Option<ListSeparator>, bool),
/// A Numeric value is a rational value with a Unit (which may be
/// Unit::None) and flags.
///
/// The boolean flag is true for calculated values and false for
/// literal values.
Numeric(Numeric, bool),
/// A color value (and optionally, its source string).
Color(Color, Option<String>),
/// The null value.
Null,
/// The true boolean value.
True,
/// The false boolean value.
False,
/// A binary operation, two operands and an operator.
/// The booleans represents possible whitespace.
BinOp(Box<Value>, bool, Operator, bool, Box<Value>),
/// A unary operator and its operand.
UnaryOp(Operator, Box<Value>),
/// A map of values.
Map(ValueMap),
/// A unicode range for font selections. U+NN, U+N?, U+NN-MM.
/// The string is the entire value, including the "U+" tag.
UnicodeRange(String),
/// A value in parenthesis.
Paren(Box<Value>),
}
/// An OrderMap where both the keys and the values are css values.
pub type ValueMap = OrderMap<Value, Value>;
impl Value {
/// Create a numeric value with no unit.
pub fn scalar<T: Into<Number>>(v: T) -> Self {
Value::Numeric(Numeric::scalar(v), true)
}
/// Get the type name of this value.
pub fn type_name(&self) -> &'static str {
match *self {
Value::Color(..) => "color",
Value::Literal(..) => "string",
Value::Map(..) => "map",
Value::Numeric(..) => "number",
Value::List(..) => "list",
Value::Function(..) => "function",
Value::True | Value::False => "bool",
Value::Null => "null",
_ => "unknown",
}
}
/// Return true if this is a calculated value.
///
/// The return values of functions and operators are calculated; verbatim
/// values are not.
pub fn is_calculated(&self) -> bool {
match *self {
Value::Numeric(.., calculated) => calculated,
Value::Color(_, None) => true,
_ => false,
}
}
/// Get this value, but marked as calculated.
pub fn into_calculated(self) -> Self {
match self {
Value::Numeric(num, _) => Value::Numeric(num, true),
Value::List(v, sep, bracketed) => Value::List(
v.into_iter().map(|i| i.into_calculated()).collect(),
sep,
bracketed,
),
other => other,
}
}
/// All values other than `False` and `Null` should be considered true.
pub fn is_true(&self) -> bool {
!matches!(self, Value::False | Value::Null)
}
/// Return true if this value is null.
///
/// Note that an empty unquoted string and a list containing no
/// non-null values are also considered null.
pub fn is_null(&self) -> bool {
match *self {
Value::Null => true,
Value::List(ref list, _, false) => {
list.iter().all(|v| v.is_null())
}
Value::Literal(ref s, Quotes::None) if s.is_empty() => true,
Value::Paren(ref v) => v.is_null(),
_ => false,
}
}
/// Check if this value is numeric.
///
/// If it is, get the number and unit, otherwise, get the value
/// itself as error.
pub fn numeric_value(self) -> Result<Numeric, Self> {
match self {
Value::Numeric(num, ..) => Ok(num),
v => Err(v),
}
}
/// Check that this value is an integer.
#[deprecated]
pub fn integer_value(&self) -> Result<i64, Error> {
match self {
&Value::Numeric(ref num,..) => num
.value
.clone()
.into_integer()
.map_err(|_| Error::bad_value("an integer", self)),
v => Err(Error::bad_value("a number", v)),
}
}
/// Unquote this value.
///
/// If the value is a quoted string, the content is unquoted.
pub fn unquote(self) -> Value {
match self {
Value::Literal(s, Quotes::None) => {
Value::Literal(s, Quotes::None)
}
Value::Literal(s, _) => {
let mut result = String::new();
let mut iter = s.chars().peekable();
while let Some(c) = iter.next() {
if c == '\\' {
let mut val: u32 = 0;
let mut got_num = false;
let nextchar = loop {
match iter.peek() {
Some(&c) if c.is_ascii_hexdigit() => {
// CSS escapes are hexadecimal, so accumulate digits in base 16.
val = val * 16 + u32::from(hexvalue(c));
got_num = true;
iter.next();
}
Some(' ') if got_num => {
iter.next();
break None;
}
Some(_) if !got_num => break iter.next(),
_ => break None,
}
};
if got_num {
if let Ok(c) = char::try_from(val) {
result.push(c);
} else {
result.push('\u{fffd}');
}
}
match nextchar {
Some('\n') => {
result.push('\\');
result.push('a');
}
Some(c) => {
result.push(c);
}
None => (),
}
} else {
result.push(c)
}
}
Value::Literal(result, Quotes::None)
}
Value::List(list, s, b) => Value::List(
list.into_iter().map(|v| v.unquote()).collect(),
s,
b,
),
Value::Paren(v) => *v,
v => v,
}
}
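// Note (added): the loop above folds hexadecimal escapes such as "\41"
// back into characters, substitutes U+FFFD for out-of-range code points,
// and re-escapes a literal newline as "\a", mirroring CSS escape handling.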
/// Get this value as iterable items.
///
/// Lists and maps have iterable items, which are returned as a
/// vector of values. Other values are returned as a vec
/// containing the value as a single item.
pub fn iter_items(self) -> Vec<Value> {
match self {
Value::List(v, _, _) => v,
Value::Map(map) => map
.iter()
.map(|&(ref k, ref v)| {
Value::List(
vec![k.clone(), v.clone()],
Some(ListSeparator::Space),
false,
)
})
.collect(),
Value::Paren(v) => v.iter_items(),
v => vec![v],
}
}
/// Get a reference to this `Value` bound to an output format.
///
/// The bound reference implements `Display`, so it can be written
/// with the Rust `format!(...)` macros or converted with the
/// `to_string()` method.
///
/// # Example
///
/// ```
/// # use rsass::css::Value;
/// assert_eq!(
/// Value::scalar(42).format(Default::default()).to_string(),
/// "42",
/// );
/// ```
pub fn format(&self, format: Format) -> Formatted<Value> {
Formatted {
value: self,
format,
}
}
}
fn hexvalue(c: char) -> u8 {
if ('0'..='9').contains(&c) {
c as u8 - b'0'
} else if ('a'..='f').contains(&c) {
c as u8 - b'a' + 10
} else if ('A'..='F').contains(&c) {
c as u8 - b'A' + 10
} else {
0
}
}
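// Sketch (added; not in the original): hexvalue maps ASCII hex digits to
// their numeric value and everything else to 0.
#[test]
fn hexvalue_maps_hex_digits() {
assert_eq!(hexvalue('0'), 0);
assert_eq!(hexvalue('a'), 10);
assert_eq!(hexvalue('F'), 15);
assert_eq!(hexvalue('g'), 0);
}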
/// Some Values are equal according to spec even with some
/// implementation differences.
impl PartialEq for Value {
fn eq(&self, other: &Value) -> bool {
match (&self, other) {
(Value::Bang(a), Value::Bang(b)) => a == b,
(Value::Numeric(a, _), Value::Numeric(b, _)) => a == b,
(Value::Literal(a, aq), Value::Literal(b, bq)) => {
if aq == bq {
a == b
} else {
let a = if aq.is_none() {
a.replace('\\', "\\\\")
} else {
a.clone()
};
let b = if bq.is_none() {
b.replace('\\', "\\\\")
} else {
b.clone()
};
a == b
}
}
(Value::Null, Value::Null) => true,
(Value::True, Value::True) => true,
(Value::False, Value::False) => true,
(Value::Color(a, _), Value::Color(b, _)) => a == b,
(Value::Call(af, aa), Value::Call(bf, ba)) => {
af == bf && aa == ba
}
(Value::Function(a, abody), Value::Function(b, bbody)) => {
a == b && abody == bbody
}
(Value::List(av, asep, ab), Value::List(bv, bsep, bb)) => {
av == bv && asep == bsep && ab == bb
}
(Value::Map(a), Value::Map(b)) => a == b, | (Value::UnaryOp(a, av), Value::UnaryOp(b, bv)) => {
a == b && av == bv
}
(
Value::BinOp(aa, _, ao, _, ab),
Value::BinOp(ba, _, bo, _, bb),
) => ao == bo && aa == ba && ab == bb,
(Value::UnicodeRange(a), Value::UnicodeRange(b)) => a == b,
(Value::Paren(a), Value::Paren(b)) => a == b,
(Value::List(a,..), Value::Map(b)) => {
a.is_empty() && b.len() == 0
}
(Value::Map(a), Value::List(b,..)) => {
a.len() == 0 && b.is_empty()
}
_ => false,
}
}
}
impl From<bool> for Value {
fn from(v: bool) -> Value {
match v {
true => Value::True,
false => Value::False,
}
}
}
impl From<&str> for Value {
fn from(s: &str) -> Value {
Value::Literal(s.into(), Quotes::None)
}
}
impl From<String> for Value {
fn from(s: String) -> Value {
Value::Literal(s, Quotes::None)
}
}
impl From<Numeric> for Value {
fn from(v: Numeric) -> Value {
Value::Numeric(v, true)
}
}
impl<C: Into<Color>> From<C> for Value {
fn from(c: C) -> Value {
Value::Color(c.into(), None)
}
} | random_line_split |
|
value.rs | use crate::css::CallArgs;
use crate::error::Error;
use crate::ordermap::OrderMap;
use crate::output::{Format, Formatted};
use crate::sass::Function;
use crate::value::{Color, ListSeparator, Number, Numeric, Operator, Quotes};
use std::convert::TryFrom;
/// A css value.
#[derive(Clone, Debug, Eq, PartialOrd)]
pub enum Value {
/// A special kind of escape. Only really used for !important.
Bang(String),
/// A function call that was not evaluated.
Call(String, CallArgs),
/// A (callable?) function.
Function(String, Option<Function>),
/// A string literal.
Literal(String, Quotes),
/// A comma- or space separated list of values, with or without brackets.
List(Vec<Value>, Option<ListSeparator>, bool),
/// A Numeric value is a rational value with a Unit (which may be
/// Unit::None) and flags.
///
/// The boolean flag is true for calculated values and false for
/// literal values.
Numeric(Numeric, bool),
/// A color value (and optionally, its source string).
Color(Color, Option<String>),
/// The null value.
Null,
/// The true boolean value.
True,
/// The false boolean value.
False,
/// A binary operation, two operands and an operator.
/// The booleans represents possible whitespace.
BinOp(Box<Value>, bool, Operator, bool, Box<Value>),
/// A unary operator and its operand.
UnaryOp(Operator, Box<Value>),
/// A map of values.
Map(ValueMap),
/// A unicode range for font selections. U+NN, U+N?, U+NN-MM.
/// The string is the entire value, including the "U+" tag.
UnicodeRange(String),
/// A value in parenthesis.
Paren(Box<Value>),
}
/// An OrderMap where both the keys and the values are css values.
pub type ValueMap = OrderMap<Value, Value>;
impl Value {
/// Create a numeric value with no unit.
pub fn scalar<T: Into<Number>>(v: T) -> Self {
Value::Numeric(Numeric::scalar(v), true)
}
/// Get the type name of this value.
pub fn type_name(&self) -> &'static str {
match *self {
Value::Color(..) => "color",
Value::Literal(..) => "string",
Value::Map(..) => "map",
Value::Numeric(..) => "number",
Value::List(..) => "list",
Value::Function(..) => "function",
Value::True | Value::False => "bool",
Value::Null => "null",
_ => "unknown",
}
}
/// Return true if this is a calculated value.
///
/// The return values of functions and operators are calculated; verbatim
/// values are not.
pub fn is_calculated(&self) -> bool {
match *self {
Value::Numeric(.., calculated) => calculated,
Value::Color(_, None) => true,
_ => false,
}
}
/// Get this value, but marked as calculated.
pub fn into_calculated(self) -> Self {
match self {
Value::Numeric(num, _) => Value::Numeric(num, true),
Value::List(v, sep, bracketed) => Value::List(
v.into_iter().map(|i| i.into_calculated()).collect(),
sep,
bracketed,
),
other => other,
}
}
/// All values other than `False` and `Null` should be considered true.
pub fn is_true(&self) -> bool {
!matches!(self, Value::False | Value::Null)
}
/// Return true if this value is null.
///
/// Note that an empty unquoted string and a list containing no
/// non-null values are also considered null.
pub fn is_null(&self) -> bool {
match *self {
Value::Null => true,
Value::List(ref list, _, false) => {
list.iter().all(|v| v.is_null())
}
Value::Literal(ref s, Quotes::None) if s.is_empty() => true,
Value::Paren(ref v) => v.is_null(),
_ => false,
}
}
/// Check if this value is numeric.
///
/// If it is, get the number and unit, otherwise, get the value
/// itself as error.
pub fn numeric_value(self) -> Result<Numeric, Self> {
match self {
Value::Numeric(num, ..) => Ok(num),
v => Err(v),
}
}
/// Check that this value is an integer.
#[deprecated]
pub fn integer_value(&self) -> Result<i64, Error> {
match self {
&Value::Numeric(ref num,..) => num
.value
.clone()
.into_integer()
.map_err(|_| Error::bad_value("an integer", self)),
v => Err(Error::bad_value("a number", v)),
}
}
/// Unquote this value.
///
/// If the value is a quoted string, the content is unquoted.
pub fn unquote(self) -> Value {
match self {
Value::Literal(s, Quotes::None) => {
Value::Literal(s, Quotes::None)
}
Value::Literal(s, _) => {
let mut result = String::new();
let mut iter = s.chars().peekable();
while let Some(c) = iter.next() {
if c == '\\' {
let mut val: u32 = 0;
let mut got_num = false;
let nextchar = loop {
match iter.peek() {
Some(&c) if c.is_ascii_hexdigit() => {
// CSS escapes are hexadecimal, so accumulate digits in base 16.
val = val * 16 + u32::from(hexvalue(c));
got_num = true;
iter.next();
}
Some(' ') if got_num => {
iter.next();
break None;
}
Some(_) if !got_num => break iter.next(),
_ => break None,
}
};
if got_num {
if let Ok(c) = char::try_from(val) {
result.push(c);
} else {
result.push('\u{fffd}');
}
}
match nextchar {
Some('\n') => {
result.push('\\');
result.push('a');
}
Some(c) => {
result.push(c);
}
None => (),
}
} else {
result.push(c)
}
}
Value::Literal(result, Quotes::None)
}
Value::List(list, s, b) => Value::List(
list.into_iter().map(|v| v.unquote()).collect(),
s,
b,
),
Value::Paren(v) => *v,
v => v,
}
}
/// Get this value as iterable items.
///
/// Lists and maps have iterable items, which are returned as a
/// vector of values. Other values are returned as a vec
/// containing the value as a single item.
pub fn iter_items(self) -> Vec<Value> {
match self {
Value::List(v, _, _) => v,
Value::Map(map) => map
.iter()
.map(|&(ref k, ref v)| {
Value::List(
vec![k.clone(), v.clone()],
Some(ListSeparator::Space),
false,
)
})
.collect(),
Value::Paren(v) => v.iter_items(),
v => vec![v],
}
}
/// Get a reference to this `Value` bound to an output format.
///
/// The bound reference implements `Display`, so it can be written
/// with the Rust `format!(...)` macros or converted with the
/// `to_string()` method.
///
/// # Example
///
/// ```
/// # use rsass::css::Value;
/// assert_eq!(
/// Value::scalar(42).format(Default::default()).to_string(),
/// "42",
/// );
/// ```
pub fn format(&self, format: Format) -> Formatted<Value> {
Formatted {
value: self,
format,
}
}
}
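// Sketch (added; not in the original): only False and Null are falsy, and
// an empty unquoted string counts as null.
#[test]
fn truthiness_and_nullness() {
assert!(Value::True.is_true());
assert!(Value::scalar(0).is_true()); // zero is still truthy in Sass
assert!(!Value::False.is_true());
assert!(Value::Literal(String::new(), Quotes::None).is_null());
}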
fn hexvalue(c: char) -> u8 {
if ('0'..='9').contains(&c) {
c as u8 - b'0'
} else if ('a'..='f').contains(&c) {
c as u8 - b'a' + 10
} else if ('A'..='F').contains(&c) {
c as u8 - b'A' + 10
} else {
0
}
}
/// Some Values are equal according to spec even with some
/// implementation differences.
impl PartialEq for Value {
fn | (&self, other: &Value) -> bool {
match (&self, other) {
(Value::Bang(a), Value::Bang(b)) => a == b,
(Value::Numeric(a, _), Value::Numeric(b, _)) => a == b,
(Value::Literal(a, aq), Value::Literal(b, bq)) => {
if aq == bq {
a == b
} else {
let a = if aq.is_none() {
a.replace('\\', "\\\\")
} else {
a.clone()
};
let b = if bq.is_none() {
b.replace('\\', "\\\\")
} else {
b.clone()
};
a == b
}
}
(Value::Null, Value::Null) => true,
(Value::True, Value::True) => true,
(Value::False, Value::False) => true,
(Value::Color(a, _), Value::Color(b, _)) => a == b,
(Value::Call(af, aa), Value::Call(bf, ba)) => {
af == bf && aa == ba
}
(Value::Function(a, abody), Value::Function(b, bbody)) => {
a == b && abody == bbody
}
(Value::List(av, asep, ab), Value::List(bv, bsep, bb)) => {
av == bv && asep == bsep && ab == bb
}
(Value::Map(a), Value::Map(b)) => a == b,
(Value::UnaryOp(a, av), Value::UnaryOp(b, bv)) => {
a == b && av == bv
}
(
Value::BinOp(aa, _, ao, _, ab),
Value::BinOp(ba, _, bo, _, bb),
) => ao == bo && aa == ba && ab == bb,
(Value::UnicodeRange(a), Value::UnicodeRange(b)) => a == b,
(Value::Paren(a), Value::Paren(b)) => a == b,
(Value::List(a,..), Value::Map(b)) => {
a.is_empty() && b.len() == 0
}
(Value::Map(a), Value::List(b,..)) => {
a.len() == 0 && b.is_empty()
}
_ => false,
}
}
}
impl From<bool> for Value {
fn from(v: bool) -> Value {
match v {
true => Value::True,
false => Value::False,
}
}
}
impl From<&str> for Value {
fn from(s: &str) -> Value {
Value::Literal(s.into(), Quotes::None)
}
}
impl From<String> for Value {
fn from(s: String) -> Value {
Value::Literal(s, Quotes::None)
}
}
impl From<Numeric> for Value {
fn from(v: Numeric) -> Value {
Value::Numeric(v, true)
}
}
impl<C: Into<Color>> From<C> for Value {
fn from(c: C) -> Value {
Value::Color(c.into(), None)
}
}
| eq | identifier_name |
main.rs | use lazy_static;
use permutator::Combination;
use serde::Deserialize;
use std::collections::{BTreeSet, HashMap, VecDeque};
use std::error::Error;
fn mushes() -> Result<Vec<Mush>, Box<dyn Error>> {
let f = std::fs::File::open("assets/agaricus-lepiota.data")?;
let mut rdr = csv::ReaderBuilder::new().has_headers(false).from_reader(f);
let mut result = Vec::new();
for r in rdr.deserialize() {
result.push(r?);
}
Ok(result)
}
enum SplitType {
Gini,
Twoing,
}
const SPLIT_TYPE: SplitType = SplitType::Twoing;
fn main() {
let mush = mushes().expect("error getting mushy");
println!("got {} rows", mush.len());
// Nodes to check. Vec of (rows, node_idx)
let mut half_nodes: VecDeque<(_, usize)> = VecDeque::new();
half_nodes.push_back((mush, 1));
let mut next_page = 1;
while let Some((mush, page)) = half_nodes.pop_front() {
let mut min_impurity = None;
let mut max_twoing = None;
for facet in 0..NUM_FACETS {
let facet_vals = &facet_vals(&mush, facet);
let questions = (1..facet_vals.len())
.flat_map(move |k| facet_vals.combination(k))
.map(move |combis| Question {
facet,
vals: combis.into_iter().cloned().collect(),
});
for question in questions {
let answer = question.answer(&mush);
match SPLIT_TYPE {
SplitType::Gini => {
let ans_imp = answer.impurity;
if let Some((min_i, _, _)) = min_impurity {
if ans_imp < min_i {
min_impurity = Some((ans_imp, question, answer));
}
} else {
min_impurity = Some((ans_imp, question, answer));
}
}
SplitType::Twoing => {
let p_no = answer.no.rows.len() as f64 / mush.len() as f64;
let p_yes = answer.yes.rows.len() as f64 / mush.len() as f64;
let sum_poison = (answer.yes.poison_cnt as f64
/ answer.yes.rows.len() as f64)
- (answer.no.poison_cnt as f64 / answer.no.rows.len() as f64);
let sum_edible = (1.0
- answer.yes.poison_cnt as f64 / answer.yes.rows.len() as f64)
- (1.0 - answer.no.poison_cnt as f64 / answer.no.rows.len() as f64);
let sum = sum_edible.abs() + sum_poison.abs();
let twoing = p_no * p_yes * 0.25 * (sum * sum);
if let Some((max_two, _, _)) = max_twoing {
if max_two < twoing {
max_twoing = Some((twoing, question, answer));
}
} else {
max_twoing = Some((twoing, question, answer));
}
}
}
}
}
let (quest, ans) = match SPLIT_TYPE {
SplitType::Gini => {
let (_, quest, ans) = min_impurity.expect("huh? no nodes or something?");
(quest, ans)
}
SplitType::Twoing => {
let (_, quest, ans) = max_twoing.expect("Huh? no nodes?");
(quest, ans)
}
};
println!("page {}: {}", page, quest);
for (txt, node) in &[("yes", &ans.yes), ("no", &ans.no)] {
if node.impurity == 0.0 {
println!("\tif {}, done. {}", txt, node);
} else {
next_page += 1;
println!("\tif {}, {}, goto page {}", txt, node, next_page);
half_nodes.push_back((node.rows.clone(), next_page));
}
}
}
}
fn facet_vals(mushs: &[Mush], facet: usize) -> Vec<char> {
mushs
.iter()
.map(|m| m.attrs[facet])
.collect::<BTreeSet<_>>()
.into_iter()
.collect()
}
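// Note (added): routing the attribute values through a BTreeSet both
// de-duplicates and sorts them, so the candidate questions generated in
// main() are deterministic from run to run.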
#[derive(Clone, Debug)]
struct Question {
facet: usize,
vals: BTreeSet<char>,
}
impl Question {
/// Applies the question to the group, separating it into two.
fn answer(&self, input: &[Mush]) -> Answer {
let (yes, no) = input
.iter()
.partition(|mush| self.vals.contains(&mush.attrs[self.facet]));
Answer::new(yes, no)
}
}
impl std::fmt::Display for Question {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
let (facet_name, facet_map) = &FACETS[self.facet];
let choices = facet_map
.iter()
.filter_map(|(k, v)| if self.vals.contains(k) { Some(v) } else { None })
.cloned()
.collect::<Vec<_>>()
.join(", ");
write!(
f,
"Examine '{}'. Is it {}{}?",
facet_name,
if self.vals.len() > 1 { "one of " } else { "" },
choices
)
}
}
#[test]
fn test_question_fmt() {
use std::iter::FromIterator;
let q = Question {
facet: 0,
vals: BTreeSet::from_iter(['b', 'c', 'x'].iter().cloned()),
};
format!("{}", q);
}
lazy_static::lazy_static! {
static ref FACETS: Vec<(&'static str,HashMap<char,&'static str>)> = {
let facet_data = [
("cap-shape" ,"bell=b,conical=c,convex=x,flat=f,knobbed=k,sunken=s"),
("cap-surface" ,"fibrous=f,grooves=g,scaly=y,smooth=s"),
("cap-color" ,"brown=n,buff=b,cinnamon=c,gray=g,green=r,pink=p,purple=u,red=e,white=w,yellow=y"),
("bruises?" ,"bruises=t,no=f"),
("odor" ,"almond=a,anise=l,creosote=c,fishy=y,foul=f,musty=m,none=n,pungent=p,spicy=s"),
("gill-attachment" ,"attached=a,descending=d,free=f,notched=n"),
("gill-spacing" ,"close=c,crowded=w,distant=d"),
("gill-size" ,"broad=b,narrow=n"),
("gill-color" ,"black=k,brown=n,buff=b,chocolate=h,gray=g,green=r,orange=o,pink=p,purple=u,red=e,white=w,yellow=y"),
("stalk-shape" ,"enlarging=e,tapering=t"),
("stalk-root" ,"bulbous=b,club=c,cup=u,equal=e,rhizomorphs=z,rooted=r,missing=?"),
("stalk-surface-above-ring","fibrous=f,scaly=y,silky=k,smooth=s"),
("stalk-surface-below-ring","fibrous=f,scaly=y,silky=k,smooth=s"),
("stalk-color-above-ring" ,"brown=n,buff=b,cinnamon=c,gray=g,orange=o,pink=p,red=e,white=w,yellow=y"),
("stalk-color-below-ring" ,"brown=n,buff=b,cinnamon=c,gray=g,orange=o,pink=p,red=e,white=w,yellow=y"),
("veil-type" ,"partial=p,universal=u"),
("veil-color" ,"brown=n,orange=o,white=w,yellow=y"),
("ring-number" ,"none=n,one=o,two=t"),
("ring-type" ,"cobwebby=c,evanescent=e,flaring=f,large=l,none=n,pendant=p,sheathing=s,zone=z"),
("spore-print-color" ,"black=k,brown=n,buff=b,chocolate=h,green=r,orange=o,purple=u,white=w,yellow=y"),
("population" ,"abundant=a,clustered=c,numerous=n,scattered=s,several=v,solitary=y"),
("habitat" ,"grasses=g,leaves=l,meadows=m,paths=p,urban=u,waste=w,woods=d"),
];
let mut result = Vec::new();
for (facet, cats) in &facet_data {
let mut facet_map = HashMap::new();
for cat in cats.split(',') {
let mut i = cat.splitn(2,'=');
if let (Some(name),Some(c)) = (i.next(), i.next()) {
facet_map.insert(c.chars().next().unwrap(), name);
} else {
panic!("Can't parse: {}", cat);
}
}
result.push((*facet,facet_map));
}
result
};
}
#[test]
fn | () {
let q = Question {
facet: 0,
vals: ['a', 'b', 'c'].iter().cloned().collect(),
};
let mushs = [
Mush {
poison: 'p',
attrs: ['a'; 22],
},
Mush {
poison: 'p',
attrs: ['b'; 22],
},
Mush {
poison: 'p',
attrs: ['c'; 22],
},
Mush {
poison: 'p',
attrs: ['d'; 22],
},
Mush {
poison: 'p',
attrs: ['e'; 22],
},
];
let a = q.answer(&mushs);
assert_eq!(a.yes.rows.len(), 3);
assert_eq!(a.no.rows.len(), 2);
}
#[derive(Debug)]
struct Answer {
yes: Node,
no: Node,
impurity: f64,
parent_idx: Option<usize>,
}
impl Answer {
fn new(yes: Vec<Mush>, no: Vec<Mush>) -> Answer {
let yes_node = Node::new(yes);
let no_node = Node::new(no);
let answer_impurity = yes_node.impurity + no_node.impurity;
Answer {
yes: yes_node,
no: no_node,
impurity: answer_impurity,
parent_idx: None,
}
}
}
impl std::fmt::Display for Answer {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(
f,
"yes: {}, no {}, imp: {}",
self.yes, self.no, self.impurity
)
}
}
#[test]
fn test_impurity() {
let poisons: Vec<_> = (0..10)
.map(|_| Mush {
poison: 'p',
attrs: ['a'; 22],
})
.collect();
let edibles: Vec<_> = (0..10)
.map(|_| Mush {
poison: 'e',
attrs: ['a'; 22],
})
.collect();
let mixed: Vec<_> = (0..10)
.map(|i| Mush {
poison: if i % 2 == 0 { 'e' } else { 'p' },
attrs: ['a'; 22],
})
.collect();
assert!(
Answer::new(poisons.clone(), edibles.clone()).impurity
< Answer::new(poisons.clone(), mixed.clone()).impurity
);
assert!(
(Answer::new(poisons.clone(), mixed.clone()).impurity
- Answer::new(mixed.clone(), poisons.clone()).impurity)
.abs()
< std::f64::EPSILON
);
assert!(
Answer::new(edibles.clone(), poisons.clone()).impurity
< Answer::new(poisons.clone(), mixed.clone()).impurity
);
}
#[derive(Debug)]
struct Node {
rows: Vec<Mush>,
poison_cnt: usize,
impurity: f64,
}
impl Node {
fn new(mushs: Vec<Mush>) -> Node {
let gini = |poison_count, total_count| {
let p = poison_count as f64;
let l = total_count as f64;
2.0 * (p / l) * ((l - p) / l)
};
let poison_cnt = mushs.iter().filter(|m| m.poisonous()).count();
let impurity = gini(poison_cnt, mushs.len());
Node {
rows: mushs,
poison_cnt,
impurity,
}
}
}
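// Sketch (added; not in the original): for two classes the Gini impurity
// above reduces to 2p(1-p) -- zero for a pure node, 0.5 at a 50/50 mix.
#[test]
fn gini_is_zero_for_pure_and_half_for_even_nodes() {
let pure: Vec<Mush> = (0..4).map(|_| Mush { poison: 'e', attrs: ['a'; 22] }).collect();
assert!(Node::new(pure).impurity == 0.0);
let mixed: Vec<Mush> = "eepp".chars().map(|p| Mush { poison: p, attrs: ['a'; 22] }).collect();
assert!((Node::new(mixed).impurity - 0.5).abs() < 1e-12);
}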
impl std::fmt::Display for Node {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(
f,
"{}/{} poisonous (imp {})",
self.poison_cnt,
self.rows.len(),
self.impurity
)
}
}
const NUM_FACETS: usize = 22;
#[derive(Deserialize, Debug, Clone, Copy)]
struct Mush {
poison: char,
attrs: [char; NUM_FACETS],
}
impl Mush {
fn poisonous(&self) -> bool {
self.poison == 'p'
}
}
| test_answer | identifier_name |
main.rs | use lazy_static;
use permutator::Combination;
use serde::Deserialize;
use std::collections::{BTreeSet, HashMap, VecDeque};
use std::error::Error;
fn mushes() -> Result<Vec<Mush>, Box<dyn Error>> {
let f = std::fs::File::open("assets/agaricus-lepiota.data")?;
let mut rdr = csv::ReaderBuilder::new().has_headers(false).from_reader(f);
let mut result = Vec::new();
for r in rdr.deserialize() {
result.push(r?);
}
Ok(result)
}
enum SplitType {
Gini,
Twoing,
}
const SPLIT_TYPE: SplitType = SplitType::Twoing;
fn main() {
let mush = mushes().expect("error getting mushy");
println!("got {} rows", mush.len());
// Nodes to check. Vec of (rows, node_idx)
let mut half_nodes: VecDeque<(_, usize)> = VecDeque::new();
half_nodes.push_back((mush, 1));
let mut next_page = 1;
while let Some((mush, page)) = half_nodes.pop_front() {
let mut min_impurity = None;
let mut max_twoing = None;
for facet in 0..NUM_FACETS {
let facet_vals = &facet_vals(&mush, facet);
let questions = (1..facet_vals.len())
.flat_map(move |k| facet_vals.combination(k))
.map(move |combis| Question {
facet,
vals: combis.into_iter().cloned().collect(),
});
for question in questions {
let answer = question.answer(&mush);
match SPLIT_TYPE {
SplitType::Gini => {
let ans_imp = answer.impurity;
if let Some((min_i, _, _)) = min_impurity {
if ans_imp < min_i {
min_impurity = Some((ans_imp, question, answer));
}
} else {
min_impurity = Some((ans_imp, question, answer));
}
}
SplitType::Twoing => {
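// Note (added): this computes the twoing criterion,
// p_left * p_right / 4 * (sum of per-class proportion differences)^2;
// larger values mean the split separates the two classes more cleanly.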
let p_no = answer.no.rows.len() as f64 / mush.len() as f64;
let p_yes = answer.yes.rows.len() as f64 / mush.len() as f64;
let sum_poison = (answer.yes.poison_cnt as f64
/ answer.yes.rows.len() as f64)
- (answer.no.poison_cnt as f64 / answer.no.rows.len() as f64);
let sum_edible = (1.0
- answer.yes.poison_cnt as f64 / answer.yes.rows.len() as f64)
- (1.0 - answer.no.poison_cnt as f64 / answer.no.rows.len() as f64);
let sum = sum_edible.abs() + sum_poison.abs();
let twoing = p_no * p_yes * 0.25 * (sum * sum);
if let Some((max_two, _, _)) = max_twoing {
if max_two < twoing {
max_twoing = Some((twoing, question, answer));
}
} else {
max_twoing = Some((twoing, question, answer));
}
}
}
}
}
let (quest, ans) = match SPLIT_TYPE {
SplitType::Gini => {
let (_, quest, ans) = min_impurity.expect("huh? no nodes or something?");
(quest, ans)
}
SplitType::Twoing => {
let (_, quest, ans) = max_twoing.expect("Huh? no nodes?");
(quest, ans)
}
};
println!("page {}: {}", page, quest);
for (txt, node) in &[("yes", &ans.yes), ("no", &ans.no)] {
if node.impurity == 0.0 {
println!("\tif {}, done. {}", txt, node);
} else {
next_page += 1;
println!("\tif {}, {}, goto page {}", txt, node, next_page);
half_nodes.push_back((node.rows.clone(), next_page));
}
}
}
}
fn facet_vals(mushs: &[Mush], facet: usize) -> Vec<char> {
mushs
.iter()
.map(|m| m.attrs[facet])
.collect::<BTreeSet<_>>()
.into_iter()
.collect()
}
#[derive(Clone, Debug)]
struct Question {
facet: usize,
vals: BTreeSet<char>,
}
impl Question {
/// Applies the question to the group, separating it into two.
fn answer(&self, input: &[Mush]) -> Answer {
let (yes, no) = input
.iter()
.partition(|mush| self.vals.contains(&mush.attrs[self.facet]));
Answer::new(yes, no)
}
}
impl std::fmt::Display for Question {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
let (facet_name, facet_map) = &FACETS[self.facet];
let choices = facet_map
.iter()
.filter_map(|(k, v)| if self.vals.contains(k) { Some(v) } else { None })
.cloned()
.collect::<Vec<_>>()
.join(", ");
write!(
f,
"Examine '{}'. Is it {}{}?",
facet_name,
if self.vals.len() > 1 { "one of " } else { "" },
choices
)
}
}
#[test]
fn test_question_fmt() {
use std::iter::FromIterator;
let q = Question {
facet: 0,
vals: BTreeSet::from_iter(['b', 'c', 'x'].iter().cloned()),
};
format!("{}", q);
}
lazy_static::lazy_static! {
static ref FACETS: Vec<(&'static str,HashMap<char,&'static str>)> = {
let facet_data = [
("cap-shape" ,"bell=b,conical=c,convex=x,flat=f,knobbed=k,sunken=s"),
("cap-surface" ,"fibrous=f,grooves=g,scaly=y,smooth=s"), | ("gill-size" ,"broad=b,narrow=n"),
("gill-color" ,"black=k,brown=n,buff=b,chocolate=h,gray=g,green=r,orange=o,pink=p,purple=u,red=e,white=w,yellow=y"),
("stalk-shape" ,"enlarging=e,tapering=t"),
("stalk-root" ,"bulbous=b,club=c,cup=u,equal=e,rhizomorphs=z,rooted=r,missing=?"),
("stalk-surface-above-ring","fibrous=f,scaly=y,silky=k,smooth=s"),
("stalk-surface-below-ring","fibrous=f,scaly=y,silky=k,smooth=s"),
("stalk-color-above-ring" ,"brown=n,buff=b,cinnamon=c,gray=g,orange=o,pink=p,red=e,white=w,yellow=y"),
("stalk-color-below-ring" ,"brown=n,buff=b,cinnamon=c,gray=g,orange=o,pink=p,red=e,white=w,yellow=y"),
("veil-type" ,"partial=p,universal=u"),
("veil-color" ,"brown=n,orange=o,white=w,yellow=y"),
("ring-number" ,"none=n,one=o,two=t"),
("ring-type" ,"cobwebby=c,evanescent=e,flaring=f,large=l,none=n,pendant=p,sheathing=s,zone=z"),
("spore-print-color" ,"black=k,brown=n,buff=b,chocolate=h,green=r,orange=o,purple=u,white=w,yellow=y"),
("population" ,"abundant=a,clustered=c,numerous=n,scattered=s,several=v,solitary=y"),
("habitat" ,"grasses=g,leaves=l,meadows=m,paths=p,urban=u,waste=w,woods=d"),
];
let mut result = Vec::new();
for (facet, cats) in &facet_data {
let mut facet_map = HashMap::new();
for cat in cats.split(',') {
let mut i = cat.splitn(2,'=');
if let (Some(name),Some(c)) = (i.next(), i.next()) {
facet_map.insert(c.chars().next().unwrap(), name);
} else {
panic!("Can't parse: {}", cat);
}
}
result.push((*facet,facet_map));
}
result
};
}
#[test]
fn test_answer() {
let q = Question {
facet: 0,
vals: ['a', 'b', 'c'].iter().cloned().collect(),
};
let mushs = [
Mush {
poison: 'p',
attrs: ['a'; 22],
},
Mush {
poison: 'p',
attrs: ['b'; 22],
},
Mush {
poison: 'p',
attrs: ['c'; 22],
},
Mush {
poison: 'p',
attrs: ['d'; 22],
},
Mush {
poison: 'p',
attrs: ['e'; 22],
},
];
let a = q.answer(&mushs);
assert_eq!(a.yes.rows.len(), 3);
assert_eq!(a.no.rows.len(), 2);
}
#[derive(Debug)]
struct Answer {
yes: Node,
no: Node,
impurity: f64,
parent_idx: Option<usize>,
}
impl Answer {
fn new(yes: Vec<Mush>, no: Vec<Mush>) -> Answer {
let yes_node = Node::new(yes);
let no_node = Node::new(no);
let answer_impurity = yes_node.impurity + no_node.impurity;
Answer {
yes: yes_node,
no: no_node,
impurity: answer_impurity,
parent_idx: None,
}
}
}
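// Note (added): the answer's score is the plain sum of the two child
// impurities; classic CART weights each child by its share of the rows,
// but the unweighted sum is what this program minimizes.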
impl std::fmt::Display for Answer {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(
f,
"yes: {}, no {}, imp: {}",
self.yes, self.no, self.impurity
)
}
}
#[test]
fn test_impurity() {
let poisons: Vec<_> = (0..10)
.map(|_| Mush {
poison: 'p',
attrs: ['a'; 22],
})
.collect();
let edibles: Vec<_> = (0..10)
.map(|_| Mush {
poison: 'e',
attrs: ['a'; 22],
})
.collect();
let mixed: Vec<_> = (0..10)
.map(|i| Mush {
poison: if i % 2 == 0 { 'e' } else { 'p' },
attrs: ['a'; 22],
})
.collect();
assert!(
Answer::new(poisons.clone(), edibles.clone()).impurity
< Answer::new(poisons.clone(), mixed.clone()).impurity
);
assert!(
(Answer::new(poisons.clone(), mixed.clone()).impurity
- Answer::new(mixed.clone(), poisons.clone()).impurity)
.abs()
< std::f64::EPSILON
);
assert!(
Answer::new(edibles.clone(), poisons.clone()).impurity
< Answer::new(poisons.clone(), mixed.clone()).impurity
);
}
#[derive(Debug)]
struct Node {
rows: Vec<Mush>,
poison_cnt: usize,
impurity: f64,
}
impl Node {
fn new(mushs: Vec<Mush>) -> Node {
let gini = |poison_count, total_count| {
let p = poison_count as f64;
let l = total_count as f64;
2.0 * (p / l) * ((l - p) / l)
};
let poison_cnt = mushs.iter().filter(|m| m.poisonous()).count();
let impurity = gini(poison_cnt, mushs.len());
Node {
rows: mushs,
poison_cnt,
impurity,
}
}
}
impl std::fmt::Display for Node {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(
f,
"{}/{} poisonous (imp {})",
self.poison_cnt,
self.rows.len(),
self.impurity
)
}
}
const NUM_FACETS: usize = 22;
#[derive(Deserialize, Debug, Clone, Copy)]
struct Mush {
poison: char,
attrs: [char; NUM_FACETS],
}
impl Mush {
fn poisonous(&self) -> bool {
self.poison == 'p'
}
} | ("cap-color" ,"brown=n,buff=b,cinnamon=c,gray=g,green=r,pink=p,purple=u,red=e,white=w,yellow=y"),
("bruises?" ,"bruises=t,no=f"),
("odor" ,"almond=a,anise=l,creosote=c,fishy=y,foul=f,musty=m,none=n,pungent=p,spicy=s"),
("gill-attachment" ,"attached=a,descending=d,free=f,notched=n"),
("gill-spacing" ,"close=c,crowded=w,distant=d"), | random_line_split |
main.rs | use lazy_static;
use permutator::Combination;
use serde::Deserialize;
use std::collections::{BTreeSet, HashMap, VecDeque};
use std::error::Error;
fn mushes() -> Result<Vec<Mush>, Box<dyn Error>> {
let f = std::fs::File::open("assets/agaricus-lepiota.data")?;
let mut rdr = csv::ReaderBuilder::new().has_headers(false).from_reader(f);
let mut result = Vec::new();
for r in rdr.deserialize() {
result.push(r?);
}
Ok(result)
}
enum SplitType {
Gini,
Twoing,
}
const SPLIT_TYPE: SplitType = SplitType::Twoing;
fn main() {
let mush = mushes().expect("error getting mushy");
println!("got {} rows", mush.len());
// Nodes to check. Vec of (rows, node_idx)
let mut half_nodes: VecDeque<(_, usize)> = VecDeque::new();
half_nodes.push_back((mush, 1));
let mut next_page = 1;
while let Some((mush, page)) = half_nodes.pop_front() {
let mut min_impurity = None;
let mut max_twoing = None;
for facet in 0..NUM_FACETS {
let facet_vals = &facet_vals(&mush, facet);
let questions = (1..facet_vals.len())
.flat_map(move |k| facet_vals.combination(k))
.map(move |combis| Question {
facet,
vals: combis.into_iter().cloned().collect(),
});
for question in questions {
let answer = question.answer(&mush);
match SPLIT_TYPE {
SplitType::Gini => {
let ans_imp = answer.impurity;
if let Some((min_i, _, _)) = min_impurity {
if ans_imp < min_i {
min_impurity = Some((ans_imp, question, answer));
}
} else {
min_impurity = Some((ans_imp, question, answer));
}
}
SplitType::Twoing => {
let p_no = answer.no.rows.len() as f64 / mush.len() as f64;
let p_yes = answer.yes.rows.len() as f64 / mush.len() as f64;
let sum_poison = (answer.yes.poison_cnt as f64
/ answer.yes.rows.len() as f64)
- (answer.no.poison_cnt as f64 / answer.no.rows.len() as f64);
let sum_edible = (1.0
- answer.yes.poison_cnt as f64 / answer.yes.rows.len() as f64)
- (1.0 - answer.no.poison_cnt as f64 / answer.no.rows.len() as f64);
let sum = sum_edible.abs() + sum_poison.abs();
let twoing = p_no * p_yes * 0.25 * (sum * sum);
if let Some((max_two, _, _)) = max_twoing {
if max_two < twoing {
max_twoing = Some((twoing, question, answer));
}
} else {
max_twoing = Some((twoing, question, answer));
}
}
}
}
}
let (quest, ans) = match SPLIT_TYPE {
SplitType::Gini => {
let (_, quest, ans) = min_impurity.expect("huh? no nodes or something?");
(quest, ans)
}
SplitType::Twoing => {
let (_, quest, ans) = max_twoing.expect("Huh? no nodes?");
(quest, ans)
}
};
println!("page {}: {}", page, quest);
for (txt, node) in &[("yes", &ans.yes), ("no", &ans.no)] {
if node.impurity == 0.0 {
println!("\tif {}, done. {}", txt, node);
} else {
next_page += 1;
println!("\tif {}, {}, goto page {}", txt, node, next_page);
half_nodes.push_back((node.rows.clone(), next_page));
}
}
}
}
fn facet_vals(mushs: &[Mush], facet: usize) -> Vec<char> {
mushs
.iter()
.map(|m| m.attrs[facet])
.collect::<BTreeSet<_>>()
.into_iter()
.collect()
}
#[derive(Clone, Debug)]
struct Question {
facet: usize,
vals: BTreeSet<char>,
}
impl Question {
/// Applies the question to the group, separating it into two.
fn answer(&self, input: &[Mush]) -> Answer {
let (yes, no) = input
.iter()
.partition(|mush| self.vals.contains(&mush.attrs[self.facet]));
Answer::new(yes, no)
}
}
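// Sketch (added; not in the original): a question listing every observed
// value of a facet sends the whole sample to the "yes" side. (Node::new
// then divides by zero for the empty side, so that side's impurity is NaN.)
#[test]
fn full_value_set_sends_everything_to_yes() {
let q = Question { facet: 0, vals: ['a'].iter().cloned().collect() };
let mushs = [Mush { poison: 'e', attrs: ['a'; 22] }];
let a = q.answer(&mushs);
assert_eq!(a.yes.rows.len(), 1);
assert_eq!(a.no.rows.len(), 0);
}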
impl std::fmt::Display for Question {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
let (facet_name, facet_map) = &FACETS[self.facet];
let choices = facet_map
.iter()
.filter_map(|(k, v)| if self.vals.contains(k) | else { None })
.cloned()
.collect::<Vec<_>>()
.join(", ");
write!(
f,
"Examine '{}'. Is it {}{}?",
facet_name,
if self.vals.len() > 1 { "one of " } else { "" },
choices
)
}
}
#[test]
fn test_question_fmt() {
use std::iter::FromIterator;
let q = Question {
facet: 0,
vals: BTreeSet::from_iter(['b', 'c', 'x'].iter().cloned()),
};
format!("{}", q);
}
lazy_static::lazy_static! {
static ref FACETS: Vec<(&'static str,HashMap<char,&'static str>)> = {
let facet_data = [
("cap-shape" ,"bell=b,conical=c,convex=x,flat=f,knobbed=k,sunken=s"),
("cap-surface" ,"fibrous=f,grooves=g,scaly=y,smooth=s"),
("cap-color" ,"brown=n,buff=b,cinnamon=c,gray=g,green=r,pink=p,purple=u,red=e,white=w,yellow=y"),
("bruises?" ,"bruises=t,no=f"),
("odor" ,"almond=a,anise=l,creosote=c,fishy=y,foul=f,musty=m,none=n,pungent=p,spicy=s"),
("gill-attachment" ,"attached=a,descending=d,free=f,notched=n"),
("gill-spacing" ,"close=c,crowded=w,distant=d"),
("gill-size" ,"broad=b,narrow=n"),
("gill-color" ,"black=k,brown=n,buff=b,chocolate=h,gray=g,green=r,orange=o,pink=p,purple=u,red=e,white=w,yellow=y"),
("stalk-shape" ,"enlarging=e,tapering=t"),
("stalk-root" ,"bulbous=b,club=c,cup=u,equal=e,rhizomorphs=z,rooted=r,missing=?"),
("stalk-surface-above-ring","fibrous=f,scaly=y,silky=k,smooth=s"),
("stalk-surface-below-ring","fibrous=f,scaly=y,silky=k,smooth=s"),
("stalk-color-above-ring" ,"brown=n,buff=b,cinnamon=c,gray=g,orange=o,pink=p,red=e,white=w,yellow=y"),
("stalk-color-below-ring" ,"brown=n,buff=b,cinnamon=c,gray=g,orange=o,pink=p,red=e,white=w,yellow=y"),
("veil-type" ,"partial=p,universal=u"),
("veil-color" ,"brown=n,orange=o,white=w,yellow=y"),
("ring-number" ,"none=n,one=o,two=t"),
("ring-type" ,"cobwebby=c,evanescent=e,flaring=f,large=l,none=n,pendant=p,sheathing=s,zone=z"),
("spore-print-color" ,"black=k,brown=n,buff=b,chocolate=h,green=r,orange=o,purple=u,white=w,yellow=y"),
("population" ,"abundant=a,clustered=c,numerous=n,scattered=s,several=v,solitary=y"),
("habitat" ,"grasses=g,leaves=l,meadows=m,paths=p,urban=u,waste=w,woods=d"),
];
let mut result = Vec::new();
for (facet, cats) in &facet_data {
let mut facet_map = HashMap::new();
for cat in cats.split(',') {
let mut i = cat.splitn(2,'=');
if let (Some(name),Some(c)) = (i.next(), i.next()) {
facet_map.insert(c.chars().next().unwrap(), name);
} else {
panic!("Can't parse: {}", cat);
}
}
result.push((*facet,facet_map));
}
result
};
}
#[test]
fn test_answer() {
let q = Question {
facet: 0,
vals: ['a', 'b', 'c'].iter().cloned().collect(),
};
let mushs = [
Mush {
poison: 'p',
attrs: ['a'; 22],
},
Mush {
poison: 'p',
attrs: ['b'; 22],
},
Mush {
poison: 'p',
attrs: ['c'; 22],
},
Mush {
poison: 'p',
attrs: ['d'; 22],
},
Mush {
poison: 'p',
attrs: ['e'; 22],
},
];
let a = q.answer(&mushs);
assert_eq!(a.yes.rows.len(), 3);
assert_eq!(a.no.rows.len(), 2);
}
#[derive(Debug)]
struct Answer {
yes: Node,
no: Node,
impurity: f64,
parent_idx: Option<usize>,
}
impl Answer {
fn new(yes: Vec<Mush>, no: Vec<Mush>) -> Answer {
let yes_node = Node::new(yes);
let no_node = Node::new(no);
let answer_impurity = yes_node.impurity + no_node.impurity;
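// Note: this is the unweighted sum of the child impurities. Classic CART
// weights each child's impurity by its share of the rows, so this
// variant can rank candidate splits differently.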
Answer {
yes: yes_node,
no: no_node,
impurity: answer_impurity,
parent_idx: None,
}
}
}
impl std::fmt::Display for Answer {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(
f,
"yes: {}, no {}, imp: {}",
self.yes, self.no, self.impurity
)
}
}
#[test]
fn test_impurity() {
let poisons: Vec<_> = (0..10)
.map(|_| Mush {
poison: 'p',
attrs: ['a'; 22],
})
.collect();
let edibles: Vec<_> = (0..10)
.map(|_| Mush {
poison: 'e',
attrs: ['a'; 22],
})
.collect();
let mixed: Vec<_> = (0..10)
.map(|i| Mush {
poison: if i % 2 == 0 { 'e' } else { 'p' },
attrs: ['a'; 22],
})
.collect();
assert!(
Answer::new(poisons.clone(), edibles.clone()).impurity
< Answer::new(poisons.clone(), mixed.clone()).impurity
);
assert!(
(Answer::new(poisons.clone(), mixed.clone()).impurity
- Answer::new(mixed.clone(), poisons.clone()).impurity)
.abs()
< std::f64::EPSILON
);
assert!(
Answer::new(edibles.clone(), poisons.clone()).impurity
< Answer::new(poisons.clone(), mixed.clone()).impurity
);
}
#[derive(Debug)]
struct Node {
rows: Vec<Mush>,
poison_cnt: usize,
impurity: f64,
}
impl Node {
fn new(mushs: Vec<Mush>) -> Node {
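// Binary Gini impurity: 2 * p * (1 - p), where p is the fraction of
// poisonous rows. A pure node scores 0.0; a 50/50 mix scores the
// maximum of 0.5.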
let gini = |poison_count, total_count| {
let p = poison_count as f64;
let l = total_count as f64;
2.0 * (p / l) * ((l - p) / l)
};
let poison_cnt = mushs.iter().filter(|m| m.poisonous()).count();
let impurity = gini(poison_cnt, mushs.len());
Node {
rows: mushs,
poison_cnt,
impurity,
}
}
}
impl std::fmt::Display for Node {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(
f,
"{}/{} poisonous (imp {})",
self.poison_cnt,
self.rows.len(),
self.impurity
)
}
}
const NUM_FACETS: usize = 22;
#[derive(Deserialize, Debug, Clone, Copy)]
struct Mush {
poison: char,
attrs: [char; NUM_FACETS],
}
impl Mush {
fn poisonous(&self) -> bool {
self.poison == 'p'
}
}
| { Some(v) } | conditional_block |
main.rs | use lazy_static;
use permutator::Combination;
use serde::Deserialize;
use std::collections::{BTreeSet, HashMap, VecDeque};
use std::error::Error;
fn mushes() -> Result<Vec<Mush>, Box<dyn Error>> {
let f = std::fs::File::open("assets/agaricus-lepiota.data")?;
let mut rdr = csv::ReaderBuilder::new().has_headers(false).from_reader(f);
let mut result = Vec::new();
for r in rdr.deserialize() {
result.push(r?);
}
Ok(result)
}
enum SplitType {
Gini,
Twoing,
}
const SPLIT_TYPE: SplitType = SplitType::Twoing;
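// Gini keeps the question whose children have the smallest summed
// impurity; Twoing keeps the question that divides the two classes most
// evenly between the children (see the formula at the Twoing match arm).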
fn main() {
let mush = mushes().expect("error getting mushy");
println!("got {} rows", mush.len());
// Nodes still to split: a FIFO queue of (rows, page number) pairs.
let mut half_nodes: VecDeque<(_, usize)> = VecDeque::new();
half_nodes.push_back((mush, 1));
let mut next_page = 1;
while let Some((mush, page)) = half_nodes.pop_front() {
let mut min_impurity = None;
let mut max_twoing = None;
for facet in 0..NUM_FACETS {
let facet_vals = &facet_vals(&mush, facet);
let questions = (1..facet_vals.len())
.flat_map(move |k| facet_vals.combination(k))
.map(move |combis| Question {
facet,
vals: combis.into_iter().cloned().collect(),
});
for question in questions {
let answer = question.answer(&mush);
match SPLIT_TYPE {
SplitType::Gini => {
let ans_imp = answer.impurity;
if let Some((min_i, _, _)) = min_impurity {
if ans_imp < min_i {
min_impurity = Some((ans_imp, question, answer));
}
} else {
min_impurity = Some((ans_imp, question, answer));
}
}
SplitType::Twoing => {
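// Twoing criterion from CART (Breiman et al.): score a split as
//   p_yes * p_no / 4 * (sum over classes |p(class|yes) - p(class|no)|)^2
// where p_yes/p_no are the fractions of rows sent to each child and the
// two classes are poisonous and edible. Larger scores mean a cleaner
// two-way separation, so the maximum is kept.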
let p_no = answer.no.rows.len() as f64 / mush.len() as f64;
let p_yes = answer.yes.rows.len() as f64 / mush.len() as f64;
let sum_poison = (answer.yes.poison_cnt as f64
/ answer.yes.rows.len() as f64)
- (answer.no.poison_cnt as f64 / answer.no.rows.len() as f64);
let sum_edible = (1.0
- answer.yes.poison_cnt as f64 / answer.yes.rows.len() as f64)
- (1.0 - answer.no.poison_cnt as f64 / answer.no.rows.len() as f64);
let sum = sum_edible.abs() + sum_poison.abs();
let twoing = p_no * p_yes * 0.25 * (sum * sum);
if let Some((max_two, _, _)) = max_twoing {
if max_two < twoing {
max_twoing = Some((twoing, question, answer));
}
} else {
max_twoing = Some((twoing, question, answer));
}
}
}
}
}
let (quest, ans) = match SPLIT_TYPE {
SplitType::Gini => {
let (_, quest, ans) = min_impurity.expect("huh? no nodes or sumpin?");
(quest, ans)
}
SplitType::Twoing => {
let (_, quest, ans) = max_twoing.expect("Huh? no nodes?");
(quest, ans)
}
};
println!("page {}: {}", page, quest);
for (txt, node) in &[("yes", &ans.yes), ("no", &ans.no)] {
if node.impurity == 0.0 {
println!("\tif {}, done. {}", txt, node);
} else {
next_page += 1;
println!("\tif {}, {}, goto page {}", txt, node, next_page);
half_nodes.push_back((node.rows.clone(), next_page));
}
}
}
}
fn facet_vals(mushs: &[Mush], facet: usize) -> Vec<char> {
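// Collecting through a BTreeSet deduplicates and sorts the column's
// values, so the questions generated from them are deterministic.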
mushs
.iter()
.map(|m| m.attrs[facet])
.collect::<BTreeSet<_>>()
.into_iter()
.collect()
}
#[derive(Clone, Debug)]
struct Question {
facet: usize,
vals: BTreeSet<char>,
}
impl Question {
/// Applies the question to the group, separating it into two.
fn answer(&self, input: &[Mush]) -> Answer {
let (yes, no) = input
.iter()
.partition(|mush| self.vals.contains(&mush.attrs[self.facet]));
Answer::new(yes, no)
}
}
impl std::fmt::Display for Question {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result |
}
#[test]
fn test_question_fmt() {
use std::iter::FromIterator;
let q = Question {
facet: 0,
vals: BTreeSet::from_iter(['b', 'c', 'x'].iter().cloned()),
};
format!("{}", q);
}
lazy_static::lazy_static! {
static ref FACETS: Vec<(&'static str,HashMap<char,&'static str>)> = {
let facet_data = [
("cap-shape" ,"bell=b,conical=c,convex=x,flat=f,knobbed=k,sunken=s"),
("cap-surface" ,"fibrous=f,grooves=g,scaly=y,smooth=s"),
("cap-color" ,"brown=n,buff=b,cinnamon=c,gray=g,green=r,pink=p,purple=u,red=e,white=w,yellow=y"),
("bruises?" ,"bruises=t,no=f"),
("odor" ,"almond=a,anise=l,creosote=c,fishy=y,foul=f,musty=m,none=n,pungent=p,spicy=s"),
("gill-attachment" ,"attached=a,descending=d,free=f,notched=n"),
("gill-spacing" ,"close=c,crowded=w,distant=d"),
("gill-size" ,"broad=b,narrow=n"),
("gill-color" ,"black=k,brown=n,buff=b,chocolate=h,gray=g,green=r,orange=o,pink=p,purple=u,red=e,white=w,yellow=y"),
("stalk-shape" ,"enlarging=e,tapering=t"),
("stalk-root" ,"bulbous=b,club=c,cup=u,equal=e,rhizomorphs=z,rooted=r,missing=?"),
("stalk-surface-above-ring","fibrous=f,scaly=y,silky=k,smooth=s"),
("stalk-surface-below-ring","fibrous=f,scaly=y,silky=k,smooth=s"),
("stalk-color-above-ring" ,"brown=n,buff=b,cinnamon=c,gray=g,orange=o,pink=p,red=e,white=w,yellow=y"),
("stalk-color-below-ring" ,"brown=n,buff=b,cinnamon=c,gray=g,orange=o,pink=p,red=e,white=w,yellow=y"),
("veil-type" ,"partial=p,universal=u"),
("veil-color" ,"brown=n,orange=o,white=w,yellow=y"),
("ring-number" ,"none=n,one=o,two=t"),
("ring-type" ,"cobwebby=c,evanescent=e,flaring=f,large=l,none=n,pendant=p,sheathing=s,zone=z"),
("spore-print-color" ,"black=k,brown=n,buff=b,chocolate=h,green=r,orange=o,purple=u,white=w,yellow=y"),
("population" ,"abundant=a,clustered=c,numerous=n,scattered=s,several=v,solitary=y"),
("habitat" ,"grasses=g,leaves=l,meadows=m,paths=p,urban=u,waste=w,woods=d"),
];
let mut result = Vec::new();
for (facet, cats) in &facet_data {
let mut facet_map = HashMap::new();
for cat in cats.split(',') {
let mut i = cat.splitn(2,'=');
if let (Some(name),Some(c)) = (i.next(), i.next()) {
facet_map.insert(c.chars().next().unwrap(), name);
} else {
panic!("Can't parse: {}", cat);
}
}
result.push((*facet,facet_map));
}
result
};
}
#[test]
fn test_answer() {
let q = Question {
facet: 0,
vals: ['a', 'b', 'c'].iter().cloned().collect(),
};
let mushs = [
Mush {
poison: 'p',
attrs: ['a'; 22],
},
Mush {
poison: 'p',
attrs: ['b'; 22],
},
Mush {
poison: 'p',
attrs: ['c'; 22],
},
Mush {
poison: 'p',
attrs: ['d'; 22],
},
Mush {
poison: 'p',
attrs: ['e'; 22],
},
];
let a = q.answer(&mushs);
assert_eq!(a.yes.rows.len(), 3);
assert_eq!(a.no.rows.len(), 2);
}
#[derive(Debug)]
struct Answer {
yes: Node,
no: Node,
impurity: f64,
parent_idx: Option<usize>,
}
impl Answer {
fn new(yes: Vec<Mush>, no: Vec<Mush>) -> Answer {
let yes_node = Node::new(yes);
let no_node = Node::new(no);
let answer_impurity = yes_node.impurity + no_node.impurity;
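// Note: this is the unweighted sum of the child impurities. Classic CART
// weights each child's impurity by its share of the rows, so this
// variant can rank candidate splits differently.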
Answer {
yes: yes_node,
no: no_node,
impurity: answer_impurity,
parent_idx: None,
}
}
}
impl std::fmt::Display for Answer {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(
f,
"yes: {}, no {}, imp: {}",
self.yes, self.no, self.impurity
)
}
}
#[test]
fn test_impurity() {
let poisons: Vec<_> = (0..10)
.map(|_| Mush {
poison: 'p',
attrs: ['a'; 22],
})
.collect();
let edibles: Vec<_> = (0..10)
.map(|_| Mush {
poison: 'e',
attrs: ['a'; 22],
})
.collect();
let mixed: Vec<_> = (0..10)
.map(|i| Mush {
poison: if i % 2 == 0 { 'e' } else { 'p' },
attrs: ['a'; 22],
})
.collect();
assert!(
Answer::new(poisons.clone(), edibles.clone()).impurity
< Answer::new(poisons.clone(), mixed.clone()).impurity
);
assert!(
(Answer::new(poisons.clone(), mixed.clone()).impurity
- Answer::new(mixed.clone(), poisons.clone()).impurity)
.abs()
< std::f64::EPSILON
);
assert!(
Answer::new(edibles.clone(), poisons.clone()).impurity
< Answer::new(poisons.clone(), mixed.clone()).impurity
);
}
#[derive(Debug)]
struct Node {
rows: Vec<Mush>,
poison_cnt: usize,
impurity: f64,
}
impl Node {
fn new(mushs: Vec<Mush>) -> Node {
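// Binary Gini impurity: 2 * p * (1 - p), where p is the fraction of
// poisonous rows. A pure node scores 0.0; a 50/50 mix scores the
// maximum of 0.5.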
let gini = |poison_count, total_count| {
let p = poison_count as f64;
let l = total_count as f64;
2.0 * (p / l) * ((l - p) / l)
};
let poison_cnt = mushs.iter().filter(|m| m.poisonous()).count();
let impurity = gini(poison_cnt, mushs.len());
Node {
rows: mushs,
poison_cnt,
impurity,
}
}
}
impl std::fmt::Display for Node {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(
f,
"{}/{} poisonous (imp {})",
self.poison_cnt,
self.rows.len(),
self.impurity
)
}
}
const NUM_FACETS: usize = 22;
#[derive(Deserialize, Debug, Clone, Copy)]
struct Mush {
poison: char,
attrs: [char; NUM_FACETS],
}
impl Mush {
fn poisonous(&self) -> bool {
self.poison == 'p'
}
}
| {
let (facet_name, facet_map) = &FACETS[self.facet];
let choices = facet_map
.iter()
.filter_map(|(k, v)| if self.vals.contains(k) { Some(v) } else { None })
.cloned()
.collect::<Vec<_>>()
.join(", ");
write!(
f,
"Examine '{}'. Is it {}{}?",
facet_name,
if self.vals.len() > 1 { "one of " } else { "" },
choices
)
} | identifier_body |
mod.rs | //
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations under
// the License.
pub mod node;
pub mod node_validator;
pub mod partition;
pub mod partition_tokenizer;
use std::collections::HashMap;
use std::sync::atomic::{AtomicBool, AtomicIsize, Ordering};
use std::sync::mpsc;
use std::sync::mpsc::{Receiver, Sender, TryRecvError};
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};
use std::vec::Vec;
use parking_lot::{Mutex, RwLock};
pub use self::node::Node;
use self::node_validator::NodeValidator;
use self::partition::Partition;
use self::partition_tokenizer::PartitionTokenizer;
use crate::errors::{ErrorKind, Result};
use crate::net::Host;
use crate::policy::ClientPolicy;
// Cluster encapsulates the aerospike cluster nodes and manages
// them.
#[derive(Debug)]
pub struct Cluster {
// Initial host nodes specified by user.
seeds: Arc<RwLock<Vec<Host>>>,
// All aliases for all nodes in cluster.
aliases: Arc<RwLock<HashMap<Host, Arc<Node>>>>,
// Active nodes in cluster.
nodes: Arc<RwLock<Vec<Arc<Node>>>>,
// Hints for best node for a partition
partition_write_map: Arc<RwLock<HashMap<String, Vec<Arc<Node>>>>>,
// Random node index.
node_index: AtomicIsize,
client_policy: ClientPolicy,
tend_channel: Mutex<Sender<()>>,
closed: AtomicBool,
}
impl Cluster {
pub fn new(policy: ClientPolicy, hosts: &[Host]) -> Result<Arc<Self>> {
let (tx, rx): (Sender<()>, Receiver<()>) = mpsc::channel();
let cluster = Arc::new(Cluster {
client_policy: policy,
seeds: Arc::new(RwLock::new(hosts.to_vec())),
aliases: Arc::new(RwLock::new(HashMap::new())),
nodes: Arc::new(RwLock::new(vec![])),
partition_write_map: Arc::new(RwLock::new(HashMap::new())),
node_index: AtomicIsize::new(0),
tend_channel: Mutex::new(tx),
closed: AtomicBool::new(false),
});
// try to seed connections for first use
Cluster::wait_till_stabilized(cluster.clone())?;
// apply policy rules
if cluster.client_policy.fail_if_not_connected && !cluster.is_connected() {
bail!(ErrorKind::Connection(
"Failed to connect to host(s). The network \
connection(s) to cluster nodes may have timed out, or \
the cluster may be in a state of flux."
.to_string()
));
}
let cluster_for_tend = cluster.clone();
thread::spawn(move || Cluster::tend_thread(cluster_for_tend, rx));
debug!("New cluster initialized and ready to be used...");
Ok(cluster)
}
fn tend_thread(cluster: Arc<Cluster>, rx: Receiver<()>) {
let tend_interval = cluster.client_policy.tend_interval;
loop {
// try to read from the receive channel to see if it hung up
match rx.try_recv() {
Ok(_) => unreachable!(),
// signaled to end
Err(TryRecvError::Disconnected) => break,
Err(TryRecvError::Empty) => {
if let Err(err) = cluster.tend() {
log_error_chain!(err, "Error tending cluster");
}
thread::sleep(tend_interval);
}
}
}
// close all nodes
let nodes = cluster.nodes();
for mut node in nodes {
if let Some(node) = Arc::get_mut(&mut node) {
node.close();
}
}
cluster.set_nodes(vec![]);
}
fn tend(&self) -> Result<()> {
let mut nodes = self.nodes();
// All node additions/deletions are performed in tend thread.
// If active nodes don't exist, seed cluster.
if nodes.is_empty() {
debug!("No connections available; seeding...");
self.seed_nodes();
nodes = self.nodes();
}
let mut friend_list: Vec<Host> = vec![];
let mut refresh_count = 0;
// Refresh all known nodes.
for node in nodes {
let old_gen = node.partition_generation();
if node.is_active() {
match node.refresh(self.aliases()) {
Ok(friends) => {
refresh_count += 1;
if !friends.is_empty() {
friend_list.extend_from_slice(&friends);
}
if old_gen != node.partition_generation() {
self.update_partitions(node.clone())?;
}
}
Err(err) => {
node.increase_failures();
warn!("Node `{}` refresh failed: {}", node, err);
}
}
}
}
// Add nodes in a batch.
let add_list = self.find_new_nodes_to_add(friend_list);
self.add_nodes_and_aliases(&add_list);
// IMPORTANT: Remove must come after add to remove aliases
// Handle nodes changes determined from refreshes.
// Remove nodes in a batch.
let remove_list = self.find_nodes_to_remove(refresh_count);
self.remove_nodes_and_aliases(remove_list);
Ok(())
}
fn wait_till_stabilized(cluster: Arc<Cluster>) -> Result<()> {
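// Tend repeatedly until the discovered node count stops changing between
// consecutive passes, or until the client policy timeout (3s if unset)
// expires, whichever comes first.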
let timeout = cluster
.client_policy()
.timeout
.unwrap_or_else(|| Duration::from_secs(3));
let deadline = Instant::now() + timeout;
let sleep_between_tend = Duration::from_millis(1);
let handle = thread::spawn(move || {
let mut count: isize = -1;
loop {
if Instant::now() > deadline {
break;
}
if let Err(err) = cluster.tend() {
log_error_chain!(err, "Error during initial cluster tend");
}
let old_count = count;
count = cluster.nodes().len() as isize;
if count == old_count {
break;
}
thread::sleep(sleep_between_tend);
}
});
handle
.join()
.map_err(|err| format!("Error during initial cluster tend: {:?}", err).into())
}
pub const fn cluster_name(&self) -> &Option<String> {
&self.client_policy.cluster_name
}
pub const fn client_policy(&self) -> &ClientPolicy {
&self.client_policy
}
pub fn add_seeds(&self, new_seeds: &[Host]) -> Result<()> {
let mut seeds = self.seeds.write();
seeds.extend_from_slice(new_seeds);
Ok(())
}
pub fn alias_exists(&self, host: &Host) -> Result<bool> {
let aliases = self.aliases.read();
Ok(aliases.contains_key(host))
}
fn set_partitions(&self, partitions: HashMap<String, Vec<Arc<Node>>>) {
let mut partition_map = self.partition_write_map.write();
*partition_map = partitions;
}
fn partitions(&self) -> Arc<RwLock<HashMap<String, Vec<Arc<Node>>>>> {
self.partition_write_map.clone()
}
pub fn node_partitions(&self, node: &Node, namespace: &str) -> Vec<u16> {
let mut res = vec![];
let partitions = self.partitions();
let partitions = partitions.read();
if let Some(node_array) = partitions.get(namespace) {
let mut i = 0;
for tnode in node_array {
if node == tnode.as_ref() {
res.push(i);
}
i += 1;
}
}
res
}
pub fn update_partitions(&self, node: Arc<Node>) -> Result<()> {
let mut conn = node.get_connection(self.client_policy.timeout)?;
let tokens = PartitionTokenizer::new(&mut conn).map_err(|e| {
conn.invalidate();
e
})?;
let nmap = tokens.update_partition(self.partitions(), node)?;
self.set_partitions(nmap);
Ok(())
}
pub fn seed_nodes(&self) -> bool {
let seed_array = self.seeds.read();
info!("Seeding the cluster. Seeds count: {}", seed_array.len());
let mut list: Vec<Arc<Node>> = vec![];
for seed in &*seed_array {
let mut seed_node_validator = NodeValidator::new(self);
if let Err(err) = seed_node_validator.validate_node(self, seed) {
log_error_chain!(err, "Failed to validate seed host: {}", seed);
continue;
};
for alias in &*seed_node_validator.aliases() {
let nv = if *seed == *alias {
seed_node_validator.clone()
} else {
let mut nv2 = NodeValidator::new(self);
if let Err(err) = nv2.validate_node(self, seed) {
log_error_chain!(err, "Seeding host {} failed with error", alias);
continue;
};
nv2
};
if self.find_node_name(&list, &nv.name) {
continue;
}
let node = self.create_node(nv);
let node = Arc::new(node);
self.add_aliases(node.clone());
list.push(node);
}
}
self.add_nodes_and_aliases(&list);
!list.is_empty()
}
fn find_node_name(&self, list: &[Arc<Node>], name: &str) -> bool {
list.iter().any(|node| node.name() == name)
}
fn find_new_nodes_to_add(&self, hosts: Vec<Host>) -> Vec<Arc<Node>> {
let mut list: Vec<Arc<Node>> = vec![];
for host in hosts {
let mut nv = NodeValidator::new(self);
if let Err(err) = nv.validate_node(self, &host) {
log_error_chain!(err, "Adding node {} failed with error", host.name);
continue;
};
// Duplicate node name found. This usually occurs when the server
// services list contains both internal and external IP addresses
// for the same node. Add new host to list of alias filters
// and do not add new node.
let mut dup = false;
match self.get_node_by_name(&nv.name) {
Ok(node) => {
self.add_alias(host, node.clone());
dup = true;
}
Err(_) => {
if let Some(node) = list.iter().find(|n| n.name() == nv.name) {
self.add_alias(host, node.clone());
dup = true;
}
}
};
if !dup {
let node = self.create_node(nv);
list.push(Arc::new(node));
}
}
list
}
fn create_node(&self, nv: NodeValidator) -> Node {
Node::new(self.client_policy.clone(), Arc::new(nv))
}
fn find_nodes_to_remove(&self, refresh_count: usize) -> Vec<Arc<Node>> {
let nodes = self.nodes();
let mut remove_list: Vec<Arc<Node>> = vec![];
let cluster_size = nodes.len();
for node in nodes {
let tnode = node.clone();
if !node.is_active() {
remove_list.push(tnode);
continue;
}
match cluster_size {
// Single node clusters rely on whether it responded to info requests.
1 if node.failures() > 5 => {
// 5 consecutive info requests failed. Try seeds.
if self.seed_nodes() {
remove_list.push(tnode);
}
}
// Two node clusters require at least one successful refresh before removing.
2 if refresh_count == 1 && node.reference_count() == 0 && node.failures() > 0 => {
remove_list.push(node)
}
_ => {
// Multi-node clusters require two successful node refreshes before removing.
if refresh_count >= 2 && node.reference_count() == 0 {
// Node is not referenced by other nodes.
// Check if node responded to info request.
if node.failures() == 0 {
// Node is alive, but not referenced by other nodes. Check if mapped.
if !self.find_node_in_partition_map(node) {
remove_list.push(tnode);
}
} else {
// Node not responding. Remove it.
remove_list.push(tnode);
}
}
}
}
}
remove_list
}
fn add_nodes_and_aliases(&self, friend_list: &[Arc<Node>]) {
for node in friend_list {
self.add_aliases(node.clone());
}
self.add_nodes(friend_list);
}
fn remove_nodes_and_aliases(&self, mut nodes_to_remove: Vec<Arc<Node>>) {
for node in &mut nodes_to_remove {
for alias in node.aliases() {
self.remove_alias(&alias);
}
if let Some(node) = Arc::get_mut(node) {
node.close();
}
}
self.remove_nodes(&nodes_to_remove);
}
fn add_alias(&self, host: Host, node: Arc<Node>) |
fn remove_alias(&self, host: &Host) {
let mut aliases = self.aliases.write();
aliases.remove(host);
}
fn add_aliases(&self, node: Arc<Node>) {
let mut aliases = self.aliases.write();
for alias in node.aliases() {
aliases.insert(alias, node.clone());
}
}
fn find_node_in_partition_map(&self, filter: Arc<Node>) -> bool {
let partitions = self.partition_write_map.read();
(*partitions)
.values()
.any(|map| map.iter().any(|node| *node == filter))
}
fn add_nodes(&self, friend_list: &[Arc<Node>]) {
if friend_list.is_empty() {
return;
}
let mut nodes = self.nodes();
nodes.extend(friend_list.iter().cloned());
self.set_nodes(nodes)
}
fn remove_nodes(&self, nodes_to_remove: &[Arc<Node>]) {
if nodes_to_remove.is_empty() {
return;
}
let nodes = self.nodes();
let mut node_array: Vec<Arc<Node>> = vec![];
for node in &nodes {
if !nodes_to_remove.contains(node) {
node_array.push(node.clone());
}
}
self.set_nodes(node_array)
}
pub fn is_connected(&self) -> bool {
let nodes = self.nodes();
let closed = self.closed.load(Ordering::Relaxed);
!nodes.is_empty() && !closed
}
pub fn aliases(&self) -> HashMap<Host, Arc<Node>> {
self.aliases.read().clone()
}
pub fn nodes(&self) -> Vec<Arc<Node>> {
self.nodes.read().clone()
}
fn set_nodes(&self, new_nodes: Vec<Arc<Node>>) {
let mut nodes = self.nodes.write();
*nodes = new_nodes;
}
pub fn get_node(&self, partition: &Partition) -> Result<Arc<Node>> {
let partitions = self.partitions();
let partitions = partitions.read();
if let Some(node_array) = partitions.get(partition.namespace) {
if let Some(node) = node_array.get(partition.partition_id) {
return Ok(node.clone());
}
}
self.get_random_node()
}
pub fn get_random_node(&self) -> Result<Arc<Node>> {
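// Round-robin: advance a shared atomic counter, wrap modulo the node
// count, and skip inactive nodes; give up after one full pass.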
let node_array = self.nodes();
let length = node_array.len() as isize;
for _ in 0..length {
let index = ((self.node_index.fetch_add(1, Ordering::Relaxed) + 1) % length).abs();
if let Some(node) = node_array.get(index as usize) {
if node.is_active() {
return Ok(node.clone());
}
}
}
bail!("No active node")
}
pub fn get_node_by_name(&self, node_name: &str) -> Result<Arc<Node>> {
let node_array = self.nodes();
for node in &node_array {
if node.name() == | {
let mut aliases = self.aliases.write();
node.add_alias(host.clone());
aliases.insert(host, node);
} | identifier_body |
mod.rs | //
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations under
// the License.
pub mod node;
pub mod node_validator;
pub mod partition;
pub mod partition_tokenizer;
use std::collections::HashMap;
use std::sync::atomic::{AtomicBool, AtomicIsize, Ordering};
use std::sync::mpsc;
use std::sync::mpsc::{Receiver, Sender, TryRecvError};
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};
use std::vec::Vec;
use parking_lot::{Mutex, RwLock};
pub use self::node::Node;
use self::node_validator::NodeValidator;
use self::partition::Partition;
use self::partition_tokenizer::PartitionTokenizer;
use crate::errors::{ErrorKind, Result};
use crate::net::Host;
use crate::policy::ClientPolicy;
// Cluster encapsulates the aerospike cluster nodes and manages
// them.
#[derive(Debug)]
pub struct Cluster {
// Initial host nodes specified by user.
seeds: Arc<RwLock<Vec<Host>>>,
// All aliases for all nodes in cluster.
aliases: Arc<RwLock<HashMap<Host, Arc<Node>>>>,
// Active nodes in cluster.
nodes: Arc<RwLock<Vec<Arc<Node>>>>,
// Hints for best node for a partition
partition_write_map: Arc<RwLock<HashMap<String, Vec<Arc<Node>>>>>,
// Random node index.
node_index: AtomicIsize,
client_policy: ClientPolicy,
tend_channel: Mutex<Sender<()>>,
closed: AtomicBool,
}
impl Cluster {
pub fn new(policy: ClientPolicy, hosts: &[Host]) -> Result<Arc<Self>> {
let (tx, rx): (Sender<()>, Receiver<()>) = mpsc::channel();
let cluster = Arc::new(Cluster {
client_policy: policy,
seeds: Arc::new(RwLock::new(hosts.to_vec())),
aliases: Arc::new(RwLock::new(HashMap::new())),
nodes: Arc::new(RwLock::new(vec![])),
partition_write_map: Arc::new(RwLock::new(HashMap::new())),
node_index: AtomicIsize::new(0),
tend_channel: Mutex::new(tx),
closed: AtomicBool::new(false),
});
// try to seed connections for first use
Cluster::wait_till_stabilized(cluster.clone())?;
// apply policy rules
if cluster.client_policy.fail_if_not_connected && !cluster.is_connected() {
bail!(ErrorKind::Connection(
"Failed to connect to host(s). The network \
connection(s) to cluster nodes may have timed out, or \
the cluster may be in a state of flux."
.to_string()
));
}
let cluster_for_tend = cluster.clone();
thread::spawn(move || Cluster::tend_thread(cluster_for_tend, rx));
debug!("New cluster initialized and ready to be used...");
Ok(cluster)
}
fn tend_thread(cluster: Arc<Cluster>, rx: Receiver<()>) {
let tend_interval = cluster.client_policy.tend_interval;
loop {
// try to read from the receive channel to see if it hung up
match rx.try_recv() {
Ok(_) => unreachable!(),
// signaled to end
Err(TryRecvError::Disconnected) => break,
Err(TryRecvError::Empty) => {
if let Err(err) = cluster.tend() {
log_error_chain!(err, "Error tending cluster");
}
thread::sleep(tend_interval);
}
}
}
// close all nodes
let nodes = cluster.nodes();
for mut node in nodes {
if let Some(node) = Arc::get_mut(&mut node) {
node.close();
}
}
cluster.set_nodes(vec![]);
}
fn tend(&self) -> Result<()> {
let mut nodes = self.nodes();
// All node additions/deletions are performed in tend thread.
// If active nodes don't exist, seed cluster.
if nodes.is_empty() {
debug!("No connections available; seeding...");
self.seed_nodes();
nodes = self.nodes();
}
let mut friend_list: Vec<Host> = vec![];
let mut refresh_count = 0;
// Refresh all known nodes.
for node in nodes {
let old_gen = node.partition_generation();
if node.is_active() {
match node.refresh(self.aliases()) {
Ok(friends) => {
refresh_count += 1;
if !friends.is_empty() {
friend_list.extend_from_slice(&friends);
}
if old_gen != node.partition_generation() {
self.update_partitions(node.clone())?;
}
}
Err(err) => {
node.increase_failures();
warn!("Node `{}` refresh failed: {}", node, err);
}
}
}
}
// Add nodes in a batch.
let add_list = self.find_new_nodes_to_add(friend_list);
self.add_nodes_and_aliases(&add_list);
// IMPORTANT: Remove must come after add to remove aliases
// Handle nodes changes determined from refreshes.
// Remove nodes in a batch.
let remove_list = self.find_nodes_to_remove(refresh_count);
self.remove_nodes_and_aliases(remove_list);
Ok(())
}
fn wait_till_stabilized(cluster: Arc<Cluster>) -> Result<()> {
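// Tend repeatedly until the discovered node count stops changing between
// consecutive passes, or until the client policy timeout (3s if unset)
// expires, whichever comes first.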
let timeout = cluster
.client_policy()
.timeout
.unwrap_or_else(|| Duration::from_secs(3));
let deadline = Instant::now() + timeout;
let sleep_between_tend = Duration::from_millis(1);
let handle = thread::spawn(move || {
let mut count: isize = -1;
loop {
if Instant::now() > deadline {
break;
}
if let Err(err) = cluster.tend() {
log_error_chain!(err, "Error during initial cluster tend");
}
let old_count = count;
count = cluster.nodes().len() as isize;
if count == old_count {
break;
}
thread::sleep(sleep_between_tend);
}
});
handle
.join()
.map_err(|err| format!("Error during initial cluster tend: {:?}", err).into())
}
pub const fn cluster_name(&self) -> &Option<String> {
&self.client_policy.cluster_name
}
pub const fn client_policy(&self) -> &ClientPolicy {
&self.client_policy
}
pub fn add_seeds(&self, new_seeds: &[Host]) -> Result<()> {
let mut seeds = self.seeds.write();
seeds.extend_from_slice(new_seeds);
Ok(())
}
pub fn alias_exists(&self, host: &Host) -> Result<bool> {
let aliases = self.aliases.read();
Ok(aliases.contains_key(host))
}
fn set_partitions(&self, partitions: HashMap<String, Vec<Arc<Node>>>) {
let mut partition_map = self.partition_write_map.write();
*partition_map = partitions;
}
fn partitions(&self) -> Arc<RwLock<HashMap<String, Vec<Arc<Node>>>>> {
self.partition_write_map.clone()
}
pub fn node_partitions(&self, node: &Node, namespace: &str) -> Vec<u16> {
let mut res = vec![];
let partitions = self.partitions();
let partitions = partitions.read();
if let Some(node_array) = partitions.get(namespace) {
let mut i = 0;
for tnode in node_array {
if node == tnode.as_ref() {
res.push(i);
}
i += 1;
}
}
res
}
pub fn update_partitions(&self, node: Arc<Node>) -> Result<()> {
let mut conn = node.get_connection(self.client_policy.timeout)?;
let tokens = PartitionTokenizer::new(&mut conn).map_err(|e| {
conn.invalidate();
e
})?;
let nmap = tokens.update_partition(self.partitions(), node)?;
self.set_partitions(nmap);
Ok(())
}
pub fn seed_nodes(&self) -> bool {
let seed_array = self.seeds.read();
info!("Seeding the cluster. Seeds count: {}", seed_array.len());
let mut list: Vec<Arc<Node>> = vec![];
for seed in &*seed_array {
let mut seed_node_validator = NodeValidator::new(self);
if let Err(err) = seed_node_validator.validate_node(self, seed) {
log_error_chain!(err, "Failed to validate seed host: {}", seed);
continue;
};
for alias in &*seed_node_validator.aliases() {
let nv = if *seed == *alias {
seed_node_validator.clone()
} else {
let mut nv2 = NodeValidator::new(self);
if let Err(err) = nv2.validate_node(self, seed) {
log_error_chain!(err, "Seeding host {} failed with error", alias);
continue;
};
nv2
};
if self.find_node_name(&list, &nv.name) {
continue;
}
let node = self.create_node(nv);
let node = Arc::new(node);
self.add_aliases(node.clone());
list.push(node);
}
}
self.add_nodes_and_aliases(&list);
!list.is_empty()
}
fn find_node_name(&self, list: &[Arc<Node>], name: &str) -> bool {
list.iter().any(|node| node.name() == name)
}
fn find_new_nodes_to_add(&self, hosts: Vec<Host>) -> Vec<Arc<Node>> {
let mut list: Vec<Arc<Node>> = vec![];
for host in hosts {
let mut nv = NodeValidator::new(self);
if let Err(err) = nv.validate_node(self, &host) {
log_error_chain!(err, "Adding node {} failed with error", host.name);
continue;
};
// Duplicate node name found. This usually occurs when the server
// services list contains both internal and external IP addresses
// for the same node. Add new host to list of alias filters
// and do not add new node.
let mut dup = false;
match self.get_node_by_name(&nv.name) {
Ok(node) => {
self.add_alias(host, node.clone());
dup = true;
}
Err(_) => {
if let Some(node) = list.iter().find(|n| n.name() == nv.name) {
self.add_alias(host, node.clone());
dup = true;
}
}
};
if !dup {
let node = self.create_node(nv);
list.push(Arc::new(node));
}
}
list
}
fn create_node(&self, nv: NodeValidator) -> Node {
Node::new(self.client_policy.clone(), Arc::new(nv))
}
fn find_nodes_to_remove(&self, refresh_count: usize) -> Vec<Arc<Node>> {
let nodes = self.nodes();
let mut remove_list: Vec<Arc<Node>> = vec![];
let cluster_size = nodes.len();
for node in nodes {
let tnode = node.clone();
if !node.is_active() {
remove_list.push(tnode);
continue;
}
match cluster_size {
// Single node clusters rely on whether it responded to info requests.
1 if node.failures() > 5 => {
// 5 consecutive info requests failed. Try seeds.
if self.seed_nodes() {
remove_list.push(tnode);
}
}
// Two node clusters require at least one successful refresh before removing.
2 if refresh_count == 1 && node.reference_count() == 0 && node.failures() > 0 => {
remove_list.push(node)
}
_ => {
// Multi-node clusters require two successful node refreshes before removing.
if refresh_count >= 2 && node.reference_count() == 0 {
// Node is not referenced by other nodes.
// Check if node responded to info request.
if node.failures() == 0 {
// Node is alive, but not referenced by other nodes. Check if mapped.
if!self.find_node_in_partition_map(node) |
} else {
// Node not responding. Remove it.
remove_list.push(tnode);
}
}
}
}
}
remove_list
}
fn add_nodes_and_aliases(&self, friend_list: &[Arc<Node>]) {
for node in friend_list {
self.add_aliases(node.clone());
}
self.add_nodes(friend_list);
}
fn remove_nodes_and_aliases(&self, mut nodes_to_remove: Vec<Arc<Node>>) {
for node in &mut nodes_to_remove {
for alias in node.aliases() {
self.remove_alias(&alias);
}
if let Some(node) = Arc::get_mut(node) {
node.close();
}
}
self.remove_nodes(&nodes_to_remove);
}
fn add_alias(&self, host: Host, node: Arc<Node>) {
let mut aliases = self.aliases.write();
node.add_alias(host.clone());
aliases.insert(host, node);
}
fn remove_alias(&self, host: &Host) {
let mut aliases = self.aliases.write();
aliases.remove(host);
}
fn add_aliases(&self, node: Arc<Node>) {
let mut aliases = self.aliases.write();
for alias in node.aliases() {
aliases.insert(alias, node.clone());
}
}
fn find_node_in_partition_map(&self, filter: Arc<Node>) -> bool {
let partitions = self.partition_write_map.read();
(*partitions)
.values()
.any(|map| map.iter().any(|node| *node == filter))
}
fn add_nodes(&self, friend_list: &[Arc<Node>]) {
if friend_list.is_empty() {
return;
}
let mut nodes = self.nodes();
nodes.extend(friend_list.iter().cloned());
self.set_nodes(nodes)
}
fn remove_nodes(&self, nodes_to_remove: &[Arc<Node>]) {
if nodes_to_remove.is_empty() {
return;
}
let nodes = self.nodes();
let mut node_array: Vec<Arc<Node>> = vec![];
for node in &nodes {
if !nodes_to_remove.contains(node) {
node_array.push(node.clone());
}
}
self.set_nodes(node_array)
}
pub fn is_connected(&self) -> bool {
let nodes = self.nodes();
let closed = self.closed.load(Ordering::Relaxed);
!nodes.is_empty() && !closed
}
pub fn aliases(&self) -> HashMap<Host, Arc<Node>> {
self.aliases.read().clone()
}
pub fn nodes(&self) -> Vec<Arc<Node>> {
self.nodes.read().clone()
}
fn set_nodes(&self, new_nodes: Vec<Arc<Node>>) {
let mut nodes = self.nodes.write();
*nodes = new_nodes;
}
pub fn get_node(&self, partition: &Partition) -> Result<Arc<Node>> {
let partitions = self.partitions();
let partitions = partitions.read();
if let Some(node_array) = partitions.get(partition.namespace) {
if let Some(node) = node_array.get(partition.partition_id) {
return Ok(node.clone());
}
}
self.get_random_node()
}
pub fn get_random_node(&self) -> Result<Arc<Node>> {
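// Round-robin: advance a shared atomic counter, wrap modulo the node
// count, and skip inactive nodes; give up after one full pass.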
let node_array = self.nodes();
let length = node_array.len() as isize;
for _ in 0..length {
let index = ((self.node_index.fetch_add(1, Ordering::Relaxed) + 1) % length).abs();
if let Some(node) = node_array.get(index as usize) {
if node.is_active() {
return Ok(node.clone());
}
}
}
bail!("No active node")
}
pub fn get_node_by_name(&self, node_name: &str) -> Result<Arc<Node>> {
let node_array = self.nodes();
for node in &node_array {
if node.name() == | {
remove_list.push(tnode);
} | conditional_block |
mod.rs | 0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations under
// the License.
pub mod node;
pub mod node_validator;
pub mod partition;
pub mod partition_tokenizer;
use std::collections::HashMap;
use std::sync::atomic::{AtomicBool, AtomicIsize, Ordering};
use std::sync::mpsc;
use std::sync::mpsc::{Receiver, Sender, TryRecvError};
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};
use std::vec::Vec;
use parking_lot::{Mutex, RwLock};
pub use self::node::Node;
use self::node_validator::NodeValidator;
use self::partition::Partition;
use self::partition_tokenizer::PartitionTokenizer;
use crate::errors::{ErrorKind, Result};
use crate::net::Host;
use crate::policy::ClientPolicy;
// Cluster encapsulates the aerospike cluster nodes and manages
// them.
#[derive(Debug)]
pub struct Cluster {
// Initial host nodes specified by user.
seeds: Arc<RwLock<Vec<Host>>>,
// All aliases for all nodes in cluster.
aliases: Arc<RwLock<HashMap<Host, Arc<Node>>>>,
// Active nodes in cluster.
nodes: Arc<RwLock<Vec<Arc<Node>>>>,
// Hints for best node for a partition
partition_write_map: Arc<RwLock<HashMap<String, Vec<Arc<Node>>>>>,
// Random node index.
node_index: AtomicIsize,
client_policy: ClientPolicy,
tend_channel: Mutex<Sender<()>>,
closed: AtomicBool,
}
impl Cluster {
pub fn new(policy: ClientPolicy, hosts: &[Host]) -> Result<Arc<Self>> {
let (tx, rx): (Sender<()>, Receiver<()>) = mpsc::channel();
let cluster = Arc::new(Cluster {
client_policy: policy,
seeds: Arc::new(RwLock::new(hosts.to_vec())),
aliases: Arc::new(RwLock::new(HashMap::new())),
nodes: Arc::new(RwLock::new(vec![])),
partition_write_map: Arc::new(RwLock::new(HashMap::new())),
node_index: AtomicIsize::new(0),
tend_channel: Mutex::new(tx),
closed: AtomicBool::new(false),
});
// try to seed connections for first use
Cluster::wait_till_stabilized(cluster.clone())?;
// apply policy rules
if cluster.client_policy.fail_if_not_connected && !cluster.is_connected() {
bail!(ErrorKind::Connection(
"Failed to connect to host(s). The network \
connection(s) to cluster nodes may have timed out, or \
the cluster may be in a state of flux."
.to_string()
));
}
let cluster_for_tend = cluster.clone();
thread::spawn(move || Cluster::tend_thread(cluster_for_tend, rx));
debug!("New cluster initialized and ready to be used...");
Ok(cluster)
}
fn tend_thread(cluster: Arc<Cluster>, rx: Receiver<()>) {
let tend_interval = cluster.client_policy.tend_interval;
loop {
// try to read from the receive channel to see if it hung up
match rx.try_recv() {
Ok(_) => unreachable!(),
// signaled to end
Err(TryRecvError::Disconnected) => break,
Err(TryRecvError::Empty) => {
if let Err(err) = cluster.tend() {
log_error_chain!(err, "Error tending cluster");
}
thread::sleep(tend_interval);
}
}
}
// close all nodes
let nodes = cluster.nodes();
for mut node in nodes {
if let Some(node) = Arc::get_mut(&mut node) {
node.close();
}
}
cluster.set_nodes(vec![]);
}
fn tend(&self) -> Result<()> {
let mut nodes = self.nodes();
// All node additions/deletions are performed in tend thread.
// If active nodes don't exist, seed cluster.
if nodes.is_empty() {
debug!("No connections available; seeding...");
self.seed_nodes();
nodes = self.nodes();
}
let mut friend_list: Vec<Host> = vec![];
let mut refresh_count = 0;
// Refresh all known nodes.
for node in nodes {
let old_gen = node.partition_generation();
if node.is_active() {
match node.refresh(self.aliases()) {
Ok(friends) => {
refresh_count += 1;
if !friends.is_empty() {
friend_list.extend_from_slice(&friends);
}
if old_gen != node.partition_generation() {
self.update_partitions(node.clone())?;
}
}
Err(err) => {
node.increase_failures();
warn!("Node `{}` refresh failed: {}", node, err);
}
}
}
}
// Add nodes in a batch.
let add_list = self.find_new_nodes_to_add(friend_list);
self.add_nodes_and_aliases(&add_list);
// IMPORTANT: Remove must come after add to remove aliases
// Handle nodes changes determined from refreshes.
// Remove nodes in a batch.
let remove_list = self.find_nodes_to_remove(refresh_count);
self.remove_nodes_and_aliases(remove_list);
Ok(())
}
fn wait_till_stabilized(cluster: Arc<Cluster>) -> Result<()> {
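// Tend repeatedly until the discovered node count stops changing between
// consecutive passes, or until the client policy timeout (3s if unset)
// expires, whichever comes first.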
let timeout = cluster
.client_policy()
.timeout
.unwrap_or_else(|| Duration::from_secs(3));
let deadline = Instant::now() + timeout;
let sleep_between_tend = Duration::from_millis(1);
let handle = thread::spawn(move || {
let mut count: isize = -1;
loop {
if Instant::now() > deadline {
break;
}
if let Err(err) = cluster.tend() {
log_error_chain!(err, "Error during initial cluster tend");
}
let old_count = count;
count = cluster.nodes().len() as isize;
if count == old_count {
break;
}
thread::sleep(sleep_between_tend);
}
});
handle
.join()
.map_err(|err| format!("Error during initial cluster tend: {:?}", err).into())
}
pub const fn cluster_name(&self) -> &Option<String> {
&self.client_policy.cluster_name
}
pub const fn client_policy(&self) -> &ClientPolicy {
&self.client_policy
}
pub fn add_seeds(&self, new_seeds: &[Host]) -> Result<()> {
let mut seeds = self.seeds.write();
seeds.extend_from_slice(new_seeds);
Ok(())
}
pub fn alias_exists(&self, host: &Host) -> Result<bool> {
let aliases = self.aliases.read();
Ok(aliases.contains_key(host))
}
fn set_partitions(&self, partitions: HashMap<String, Vec<Arc<Node>>>) {
let mut partition_map = self.partition_write_map.write();
*partition_map = partitions;
}
fn partitions(&self) -> Arc<RwLock<HashMap<String, Vec<Arc<Node>>>>> {
self.partition_write_map.clone()
}
pub fn node_partitions(&self, node: &Node, namespace: &str) -> Vec<u16> {
let mut res = vec![];
let partitions = self.partitions();
let partitions = partitions.read();
if let Some(node_array) = partitions.get(namespace) {
let mut i = 0;
for tnode in node_array {
if node == tnode.as_ref() {
res.push(i);
}
i += 1;
}
}
res
}
pub fn update_partitions(&self, node: Arc<Node>) -> Result<()> {
let mut conn = node.get_connection(self.client_policy.timeout)?;
let tokens = PartitionTokenizer::new(&mut conn).map_err(|e| {
conn.invalidate();
e
})?;
let nmap = tokens.update_partition(self.partitions(), node)?;
self.set_partitions(nmap);
Ok(())
}
pub fn seed_nodes(&self) -> bool {
let seed_array = self.seeds.read();
info!("Seeding the cluster. Seeds count: {}", seed_array.len());
let mut list: Vec<Arc<Node>> = vec![];
for seed in &*seed_array {
let mut seed_node_validator = NodeValidator::new(self);
if let Err(err) = seed_node_validator.validate_node(self, seed) {
log_error_chain!(err, "Failed to validate seed host: {}", seed);
continue;
};
for alias in &*seed_node_validator.aliases() {
let nv = if *seed == *alias {
seed_node_validator.clone()
} else {
let mut nv2 = NodeValidator::new(self);
if let Err(err) = nv2.validate_node(self, seed) {
log_error_chain!(err, "Seeding host {} failed with error", alias);
continue;
};
nv2
};
if self.find_node_name(&list, &nv.name) {
continue;
}
let node = self.create_node(nv);
let node = Arc::new(node);
self.add_aliases(node.clone());
list.push(node);
}
}
self.add_nodes_and_aliases(&list);
!list.is_empty()
}
fn find_node_name(&self, list: &[Arc<Node>], name: &str) -> bool {
list.iter().any(|node| node.name() == name)
}
fn find_new_nodes_to_add(&self, hosts: Vec<Host>) -> Vec<Arc<Node>> {
let mut list: Vec<Arc<Node>> = vec![];
for host in hosts {
let mut nv = NodeValidator::new(self);
if let Err(err) = nv.validate_node(self, &host) {
log_error_chain!(err, "Adding node {} failed with error", host.name);
continue;
};
// Duplicate node name found. This usually occurs when the server
// services list contains both internal and external IP addresses
// for the same node. Add new host to list of alias filters
// and do not add new node.
let mut dup = false;
match self.get_node_by_name(&nv.name) {
Ok(node) => {
self.add_alias(host, node.clone());
dup = true;
}
Err(_) => {
if let Some(node) = list.iter().find(|n| n.name() == nv.name) {
self.add_alias(host, node.clone());
dup = true;
}
}
};
if !dup {
let node = self.create_node(nv);
list.push(Arc::new(node));
}
}
list
}
fn create_node(&self, nv: NodeValidator) -> Node {
Node::new(self.client_policy.clone(), Arc::new(nv))
}
fn find_nodes_to_remove(&self, refresh_count: usize) -> Vec<Arc<Node>> {
let nodes = self.nodes();
let mut remove_list: Vec<Arc<Node>> = vec![];
let cluster_size = nodes.len();
for node in nodes {
let tnode = node.clone();
if !node.is_active() {
remove_list.push(tnode);
continue;
}
match cluster_size {
// Single node clusters rely on whether it responded to info requests.
1 if node.failures() > 5 => {
// 5 consecutive info requests failed. Try seeds.
if self.seed_nodes() {
remove_list.push(tnode);
}
}
// Two node clusters require at least one successful refresh before removing.
2 if refresh_count == 1 && node.reference_count() == 0 && node.failures() > 0 => {
remove_list.push(node)
}
_ => {
// Multi-node clusters require two successful node refreshes before removing.
if refresh_count >= 2 && node.reference_count() == 0 {
// Node is not referenced by other nodes.
// Check if node responded to info request.
if node.failures() == 0 {
// Node is alive, but not referenced by other nodes. Check if mapped.
if !self.find_node_in_partition_map(node) {
remove_list.push(tnode);
}
} else {
// Node not responding. Remove it.
remove_list.push(tnode);
}
}
}
}
}
remove_list
}
fn add_nodes_and_aliases(&self, friend_list: &[Arc<Node>]) {
for node in friend_list {
self.add_aliases(node.clone());
}
self.add_nodes(friend_list);
}
fn remove_nodes_and_aliases(&self, mut nodes_to_remove: Vec<Arc<Node>>) {
for node in &mut nodes_to_remove {
for alias in node.aliases() {
self.remove_alias(&alias);
}
if let Some(node) = Arc::get_mut(node) {
node.close();
}
}
self.remove_nodes(&nodes_to_remove);
}
fn add_alias(&self, host: Host, node: Arc<Node>) {
let mut aliases = self.aliases.write();
node.add_alias(host.clone());
aliases.insert(host, node);
}
fn remove_alias(&self, host: &Host) {
let mut aliases = self.aliases.write();
aliases.remove(host);
}
fn add_aliases(&self, node: Arc<Node>) {
let mut aliases = self.aliases.write();
for alias in node.aliases() {
aliases.insert(alias, node.clone());
}
}
fn find_node_in_partition_map(&self, filter: Arc<Node>) -> bool {
let partitions = self.partition_write_map.read();
(*partitions)
.values()
.any(|map| map.iter().any(|node| *node == filter))
}
fn add_nodes(&self, friend_list: &[Arc<Node>]) {
if friend_list.is_empty() {
return;
}
let mut nodes = self.nodes();
nodes.extend(friend_list.iter().cloned());
self.set_nodes(nodes) |
fn remove_nodes(&self, nodes_to_remove: &[Arc<Node>]) {
if nodes_to_remove.is_empty() {
return;
}
let nodes = self.nodes();
let mut node_array: Vec<Arc<Node>> = vec![];
for node in &nodes {
if !nodes_to_remove.contains(node) {
node_array.push(node.clone());
}
}
self.set_nodes(node_array)
}
pub fn is_connected(&self) -> bool {
let nodes = self.nodes();
let closed = self.closed.load(Ordering::Relaxed);
!nodes.is_empty() && !closed
}
pub fn aliases(&self) -> HashMap<Host, Arc<Node>> {
self.aliases.read().clone()
}
pub fn nodes(&self) -> Vec<Arc<Node>> {
self.nodes.read().clone()
}
fn set_nodes(&self, new_nodes: Vec<Arc<Node>>) {
let mut nodes = self.nodes.write();
*nodes = new_nodes;
}
pub fn get_node(&self, partition: &Partition) -> Result<Arc<Node>> {
let partitions = self.partitions();
let partitions = partitions.read();
if let Some(node_array) = partitions.get(partition.namespace) {
if let Some(node) = node_array.get(partition.partition_id) {
return Ok(node.clone());
}
}
self.get_random_node()
}
pub fn get_random_node(&self) -> Result<Arc<Node>> {
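// Round-robin: advance a shared atomic counter, wrap modulo the node
// count, and skip inactive nodes; give up after one full pass.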
let node_array = self.nodes();
let length = node_array.len() as isize;
for _ in 0..length {
let index = ((self.node_index.fetch_add(1, Ordering::Relaxed) + 1) % length).abs();
if let Some(node) = node_array.get(index as usize) {
if node.is_active() {
return Ok(node.clone());
}
}
}
bail!("No active node")
}
pub fn get_node_by_name(&self, node_name: &str) -> Result<Arc<Node>> {
let node_array = self.nodes();
for node in &node_array {
if node.name() == node | } | random_line_split |
mod.rs | //
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations under
// the License.
pub mod node;
pub mod node_validator;
pub mod partition;
pub mod partition_tokenizer;
use std::collections::HashMap;
use std::sync::atomic::{AtomicBool, AtomicIsize, Ordering};
use std::sync::mpsc;
use std::sync::mpsc::{Receiver, Sender, TryRecvError};
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};
use std::vec::Vec;
use parking_lot::{Mutex, RwLock};
pub use self::node::Node;
use self::node_validator::NodeValidator;
use self::partition::Partition;
use self::partition_tokenizer::PartitionTokenizer;
use crate::errors::{ErrorKind, Result};
use crate::net::Host;
use crate::policy::ClientPolicy;
// Cluster encapsulates the aerospike cluster nodes and manages
// them.
#[derive(Debug)]
pub struct Cluster {
// Initial host nodes specified by user.
seeds: Arc<RwLock<Vec<Host>>>,
// All aliases for all nodes in cluster.
aliases: Arc<RwLock<HashMap<Host, Arc<Node>>>>,
// Active nodes in cluster.
nodes: Arc<RwLock<Vec<Arc<Node>>>>,
// Hints for best node for a partition
partition_write_map: Arc<RwLock<HashMap<String, Vec<Arc<Node>>>>>,
// Random node index.
node_index: AtomicIsize,
client_policy: ClientPolicy,
tend_channel: Mutex<Sender<()>>,
closed: AtomicBool,
}
impl Cluster {
pub fn new(policy: ClientPolicy, hosts: &[Host]) -> Result<Arc<Self>> {
let (tx, rx): (Sender<()>, Receiver<()>) = mpsc::channel();
let cluster = Arc::new(Cluster {
client_policy: policy,
seeds: Arc::new(RwLock::new(hosts.to_vec())),
aliases: Arc::new(RwLock::new(HashMap::new())),
nodes: Arc::new(RwLock::new(vec![])),
partition_write_map: Arc::new(RwLock::new(HashMap::new())),
node_index: AtomicIsize::new(0),
tend_channel: Mutex::new(tx),
closed: AtomicBool::new(false),
});
// try to seed connections for first use
Cluster::wait_till_stabilized(cluster.clone())?;
// apply policy rules
if cluster.client_policy.fail_if_not_connected && !cluster.is_connected() {
bail!(ErrorKind::Connection(
"Failed to connect to host(s). The network \
connection(s) to cluster nodes may have timed out, or \
the cluster may be in a state of flux."
.to_string()
));
}
let cluster_for_tend = cluster.clone();
thread::spawn(move || Cluster::tend_thread(cluster_for_tend, rx));
debug!("New cluster initialized and ready to be used...");
Ok(cluster)
}
fn tend_thread(cluster: Arc<Cluster>, rx: Receiver<()>) {
let tend_interval = cluster.client_policy.tend_interval;
loop {
// try to read from the receive channel to see if it hung up
match rx.try_recv() {
Ok(_) => unreachable!(),
// signaled to end
Err(TryRecvError::Disconnected) => break,
Err(TryRecvError::Empty) => {
if let Err(err) = cluster.tend() {
log_error_chain!(err, "Error tending cluster");
}
thread::sleep(tend_interval);
}
}
}
// close all nodes
let nodes = cluster.nodes();
for mut node in nodes {
if let Some(node) = Arc::get_mut(&mut node) {
node.close();
}
}
cluster.set_nodes(vec![]);
}
fn tend(&self) -> Result<()> {
let mut nodes = self.nodes();
// All node additions/deletions are performed in tend thread.
// If active nodes don't exist, seed cluster.
if nodes.is_empty() {
debug!("No connections available; seeding...");
self.seed_nodes();
nodes = self.nodes();
}
let mut friend_list: Vec<Host> = vec![];
let mut refresh_count = 0;
// Refresh all known nodes.
for node in nodes {
let old_gen = node.partition_generation();
if node.is_active() {
match node.refresh(self.aliases()) {
Ok(friends) => {
refresh_count += 1;
if !friends.is_empty() {
friend_list.extend_from_slice(&friends);
}
if old_gen != node.partition_generation() {
self.update_partitions(node.clone())?;
}
}
Err(err) => {
node.increase_failures();
warn!("Node `{}` refresh failed: {}", node, err);
}
}
}
}
// Add nodes in a batch.
let add_list = self.find_new_nodes_to_add(friend_list);
self.add_nodes_and_aliases(&add_list);
// IMPORTANT: Remove must come after add to remove aliases
// Handle nodes changes determined from refreshes.
// Remove nodes in a batch.
let remove_list = self.find_nodes_to_remove(refresh_count);
self.remove_nodes_and_aliases(remove_list);
Ok(())
}
fn wait_till_stabilized(cluster: Arc<Cluster>) -> Result<()> {
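// Tend repeatedly until the discovered node count stops changing between
// consecutive passes, or until the client policy timeout (3s if unset)
// expires, whichever comes first.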
let timeout = cluster
.client_policy()
.timeout
.unwrap_or_else(|| Duration::from_secs(3));
let deadline = Instant::now() + timeout;
let sleep_between_tend = Duration::from_millis(1);
let handle = thread::spawn(move || {
let mut count: isize = -1;
loop {
if Instant::now() > deadline {
break;
}
if let Err(err) = cluster.tend() {
log_error_chain!(err, "Error during initial cluster tend");
}
let old_count = count;
count = cluster.nodes().len() as isize;
if count == old_count {
break;
}
thread::sleep(sleep_between_tend);
}
});
handle
.join()
.map_err(|err| format!("Error during initial cluster tend: {:?}", err).into())
}
pub const fn cluster_name(&self) -> &Option<String> {
&self.client_policy.cluster_name
}
pub const fn client_policy(&self) -> &ClientPolicy {
&self.client_policy
}
pub fn add_seeds(&self, new_seeds: &[Host]) -> Result<()> {
let mut seeds = self.seeds.write();
seeds.extend_from_slice(new_seeds);
Ok(())
}
pub fn alias_exists(&self, host: &Host) -> Result<bool> {
let aliases = self.aliases.read();
Ok(aliases.contains_key(host))
}
fn set_partitions(&self, partitions: HashMap<String, Vec<Arc<Node>>>) {
let mut partition_map = self.partition_write_map.write();
*partition_map = partitions;
}
fn partitions(&self) -> Arc<RwLock<HashMap<String, Vec<Arc<Node>>>>> {
self.partition_write_map.clone()
}
pub fn node_partitions(&self, node: &Node, namespace: &str) -> Vec<u16> {
let mut res = vec![];
let partitions = self.partitions();
let partitions = partitions.read();
if let Some(node_array) = partitions.get(namespace) {
let mut i = 0;
for tnode in node_array {
if node == tnode.as_ref() {
res.push(i);
}
i += 1;
}
}
res
}
pub fn update_partitions(&self, node: Arc<Node>) -> Result<()> {
let mut conn = node.get_connection(self.client_policy.timeout)?;
let tokens = PartitionTokenizer::new(&mut conn).map_err(|e| {
conn.invalidate();
e
})?;
let nmap = tokens.update_partition(self.partitions(), node)?;
self.set_partitions(nmap);
Ok(())
}
pub fn | (&self) -> bool {
let seed_array = self.seeds.read();
info!("Seeding the cluster. Seeds count: {}", seed_array.len());
let mut list: Vec<Arc<Node>> = vec![];
for seed in &*seed_array {
let mut seed_node_validator = NodeValidator::new(self);
if let Err(err) = seed_node_validator.validate_node(self, seed) {
log_error_chain!(err, "Failed to validate seed host: {}", seed);
continue;
};
for alias in &*seed_node_validator.aliases() {
let nv = if *seed == *alias {
seed_node_validator.clone()
} else {
let mut nv2 = NodeValidator::new(self);
if let Err(err) = nv2.validate_node(self, seed) {
log_error_chain!(err, "Seeding host {} failed with error", alias);
continue;
};
nv2
};
if self.find_node_name(&list, &nv.name) {
continue;
}
let node = self.create_node(nv);
let node = Arc::new(node);
self.add_aliases(node.clone());
list.push(node);
}
}
self.add_nodes_and_aliases(&list);
!list.is_empty()
}
fn find_node_name(&self, list: &[Arc<Node>], name: &str) -> bool {
list.iter().any(|node| node.name() == name)
}
fn find_new_nodes_to_add(&self, hosts: Vec<Host>) -> Vec<Arc<Node>> {
let mut list: Vec<Arc<Node>> = vec![];
for host in hosts {
let mut nv = NodeValidator::new(self);
if let Err(err) = nv.validate_node(self, &host) {
log_error_chain!(err, "Adding node {} failed with error", host.name);
continue;
};
// Duplicate node name found. This usually occurs when the server
// services list contains both internal and external IP addresses
// for the same node. Add new host to list of alias filters
// and do not add new node.
let mut dup = false;
match self.get_node_by_name(&nv.name) {
Ok(node) => {
self.add_alias(host, node.clone());
dup = true;
}
Err(_) => {
if let Some(node) = list.iter().find(|n| n.name() == nv.name) {
self.add_alias(host, node.clone());
dup = true;
}
}
};
if !dup {
let node = self.create_node(nv);
list.push(Arc::new(node));
}
}
list
}
fn create_node(&self, nv: NodeValidator) -> Node {
Node::new(self.client_policy.clone(), Arc::new(nv))
}
fn find_nodes_to_remove(&self, refresh_count: usize) -> Vec<Arc<Node>> {
let nodes = self.nodes();
let mut remove_list: Vec<Arc<Node>> = vec![];
let cluster_size = nodes.len();
for node in nodes {
let tnode = node.clone();
if !node.is_active() {
remove_list.push(tnode);
continue;
}
match cluster_size {
// Single-node clusters rely on whether the node responded to info requests.
1 if node.failures() > 5 => {
// 5 consecutive info requests failed. Try seeds.
if self.seed_nodes() {
remove_list.push(tnode);
}
}
// Two-node clusters require at least one successful refresh before removing.
2 if refresh_count == 1 && node.reference_count() == 0 && node.failures() > 0 => {
remove_list.push(node)
}
_ => {
// Multi-node clusters require two successful node refreshes before removing.
if refresh_count >= 2 && node.reference_count() == 0 {
// Node is not referenced by other nodes.
// Check if node responded to info request.
if node.failures() == 0 {
// Node is alive, but not referenced by other nodes. Check if mapped.
if !self.find_node_in_partition_map(node) {
remove_list.push(tnode);
}
} else {
// Node not responding. Remove it.
remove_list.push(tnode);
}
}
}
}
}
remove_list
}
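// Worked example of the policy above (illustrative numbers, not taken from a
// real cluster): in a two-node cluster where exactly one node refreshed
// (refresh_count == 1), the peer is only dropped if nothing references it and
// it has at least one failed info request. In a five-node cluster, a node
// that still answers info requests (failures() == 0) is nevertheless removed
// once it disappears from the partition map, since it owns no partitions.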
fn add_nodes_and_aliases(&self, friend_list: &[Arc<Node>]) {
for node in friend_list {
self.add_aliases(node.clone());
}
self.add_nodes(friend_list);
}
fn remove_nodes_and_aliases(&self, mut nodes_to_remove: Vec<Arc<Node>>) {
for node in &mut nodes_to_remove {
for alias in node.aliases() {
self.remove_alias(&alias);
}
if let Some(node) = Arc::get_mut(node) {
node.close();
}
}
self.remove_nodes(&nodes_to_remove);
}
fn add_alias(&self, host: Host, node: Arc<Node>) {
let mut aliases = self.aliases.write();
node.add_alias(host.clone());
aliases.insert(host, node);
}
fn remove_alias(&self, host: &Host) {
let mut aliases = self.aliases.write();
aliases.remove(host);
}
fn add_aliases(&self, node: Arc<Node>) {
let mut aliases = self.aliases.write();
for alias in node.aliases() {
aliases.insert(alias, node.clone());
}
}
fn find_node_in_partition_map(&self, filter: Arc<Node>) -> bool {
let partitions = self.partition_write_map.read();
(*partitions)
.values()
.any(|map| map.iter().any(|node| *node == filter))
}
fn add_nodes(&self, friend_list: &[Arc<Node>]) {
if friend_list.is_empty() {
return;
}
let mut nodes = self.nodes();
nodes.extend(friend_list.iter().cloned());
self.set_nodes(nodes)
}
fn remove_nodes(&self, nodes_to_remove: &[Arc<Node>]) {
if nodes_to_remove.is_empty() {
return;
}
let nodes = self.nodes();
let mut node_array: Vec<Arc<Node>> = vec![];
for node in &nodes {
if !nodes_to_remove.contains(node) {
node_array.push(node.clone());
}
}
self.set_nodes(node_array)
}
pub fn is_connected(&self) -> bool {
let nodes = self.nodes();
let closed = self.closed.load(Ordering::Relaxed);
!nodes.is_empty() && !closed
}
pub fn aliases(&self) -> HashMap<Host, Arc<Node>> {
self.aliases.read().clone()
}
pub fn nodes(&self) -> Vec<Arc<Node>> {
self.nodes.read().clone()
}
fn set_nodes(&self, new_nodes: Vec<Arc<Node>>) {
let mut nodes = self.nodes.write();
*nodes = new_nodes;
}
pub fn get_node(&self, partition: &Partition) -> Result<Arc<Node>> {
let partitions = self.partitions();
let partitions = partitions.read();
if let Some(node_array) = partitions.get(partition.namespace) {
if let Some(node) = node_array.get(partition.partition_id) {
return Ok(node.clone());
}
}
self.get_random_node()
}
pub fn get_random_node(&self) -> Result<Arc<Node>> {
let node_array = self.nodes();
let length = node_array.len() as isize;
for _ in 0..length {
let index = ((self.node_index.fetch_add(1, Ordering::Relaxed) + 1) % length).abs();
if let Some(node) = node_array.get(index as usize) {
if node.is_active() {
return Ok(node.clone());
}
}
}
bail!("No active node")
}
pub fn get_node_by_name(&self, node_name: &str) -> Result<Arc<Node>> {
let node_array = self.nodes();
for node in &node_array {
if node.name() == | seed_nodes | identifier_name |
session.rs | //! Handle a game.
use std::num::NonZeroU8;
use std::fmt;
use derive_getters::Getters;
use rand::{rngs, Rng};
use crate::game::{self, Tree, Board, Players, Player, Choice, Action, Consequence, Holding};
fn roll_d6s<T: Rng>(d6s: u8, random: &mut T) -> usize {
(0..d6s)
.fold(0, |sum, _| -> usize {
sum + random.gen_range(1, 7)
})
}
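// A dependency-free sketch of what `roll_d6s` computes: the sum of `d6s`
// independent rolls, each uniform in 1..=6. The xorshift64 generator below is
// a stand-in so the example runs without the `rand` crate; the two-argument
// `gen_range(1, 7)` above is the half-open range API of the rand 0.7 era
// (later versions take `1..7`). The modulo here carries a tiny bias that the
// real generator avoids -- this is an illustration only.
#[allow(dead_code)]
fn roll_d6s_sketch(d6s: u8, state: &mut u64) -> usize {
    (0..d6s).fold(0usize, |sum, _| {
        *state ^= *state << 13;
        *state ^= *state >> 7;
        *state ^= *state << 17;
        sum + (*state % 6 + 1) as usize
    })
}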
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub struct LastAttack {
attacker_dice: u8,
attacker_rolled: usize,
defender_dice: u8,
defender_rolled: usize,
}
impl LastAttack {
fn new(
attacker_dice: u8, attacker_rolled: usize, defender_dice: u8, defender_rolled: usize
) -> Self {
LastAttack { attacker_dice, attacker_rolled, defender_dice, defender_rolled }
}
}
impl fmt::Display for LastAttack {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
if self.attacker_rolled == 0 && self.defender_rolled == 0 {
write!(f, "") // Sentinel value for first turn thus no preceding attacks.
} else {
if self.attacker_rolled > self.defender_rolled {
write!(
f,
"Attacker with {} dice rolled {} beating \
defender with {} dice who rolled {}.",
&self.attacker_dice,
&self.attacker_rolled,
&self.defender_dice,
&self.defender_rolled,
)
} else {
write!(
f,
"Defender with {} dice rolled {} holding against \
attacker with {} dice who rolled {}.",
&self.defender_dice,
&self.defender_rolled,
&self.attacker_dice,
&self.attacker_rolled,
)
}
}
}
}
impl Default for LastAttack {
fn default() -> Self {
LastAttack::new(0, 0, 0, 0)
}
}
/// State of game progression. Whether the game is on, over and what kind of over.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Progression {
PlayOn(LastAttack),
GameOverWinner(Player),
GameOverStalemate(Vec<Player>), // Easier to calculate than a draw...
}
/// The state of the session.
#[derive(Debug, Clone, Getters)]
pub struct State {
/// Whether we continue play or not.
game: Progression,
/// Any single-move turns that had to be made first. These include players
/// being knocked out or players only being able to pass their turn.
traversal: Vec<(Board, Choice)>,
/// State of game.
board: Board,
/// Choices available to current player.
choices: Vec<Choice>,
}
impl State {
fn new(
game: Progression,
traversal: &[(Board, Choice)],
board: Board,
choices: &[Choice],
) -> Self {
State {
game,
traversal: traversal
.into_iter()
.map(|(b, c)| (b.to_owned(), c.to_owned()))
.collect(),
board,
choices: choices
.into_iter()
.map(|c| c.to_owned())
.collect(),
}
}
}
/// Generate a `State` from a chosen `Board` consequence and the `Tree` where that `Board`
/// must exist. Loops, skipping over states that have only one choice left (except
/// winning states), and uses some logic to detect draw states.
fn state_from_board(
mut current_board: Board, tree: &Tree, outcome: LastAttack,
) -> Result<State, usize> {
let mut traversal: Vec<(Board, Choice)> = Vec::new();
let mut depth: usize = 1;
let state = loop {
let choices = tree
.fetch_choices(¤t_board)
.ok_or(depth)?;
// If there's only one choice left, it may be a passing/gameover/win move. Or the
// last available attack.
if choices.len() == 1 {
depth += 1;
match choices[0].action() {
Action::Attack(_, _, _, _) => {
// There is one last attack to make. We won't execute this choice
// for the player as that'd be overstepping our bounds. Thus we jump
// out of this loop.
break State::new(
Progression::PlayOn(outcome),
traversal.as_slice(),
current_board,
choices,
);
},
Action::Pass => {
// It'd be cumbersome to manually pass a move. The player can't "do"
// anything. So let's just deal with it automatically.
// In order to do this, we need to figure out the passing consequence.
match choices[0].consequence() {
Consequence::Stalemate(next_board) => break State::new(
Progression::GameOverStalemate(next_board.players().playing()),
traversal.as_slice(),
next_board.to_owned(),
choices,
),
Consequence::Winner(next_board) => break State::new(
Progression::GameOverWinner(next_board.players().current()),
traversal.as_slice(),
next_board.to_owned(),
choices,
),
Consequence::GameOver(next_board) => {
// We need to iterate the progression.
traversal.push((current_board, choices[0].to_owned()));
current_board = next_board.to_owned();
continue;
},
Consequence::TurnOver(next_board) => {
// We need to iterate the progression.
traversal.push((current_board, choices[0].to_owned()));
current_board = next_board.to_owned();
continue;
},
Consequence::Continue(_) => unreachable!(),
}
},
}
}
// If we make it here, there are choices that need to be made.
break State::new(
Progression::PlayOn(outcome), | choices,
);
};
Ok(state)
}
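// The `Err(depth)` contract above, in isolation: on failure the caller learns
// how deep the traversal got, regrows its lookahead to at least that depth,
// and retries. This sketch models that handshake with stand-in types (a `u32`
// board and a closure for the tree lookup); it mirrors the retry loops in
// `Session::new` and `Session::advance` below without using this crate's API.
#[allow(dead_code)]
fn resolve_with_regrowth(
    board: u32,
    mut fetch: impl FnMut(u32, usize) -> Result<u32, usize>,
) -> u32 {
    let mut horizon = 1;
    loop {
        match fetch(board, horizon) {
            Ok(state) => break state,
            // Not enough tree: rebuild to the depth the traversal reached.
            Err(depth) => horizon = depth,
        }
    }
}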
/// A game in progress. The length of `turns` indicates how many turns have passed. Maintains
/// all state of the game.
///
/// ## Invariants
/// 1. The `Tree` will always be valid.
/// 2. The first `State` in the `turns` is the starting position sans any initial traversals.
/// 3. There will always be at least one `State` in the `turns`.
#[derive(Debug, Clone, Getters)]
pub struct Session {
turns: Vec<State>,
tree: Option<Tree>,
move_limit: NonZeroU8,
rand: rngs::ThreadRng,
}
impl Session {
pub fn new(start: Board, tree: Tree, move_limit: NonZeroU8) -> Self {
// The start may contain pass move. Cycle to get at the first true turn.
// This code is a copy of what's happening in `advance` below. TODO: Refactor me.
let mut tree = Some(tree);
let first_turn = loop {
match state_from_board(
start.clone(), tree.as_ref().unwrap(), LastAttack::default()
) {
Ok(state) => break state,
Err(depth) => {
let new_tree = game::start_tree_horizon_limited(
start.clone(), depth, move_limit.get(),
);
tree = Some(new_tree);
},
}
};
Session {
turns: vec![first_turn],
tree,
move_limit,
rand: rand::thread_rng(),
}
}
pub fn reset(self) -> Self {
let first = self.turns.first().unwrap().board.to_owned();
Session::new(
first.clone(),
game::start_tree_horizon_limited(first, 1, self.move_limit.get()),
self.move_limit,
)
}
pub fn current_turn(&self) -> &State {
self.turns.last().unwrap()
}
/// Take an `Action` and advance the game state. Advances the tree if necessary. Takes
/// an `index` of the `[Choice]`. The `Choice` will always be an attacking action.
pub fn advance(&mut self, index: usize) -> Result<&State, String> {
let choice = self
.current_turn()
.choices()
.get(index)
.ok_or("Index out of bounds.".to_owned())?
.to_owned();
let (attacker_coordinate, attacker_dice, defender_dice) = match choice.action() {
Action::Attack(ac, _, ad, dd) => (*ac, *ad, *dd),
Action::Pass => unreachable!(), // Must never happen. `Session` must always
// return with attack choices or game over.
};
let attacker_roll = roll_d6s(attacker_dice, &mut self.rand);
let defender_roll = roll_d6s(defender_dice, &mut self.rand);
let outcome = LastAttack::new(
attacker_dice, attacker_roll, defender_dice, defender_roll
);
let next_board = if attacker_roll > defender_roll {
// Board advances due to win.
choice.consequence().board().to_owned()
} else {
// Board stays the same on a loss, except one move is used up and the losing hex is frozen.
let current_board = &self.current_turn().board;
Board::new(
*current_board.players(),
current_board
.grid()
.fork_with(|coordinate, hold| {
// Freeze the losing hex til next turn.
if coordinate == &attacker_coordinate {
Holding::new(hold.owner(), hold.dice(), false)
} else {
hold
}
}),
*current_board.captured_dice(),
*current_board.moved() + 1,
)
};
let state = loop {
match state_from_board(
next_board.clone(), &self.tree.as_ref().unwrap(), outcome,
) {
Ok(state) => break state,
Err(depth) => {
let new_tree = game::start_tree_horizon_limited(
next_board.to_owned(), depth, self.move_limit.get(),
);
self.tree = Some(new_tree);
},
}
};
self.turns.push(state);
Ok(self.current_turn())
}
/// Score the tree up to the depth specified in `horizon`. Will then edit current
/// `State` to put the scoring into the current choices. A deep horizon will cause the
/// system to lock up. High chance that an OOM error will follow.
pub fn score_with_depth_horizon(&mut self, horizon: usize) -> &State {
let current_board = self.current_turn().board.to_owned();
let tree = game::start_tree_horizon_limited(
current_board, horizon, self.move_limit.get(),
);
let _ = game::score_tree(&tree);
let choices = tree.fetch_choices(tree.root()).unwrap().to_owned();
let last_state = self.turns.last_mut().unwrap();
last_state.choices = choices;
self.tree = Some(tree);
last_state
}
/// Score the tree up to the board insert budget specified. The first tree layer,
/// though, will be computed without taking the budget into account, so that there will
/// always be all available choices for the turn.
pub fn score_with_insert_budget(&mut self, insert_budget: usize) -> &State {
let current_board = self.current_turn().board.to_owned();
let tree = game::start_tree_insert_budgeted(
current_board, insert_budget, self.move_limit.get(),
);
let _ = game::score_tree(&tree);
let choices = tree.fetch_choices(tree.root()).unwrap().to_owned();
let last_state = self.turns.last_mut().unwrap();
last_state.choices = choices;
self.tree = Some(tree);
last_state
}
}
/// Setup for a game session. Can set the number of players and the board size, or use
/// canned boards (feed it a starting position). The board can only be rectangular.
#[derive(Debug, Clone, Getters)]
pub struct Setup {
players: Players,
board: Option<Board>,
move_limit: NonZeroU8,
}
impl Setup {
pub fn new() -> Self {
Setup {
players: Players::new(2),
board: None,
move_limit: NonZeroU8::new(6).unwrap(),
}
}
/// If the number of players changes in any way, it will invalidate the `Board`.
pub fn set_players(&mut self, players: Players) -> &mut Self {
if self.players != players {
self.board = None;
}
self.players = players;
self
}
pub fn set_move_limit(&mut self, move_limit: NonZeroU8) -> &mut Self {
self.move_limit = move_limit;
self
}
/// Set the board. This will also set the players since the `Board` lists all state.
pub fn set_board(&mut self, board: Board) -> &mut Self {
self.players = *board.players();
self.board = Some(board);
self
}
/// Will generate a new board using the loaded in `Players` setting.
pub fn gen_board(&mut self, columns: u32, rows: u32) -> &mut Self {
self.board = Some(game::generate_random_board(columns, rows, self.players));
self
}
/// Produce a game session! Will return an error if there is no `Board` setup. Boards
/// greater than 3x3 will hang the system as the current state of the library is to
/// 'solve' the game by resolving the entire tree of every possible action.
pub fn session(&self) -> Result<Session, String> {
if let Some(board) = self.board.clone() {
let tree = game::start_tree_horizon_limited(
board.clone(), 1, self.move_limit.get());
Ok(Session::new(board, tree, self.move_limit))
} else {
Err("No board set.".to_owned())
}
}
}
impl Default for Setup {
fn default() -> Self {
Setup::new()
}
}
#[cfg(test)]
mod test {
use std::error;
use crate::{game, session};
use super::*;
#[test]
fn state_from_board_2x1() -> Result<(), Box<dyn error::Error>> {
let start = game::canned_2x1_start01();
let s_grid = start.grid().to_owned();
let tree = game::build_tree(start.clone(), 10);
let last_attack = LastAttack::default();
let state = state_from_board(start, &tree, last_attack).unwrap();
let f_grid = state.board().grid().to_owned();
assert!(s_grid == f_grid);
Ok(())
}
#[test]
fn state_from_board_2x2() -> Result<(), Box<dyn error::Error>> {
let start = game::canned_2x2_start01();
let s_grid = start.grid().to_owned();
let tree = game::build_tree(start.clone(), 10);
let last_attack = LastAttack::default();
let state = state_from_board(start, &tree, last_attack).unwrap();
let f_grid = state.board().grid().to_owned();
assert!(s_grid == f_grid);
Ok(())
}
#[test]
fn start_grid_matches_2x1() -> Result<(), Box<dyn error::Error>> {
let start = game::canned_2x1_start01();
let s_grid = start.grid().to_owned();
let session = session::Setup::new()
.set_board(start)
.session()?;
let state = session.current_turn().to_owned();
let f_grid = state.board().grid().to_owned();
assert!(s_grid == f_grid);
Ok(())
}
#[test]
fn start_grid_matches_2x2() -> Result<(), Box<dyn error::Error>> {
let start = game::canned_2x2_start01();
let s_grid = start.grid().to_owned();
let session = session::Setup::new()
.set_board(start)
.session()?;
let state = session.current_turn().to_owned();
let f_grid = state.board().grid().to_owned();
assert!(s_grid == f_grid);
Ok(())
}
} | traversal.as_slice(),
current_board, | random_line_split |
session.rs | //! Handle a game.
use std::num::NonZeroU8;
use std::fmt;
use derive_getters::Getters;
use rand::{rngs, Rng};
use crate::game::{self, Tree, Board, Players, Player, Choice, Action, Consequence, Holding};
fn roll_d6s<T: Rng>(d6s: u8, random: &mut T) -> usize {
(0..d6s)
.fold(0, |sum, _| -> usize {
sum + random.gen_range(1, 7)
})
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub struct LastAttack {
attacker_dice: u8,
attacker_rolled: usize,
defender_dice: u8,
defender_rolled: usize,
}
impl LastAttack {
fn new(
attacker_dice: u8, attacker_rolled: usize, defender_dice: u8, defender_rolled: usize
) -> Self {
LastAttack { attacker_dice, attacker_rolled, defender_dice, defender_rolled }
}
}
impl fmt::Display for LastAttack {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
if self.attacker_rolled == 0 && self.defender_rolled == 0 {
write!(f, "") // Sentinel value for first turn thus no preceding attacks.
} else {
if self.attacker_rolled > self.defender_rolled {
write!(
f,
"Attacker with {} dice rolled {} beating \
defender with {} dice who rolled {}.",
&self.attacker_dice,
&self.attacker_rolled,
&self.defender_dice,
&self.defender_rolled,
)
} else {
write!(
f,
"Defender with {} dice rolled {} holding against \
attacker with {} dice who rolled {}.",
&self.defender_dice,
&self.defender_rolled,
&self.attacker_dice,
&self.attacker_rolled,
)
}
}
}
}
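// One detail worth pinning down: ties go to the defender, because the
// comparison above is strict (`>`). The check below relies only on the struct
// and wording already defined in this file.
#[cfg(test)]
mod display_tie_sketch {
    use super::LastAttack;
    #[test]
    fn equal_rolls_read_as_a_hold() {
        let tied = LastAttack::new(3, 9, 2, 9);
        assert!(tied.to_string().starts_with("Defender"));
    }
}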
impl Default for LastAttack {
fn default() -> Self {
LastAttack::new(0, 0, 0, 0)
}
}
/// State of game progression. Whether the game is on, over and what kind of over.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Progression {
PlayOn(LastAttack),
GameOverWinner(Player),
GameOverStalemate(Vec<Player>), // Easier to calculate than a draw...
}
/// The state of the session.
#[derive(Debug, Clone, Getters)]
pub struct State {
/// Whether we continue play or not.
game: Progression,
/// Any single-move turns that had to be made first. These include players
/// being knocked out or players only being able to pass their turn.
traversal: Vec<(Board, Choice)>,
/// State of game.
board: Board,
/// Choices available to current player.
choices: Vec<Choice>,
}
impl State {
fn new(
game: Progression,
traversal: &[(Board, Choice)],
board: Board,
choices: &[Choice],
) -> Self {
State {
game,
traversal: traversal
.into_iter()
.map(|(b, c)| (b.to_owned(), c.to_owned()))
.collect(),
board,
choices: choices
.into_iter()
.map(|c| c.to_owned())
.collect(),
}
}
}
/// Generate a `State` from a chosen `Board` consequence and the `Tree` where that `Board`
/// must exist. Loops, skipping over states that have only one choice left (except
/// winning states), and uses some logic to detect draw states.
fn state_from_board(
mut current_board: Board, tree: &Tree, outcome: LastAttack,
) -> Result<State, usize> {
let mut traversal: Vec<(Board, Choice)> = Vec::new();
let mut depth: usize = 1;
let state = loop {
let choices = tree
.fetch_choices(¤t_board)
.ok_or(depth)?;
// If there's only one choice left, it may be a passing/gameover/win move. Or the
// last available attack.
if choices.len() == 1 {
depth += 1;
match choices[0].action() {
Action::Attack(_, _, _, _) => {
// There is one last attack to make. We won't execute this choice
// for the player as that'd be overstepping our bounds. Thus we jump
// out of this loop.
break State::new(
Progression::PlayOn(outcome),
traversal.as_slice(),
current_board,
choices,
);
},
Action::Pass => {
// It'd be cumbersome to manually pass a move. The player can't "do"
// anything. So let's just deal with it automatically.
// In order to do this, we need to figure out the passing consequence.
match choices[0].consequence() {
Consequence::Stalemate(next_board) => break State::new(
Progression::GameOverStalemate(next_board.players().playing()),
traversal.as_slice(),
next_board.to_owned(),
choices,
),
Consequence::Winner(next_board) => break State::new(
Progression::GameOverWinner(next_board.players().current()),
traversal.as_slice(),
next_board.to_owned(),
choices,
),
Consequence::GameOver(next_board) => {
// We need to iterate the progression.
traversal.push((current_board, choices[0].to_owned()));
current_board = next_board.to_owned();
continue;
},
Consequence::TurnOver(next_board) => {
// We need to iterate the progression.
traversal.push((current_board, choices[0].to_owned()));
current_board = next_board.to_owned();
continue;
},
Consequence::Continue(_) => unreachable!(),
}
},
}
}
// If we make it here, there are choices that need to be made.
break State::new(
Progression::PlayOn(outcome),
traversal.as_slice(),
current_board,
choices,
);
};
Ok(state)
}
/// A game in progress. The length of `turns` indicates how many turns have passed. Maintains
/// all state of the game.
///
/// ## Invariants
/// 1. The `Tree` will always be valid.
/// 2. The first `State` in the `turns` is the starting position sans any initial traversals.
/// 3. There will always be at least one `State` in the `turns`.
#[derive(Debug, Clone, Getters)]
pub struct Session {
turns: Vec<State>,
tree: Option<Tree>,
move_limit: NonZeroU8,
rand: rngs::ThreadRng,
}
impl Session {
pub fn new(start: Board, tree: Tree, move_limit: NonZeroU8) -> Self | turns: vec![first_turn],
tree,
move_limit,
rand: rand::thread_rng(),
}
}
pub fn reset(self) -> Self {
let first = self.turns.first().unwrap().board.to_owned();
Session::new(
first.clone(),
game::start_tree_horizon_limited(first, 1, self.move_limit.get()),
self.move_limit,
)
}
pub fn current_turn(&self) -> &State {
self.turns.last().unwrap()
}
/// Take an `Action` and advance the game state. Advances the tree if necessary. Takes
/// an `index` of the `[Choice]`. The `Choice` will always be an attacking action.
pub fn advance(&mut self, index: usize) -> Result<&State, String> {
let choice = self
.current_turn()
.choices()
.get(index)
.ok_or("Index out of bounds.".to_owned())?
.to_owned();
let (attacker_coordinate, attacker_dice, defender_dice) = match choice.action() {
Action::Attack(ac, _, ad, dd) => (*ac, *ad, *dd),
Action::Pass => unreachable!(), // Must never happen. `Session` must always
// return with attack choices or game over.
};
let attacker_roll = roll_d6s(attacker_dice, &mut self.rand);
let defender_roll = roll_d6s(defender_dice, &mut self.rand);
let outcome = LastAttack::new(
attacker_dice, attacker_roll, defender_dice, defender_roll
);
let next_board = if attacker_roll > defender_roll {
// Board advances due to win.
choice.consequence().board().to_owned()
} else {
// Board stays the same on a loss, except one move is used up and the losing hex is frozen.
let current_board = &self.current_turn().board;
Board::new(
*current_board.players(),
current_board
.grid()
.fork_with(|coordinate, hold| {
// Freeze the losing hex til next turn.
if coordinate == &attacker_coordinate {
Holding::new(hold.owner(), hold.dice(), false)
} else {
hold
}
}),
*current_board.captured_dice(),
*current_board.moved() + 1,
)
};
let state = loop {
match state_from_board(
next_board.clone(), &self.tree.as_ref().unwrap(), outcome,
) {
Ok(state) => break state,
Err(depth) => {
let new_tree = game::start_tree_horizon_limited(
next_board.to_owned(), depth, self.move_limit.get(),
);
self.tree = Some(new_tree);
},
}
};
self.turns.push(state);
Ok(self.current_turn())
}
/// Score the tree up to the depth specified in `horizon`. Will then edit current
/// `State` to put the scoring into the current choices. A deep horizon will cause the
/// system to lock up. High chance that an OOM error will follow.
pub fn score_with_depth_horizon(&mut self, horizon: usize) -> &State {
let current_board = self.current_turn().board.to_owned();
let tree = game::start_tree_horizon_limited(
current_board, horizon, self.move_limit.get(),
);
let _ = game::score_tree(&tree);
let choices = tree.fetch_choices(tree.root()).unwrap().to_owned();
let last_state = self.turns.last_mut().unwrap();
last_state.choices = choices;
self.tree = Some(tree);
last_state
}
/// Score the tree up to the board insert budget specified. The first tree layer,
/// though, will be computed without taking the budget into account, so that there will
/// always be all available choices for the turn.
pub fn score_with_insert_budget(&mut self, insert_budget: usize) -> &State {
let current_board = self.current_turn().board.to_owned();
let tree = game::start_tree_insert_budgeted(
current_board, insert_budget, self.move_limit.get(),
);
let _ = game::score_tree(&tree);
let choices = tree.fetch_choices(tree.root()).unwrap().to_owned();
let last_state = self.turns.last_mut().unwrap();
last_state.choices = choices;
self.tree = Some(tree);
last_state
}
}
/// Setup for a game session. Can set the number of players and the board size, or use
/// canned boards (feed it a starting position). The board can only be rectangular.
#[derive(Debug, Clone, Getters)]
pub struct Setup {
players: Players,
board: Option<Board>,
move_limit: NonZeroU8,
}
impl Setup {
pub fn new() -> Self {
Setup {
players: Players::new(2),
board: None,
move_limit: NonZeroU8::new(6).unwrap(),
}
}
/// If the number of players changes in any way, it will invalidate the `Board`.
pub fn set_players(&mut self, players: Players) -> &mut Self {
if self.players != players {
self.board = None;
}
self.players = players;
self
}
pub fn set_move_limit(&mut self, move_limit: NonZeroU8) -> &mut Self {
self.move_limit = move_limit;
self
}
/// Set the board. This will also set the players since the `Board` lists all state.
pub fn set_board(&mut self, board: Board) -> &mut Self {
self.players = *board.players();
self.board = Some(board);
self
}
/// Will generate a new board using the loaded in `Players` setting.
pub fn gen_board(&mut self, columns: u32, rows: u32) -> &mut Self {
self.board = Some(game::generate_random_board(columns, rows, self.players));
self
}
/// Produce a game session! Will return an error if there is no `Board` setup. Boards
/// greater than 3x3 will hang the system as the current state of the library is to
/// 'solve' the game by resolving the entire tree of every possible action.
pub fn session(&self) -> Result<Session, String> {
if let Some(board) = self.board.clone() {
let tree = game::start_tree_horizon_limited(
board.clone(), 1, self.move_limit.get());
Ok(Session::new(board, tree, self.move_limit))
} else {
Err("No board set.".to_owned())
}
}
}
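// Typical wiring of the builder above (illustrative values; note the warning
// on `session` -- generated boards larger than 3x3 will hang the system):
#[allow(dead_code)]
fn three_player_session_sketch() -> Result<Session, String> {
    let mut setup = Setup::new();
    setup
        .set_players(Players::new(3))
        .set_move_limit(NonZeroU8::new(4).unwrap())
        .gen_board(3, 3);
    setup.session()
}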
impl Default for Setup {
fn default() -> Self {
Setup::new()
}
}
#[cfg(test)]
mod test {
use std::error;
use crate::{game, session};
use super::*;
#[test]
fn state_from_board_2x1() -> Result<(), Box<dyn error::Error>> {
let start = game::canned_2x1_start01();
let s_grid = start.grid().to_owned();
let tree = game::build_tree(start.clone(), 10);
let last_attack = LastAttack::default();
let state = state_from_board(start, &tree, last_attack).unwrap();
let f_grid = state.board().grid().to_owned();
assert!(s_grid == f_grid);
Ok(())
}
#[test]
fn state_from_board_2x2() -> Result<(), Box<dyn error::Error>> {
let start = game::canned_2x2_start01();
let s_grid = start.grid().to_owned();
let tree = game::build_tree(start.clone(), 10);
let last_attack = LastAttack::default();
let state = state_from_board(start, &tree, last_attack).unwrap();
let f_grid = state.board().grid().to_owned();
assert!(s_grid == f_grid);
Ok(())
}
#[test]
fn start_grid_matches_2x1() -> Result<(), Box<dyn error::Error>> {
let start = game::canned_2x1_start01();
let s_grid = start.grid().to_owned();
let session = session::Setup::new()
.set_board(start)
.session()?;
let state = session.current_turn().to_owned();
let f_grid = state.board().grid().to_owned();
assert!(s_grid == f_grid);
Ok(())
}
#[test]
fn start_grid_matches_2x2() -> Result<(), Box<dyn error::Error>> {
let start = game::canned_2x2_start01();
let s_grid = start.grid().to_owned();
let session = session::Setup::new()
.set_board(start)
.session()?;
let state = session.current_turn().to_owned();
let f_grid = state.board().grid().to_owned();
assert!(s_grid == f_grid);
Ok(())
}
}
| {
// The start may contain a pass move. Cycle through to the first true turn.
// This code is a copy of what's happening in `advance` below. TODO: Refactor me.
let mut tree = Some(tree);
let first_turn = loop {
match state_from_board(
start.clone(), tree.as_ref().unwrap(), LastAttack::default()
) {
Ok(state) => break state,
Err(depth) => {
let new_tree = game::start_tree_horizon_limited(
start.clone(), depth, move_limit.get(),
);
tree = Some(new_tree);
},
}
};
Session { | identifier_body |
session.rs | //! Handle a game.
use std::num::NonZeroU8;
use std::fmt;
use derive_getters::Getters;
use rand::{rngs, Rng};
use crate::game::{self, Tree, Board, Players, Player, Choice, Action, Consequence, Holding};
fn roll_d6s<T: Rng>(d6s: u8, random: &mut T) -> usize {
(0..d6s)
.fold(0, |sum, _| -> usize {
sum + random.gen_range(1, 7)
})
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub struct LastAttack {
attacker_dice: u8,
attacker_rolled: usize,
defender_dice: u8,
defender_rolled: usize,
}
impl LastAttack {
fn | (
attacker_dice: u8, attacker_rolled: usize, defender_dice: u8, defender_rolled: usize
) -> Self {
LastAttack { attacker_dice, attacker_rolled, defender_dice, defender_rolled }
}
}
impl fmt::Display for LastAttack {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
if self.attacker_rolled == 0 && self.defender_rolled == 0 {
write!(f, "") // Sentinel value for first turn thus no preceding attacks.
} else {
if self.attacker_rolled > self.defender_rolled {
write!(
f,
"Attacker with {} dice rolled {} beating \
defender with {} dice who rolled {}.",
&self.attacker_dice,
&self.attacker_rolled,
&self.defender_dice,
&self.defender_rolled,
)
} else {
write!(
f,
"Defender with {} dice rolled {} holding against \
attacker with {} dice who rolled {}.",
&self.defender_dice,
&self.defender_rolled,
&self.attacker_dice,
&self.attacker_rolled,
)
}
}
}
}
impl Default for LastAttack {
fn default() -> Self {
LastAttack::new(0, 0, 0, 0)
}
}
/// State of game progression. Whether the game is on, over and what kind of over.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Progression {
PlayOn(LastAttack),
GameOverWinner(Player),
GameOverStalemate(Vec<Player>), // Easier to calculate than a draw...
}
/// The state of the session.
#[derive(Debug, Clone, Getters)]
pub struct State {
/// Whether we continue play or not.
game: Progression,
/// Any single-move turns that had to be made first. These include players
/// being knocked out or players only being able to pass their turn.
traversal: Vec<(Board, Choice)>,
/// State of game.
board: Board,
/// Choices available to current player.
choices: Vec<Choice>,
}
impl State {
fn new(
game: Progression,
traversal: &[(Board, Choice)],
board: Board,
choices: &[Choice],
) -> Self {
State {
game,
traversal: traversal
.into_iter()
.map(|(b, c)| (b.to_owned(), c.to_owned()))
.collect(),
board,
choices: choices
.into_iter()
.map(|c| c.to_owned())
.collect(),
}
}
}
/// Generate a `State` from a chosen `Board` consequence and the `Tree` where that `Board`
/// must exist. Loops, skipping over states that have only one choice left (except
/// winning states), and uses some logic to detect draw states.
fn state_from_board(
mut current_board: Board, tree: &Tree, outcome: LastAttack,
) -> Result<State, usize> {
let mut traversal: Vec<(Board, Choice)> = Vec::new();
let mut depth: usize = 1;
let state = loop {
let choices = tree
.fetch_choices(¤t_board)
.ok_or(depth)?;
// If there's only one choice left, it may be a passing/gameover/win move. Or the
// last available attack.
if choices.len() == 1 {
depth += 1;
match choices[0].action() {
Action::Attack(_, _, _, _) => {
// There is one last attack to make. We won't execute this choice
// for the player as that'd be overstepping our bounds. Thus we jump
// out of this loop.
break State::new(
Progression::PlayOn(outcome),
traversal.as_slice(),
current_board,
choices,
);
},
Action::Pass => {
// It'd be cumbersome to manually pass a move. The player can't "do"
// anything. So let's just deal with it automatically.
// In order to do this, we need to figure out the passing consequence.
match choices[0].consequence() {
Consequence::Stalemate(next_board) => break State::new(
Progression::GameOverStalemate(next_board.players().playing()),
traversal.as_slice(),
next_board.to_owned(),
choices,
),
Consequence::Winner(next_board) => break State::new(
Progression::GameOverWinner(next_board.players().current()),
traversal.as_slice(),
next_board.to_owned(),
choices,
),
Consequence::GameOver(next_board) => {
// We need to iterate the progression.
traversal.push((current_board, choices[0].to_owned()));
current_board = next_board.to_owned();
continue;
},
Consequence::TurnOver(next_board) => {
// We need to iterate the progression.
traversal.push((current_board, choices[0].to_owned()));
current_board = next_board.to_owned();
continue;
},
Consequence::Continue(_) => unreachable!(),
}
},
}
}
// If we make it here, there are choices that need to be made.
break State::new(
Progression::PlayOn(outcome),
traversal.as_slice(),
current_board,
choices,
);
};
Ok(state)
}
/// A game in progress. The length of `turns` indicates how many turns have passed. Maintains
/// all state of the game.
///
/// ## Invariants
/// 1. The `Tree` will always be valid.
/// 2. The first `State` in the `turns` is the starting position sans any initial traversals.
/// 3. There will always be at least one `State` in the `turns`.
#[derive(Debug, Clone, Getters)]
pub struct Session {
turns: Vec<State>,
tree: Option<Tree>,
move_limit: NonZeroU8,
rand: rngs::ThreadRng,
}
impl Session {
pub fn new(start: Board, tree: Tree, move_limit: NonZeroU8) -> Self {
// The start may contain a pass move. Cycle through to the first true turn.
// This code is a copy of what's happening in `advance` below. TODO: Refactor me.
let mut tree = Some(tree);
let first_turn = loop {
match state_from_board(
start.clone(), tree.as_ref().unwrap(), LastAttack::default()
) {
Ok(state) => break state,
Err(depth) => {
let new_tree = game::start_tree_horizon_limited(
start.clone(), depth, move_limit.get(),
);
tree = Some(new_tree);
},
}
};
Session {
turns: vec![first_turn],
tree,
move_limit,
rand: rand::thread_rng(),
}
}
pub fn reset(self) -> Self {
let first = self.turns.first().unwrap().board.to_owned();
Session::new(
first.clone(),
game::start_tree_horizon_limited(first, 1, self.move_limit.get()),
self.move_limit,
)
}
pub fn current_turn(&self) -> &State {
self.turns.last().unwrap()
}
/// Take an `Action` and advance the game state. Advances the tree if necessary. Takes
/// an `index` of the `[Choice]`. The `Choice` will always be an attacking action.
pub fn advance(&mut self, index: usize) -> Result<&State, String> {
let choice = self
.current_turn()
.choices()
.get(index)
.ok_or("Index out of bounds.".to_owned())?
.to_owned();
let (attacker_coordinate, attacker_dice, defender_dice) = match choice.action() {
Action::Attack(ac, _, ad, dd) => (*ac, *ad, *dd),
Action::Pass => unreachable!(), // Must never happen. `Session` must always
// return with attack choices or game over.
};
let attacker_roll = roll_d6s(attacker_dice, &mut self.rand);
let defender_roll = roll_d6s(defender_dice, &mut self.rand);
let outcome = LastAttack::new(
attacker_dice, attacker_roll, defender_dice, defender_roll
);
let next_board = if attacker_roll > defender_roll {
// Board advances due to win.
choice.consequence().board().to_owned()
} else {
// Board stays the same on a loss, except one move is used up and the losing hex is frozen.
let current_board = &self.current_turn().board;
Board::new(
*current_board.players(),
current_board
.grid()
.fork_with(|coordinate, hold| {
// Freeze the losing hex til next turn.
if coordinate == &attacker_coordinate {
Holding::new(hold.owner(), hold.dice(), false)
} else {
hold
}
}),
*current_board.captured_dice(),
*current_board.moved() + 1,
)
};
let state = loop {
match state_from_board(
next_board.clone(), &self.tree.as_ref().unwrap(), outcome,
) {
Ok(state) => break state,
Err(depth) => {
let new_tree = game::start_tree_horizon_limited(
next_board.to_owned(), depth, self.move_limit.get(),
);
self.tree = Some(new_tree);
},
}
};
self.turns.push(state);
Ok(self.current_turn())
}
/// Score the tree up to the depth specified in `horizon`. Will then edit current
/// `State` to put the scoring into the current choices. A deep horizon will cause the
/// system to lock up. High chance that an OOM error will follow.
pub fn score_with_depth_horizon(&mut self, horizon: usize) -> &State {
let current_board = self.current_turn().board.to_owned();
let tree = game::start_tree_horizon_limited(
current_board, horizon, self.move_limit.get(),
);
let _ = game::score_tree(&tree);
let choices = tree.fetch_choices(tree.root()).unwrap().to_owned();
let last_state = self.turns.last_mut().unwrap();
last_state.choices = choices;
self.tree = Some(tree);
last_state
}
/// Score the tree up to the board insert budget specified. The first tree layer,
/// though, will be computed without taking the budget into account, so that there will
/// always be all available choices for the turn.
pub fn score_with_insert_budget(&mut self, insert_budget: usize) -> &State {
let current_board = self.current_turn().board.to_owned();
let tree = game::start_tree_insert_budgeted(
current_board, insert_budget, self.move_limit.get(),
);
let _ = game::score_tree(&tree);
let choices = tree.fetch_choices(tree.root()).unwrap().to_owned();
let last_state = self.turns.last_mut().unwrap();
last_state.choices = choices;
self.tree = Some(tree);
last_state
}
}
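// The two scoring entry points above bound the same search differently:
// `score_with_depth_horizon` cuts off by ply depth, `score_with_insert_budget`
// by how many boards may be inserted (always expanding the first layer in
// full). A bounded turn built on them could look like this sketch; taking
// choice 0 blindly is purely illustrative, not a recommended strategy.
#[allow(dead_code)]
fn play_one_budgeted_turn(session: &mut Session) -> Result<(), String> {
    let state = session.score_with_insert_budget(10_000);
    if state.choices().is_empty() {
        return Err("no choices available".to_owned());
    }
    session.advance(0).map(|_| ())
}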
/// Setup for a game session. Can set the number of players and the board size, or use
/// canned boards (feed it a starting position). The board can only be rectangular.
#[derive(Debug, Clone, Getters)]
pub struct Setup {
players: Players,
board: Option<Board>,
move_limit: NonZeroU8,
}
impl Setup {
pub fn new() -> Self {
Setup {
players: Players::new(2),
board: None,
move_limit: NonZeroU8::new(6).unwrap(),
}
}
/// If the number of players changes in any way, it will invalidate the `Board`.
pub fn set_players(&mut self, players: Players) -> &mut Self {
if self.players != players {
self.board = None;
}
self.players = players;
self
}
pub fn set_move_limit(&mut self, move_limit: NonZeroU8) -> &mut Self {
self.move_limit = move_limit;
self
}
/// Set the board. This will also set the players since the `Board` lists all state.
pub fn set_board(&mut self, board: Board) -> &mut Self {
self.players = *board.players();
self.board = Some(board);
self
}
/// Will generate a new board using the loaded in `Players` setting.
pub fn gen_board(&mut self, columns: u32, rows: u32) -> &mut Self {
self.board = Some(game::generate_random_board(columns, rows, self.players));
self
}
/// Produce a game session! Will return an error if there is no `Board` setup. Boards
/// greater than 3x3 will hang the system as the current state of the library is to
/// 'solve' the game by resolving the entire tree of every possible action.
pub fn session(&self) -> Result<Session, String> {
if let Some(board) = self.board.clone() {
let tree = game::start_tree_horizon_limited(
board.clone(), 1, self.move_limit.get());
Ok(Session::new(board, tree, self.move_limit))
} else {
Err("No board set.".to_owned())
}
}
}
impl Default for Setup {
fn default() -> Self {
Setup::new()
}
}
#[cfg(test)]
mod test {
use std::error;
use crate::{game, session};
use super::*;
#[test]
fn state_from_board_2x1() -> Result<(), Box<dyn error::Error>> {
let start = game::canned_2x1_start01();
let s_grid = start.grid().to_owned();
let tree = game::build_tree(start.clone(), 10);
let last_attack = LastAttack::default();
let state = state_from_board(start, &tree, last_attack).unwrap();
let f_grid = state.board().grid().to_owned();
assert!(s_grid == f_grid);
Ok(())
}
#[test]
fn state_from_board_2x2() -> Result<(), Box<dyn error::Error>> {
let start = game::canned_2x2_start01();
let s_grid = start.grid().to_owned();
let tree = game::build_tree(start.clone(), 10);
let last_attack = LastAttack::default();
let state = state_from_board(start, &tree, last_attack).unwrap();
let f_grid = state.board().grid().to_owned();
assert!(s_grid == f_grid);
Ok(())
}
#[test]
fn start_grid_matches_2x1() -> Result<(), Box<dyn error::Error>> {
let start = game::canned_2x1_start01();
let s_grid = start.grid().to_owned();
let session = session::Setup::new()
.set_board(start)
.session()?;
let state = session.current_turn().to_owned();
let f_grid = state.board().grid().to_owned();
assert!(s_grid == f_grid);
Ok(())
}
#[test]
fn start_grid_matches_2x2() -> Result<(), Box<dyn error::Error>> {
let start = game::canned_2x2_start01();
let s_grid = start.grid().to_owned();
let session = session::Setup::new()
.set_board(start)
.session()?;
let state = session.current_turn().to_owned();
let f_grid = state.board().grid().to_owned();
assert!(s_grid == f_grid);
Ok(())
}
}
| new | identifier_name |
circuit.rs | use ff::{Field, PrimeField, PrimeFieldRepr, BitIterator};
use bellman::{Circuit, ConstraintSystem, SynthesisError};
use zcash_primitives::jubjub::{FixedGenerators, JubjubEngine, edwards, PrimeOrder, JubjubParams};
use zcash_primitives::constants;
use zcash_primitives::primitives::{PaymentAddress, ProofGenerationKey, ValueCommitment};
use zcash_proofs::circuit::ecc;
use zcash_proofs::circuit::pedersen_hash;
use bellman::gadgets::blake2s;
use bellman::gadgets::boolean;
use bellman::gadgets::multipack;
use bellman::gadgets::num;
use bellman::gadgets::Assignment;
use bellman::gadgets::test::TestConstraintSystem;
use pairing::bls12_381::Bls12;
// A circuit for proving that the given vrf_output is valid for the given vrf_input under
// a key from the predefined set. It formalizes the following language:
// {(VRF_INPUT, VRF_OUTPUT, set) | VRF_OUTPUT = vrf(sk, VRF_INPUT), PK = derive(sk) and PK is in set }, where:
// - sk, PK is an elliptic curve keypair, thus PK is a point, sk is a scalar and derive(sk) = sk * B, for a predefined base point B
// - VRF_INPUT and VRF_OUTPUT are elliptic curve points, and vrf(sk, VRF_INPUT) = sk * VRF_INPUT
// - set //TODO
// These are the values that are required to construct the circuit and populate all the wires.
// They are defined as Options as for CRS generation only circuit structure is relevant,
// not the wires' assignments, so knowing the types is enough.
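// Spelled out as equations (notation assumed here, not taken from the code:
// B is the fixed generator and `anchor` the Merkle tree root):
//
//   PK = sk * B
//   VRF_OUTPUT = sk * VRF_INPUT
//   MerkleRoot(PK.x, auth_path) = anchor
//
// with sk and auth_path the only private witnesses. A verifier checking a
// proof against (VRF_INPUT, VRF_OUTPUT, anchor) thus learns the output came
// from *some* key committed under the root, without learning which one.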
pub struct Ring<'a, E: JubjubEngine> { // TODO: name
// Jubjub curve parameters.
pub params: &'a E::Params,
// The secret key, an element of Jubjub scalar field.
pub sk: Option<E::Fs>,
// The VRF input, a point in Jubjub prime order subgroup.
pub vrf_input: Option<edwards::Point<E, PrimeOrder>>,
// The authentication path of the public key x-coordinate in the Merkle tree,
// the element of Jubjub base field.
// This is enough to build the root as the base point is hardcoded in the circuit in the lookup tables,
// so we can restore the public key from the secret key.
pub auth_path: Vec<Option<(E::Fr, bool)>>,
}
impl<'a, E: JubjubEngine> Circuit<E> for Ring<'a, E> {
fn synthesize<CS: ConstraintSystem<E>>(self, cs: &mut CS) -> Result<(), SynthesisError> {
// Binary representation of the secret key, a prover's private input.
// fs_bits wires and fs_bits booleanity constraints, where fs_bits = 252 is Jubjub scalar field size.
// It isn't (range-)constrained to be an element of the field, so small values will have duplicate representations.
// That doesn't matter for the following reasons: // TODO: double-check
// 1. Knowledge of a congruence of the secret key is equivalent to the knowledge of the secret key,
// and the check sk * G = PK passes for a congruent (sk + n|fs|) * G = sk * G + n|fs| * G == PK + O
// 2. Multiplication by a congruent secret key results in the same VRF output:
// (sk + n|fs|) * H == sk * H, if ord(H) == |fs|
let sk_bits = boolean::field_into_boolean_vec_le(cs.namespace(|| "sk"), self.sk)?;
// Derives the public key from the secret key using the hardcoded generator,
// that is guaranteed to be in the prime-order subgroup,
// so no on-curve or subgroup checks are required //TODO: double-check
// 750 constraints according to Zcash spec A.3.3.7
let pk = ecc::fixed_base_multiplication(
cs.namespace(|| "PK = sk * G"),
FixedGenerators::SpendingKeyGenerator, //TODO: any NUMS point of full order
&sk_bits,
self.params,
)?;
//
// // Defines first 2 public input wires for the coordinates of the public key in Jubjub base field (~ BLS scalar field)
// // and assures their assignment matches the values calculated in the previous step in 2 constraints.
// // These 2 constraints are not strictly required, just Bellman is implemented this way.
// // TODO: x coordinate only
// pk.inputize(cs.namespace(|| "PK"))?;
// Allocates VRF_BASE on the circuit and checks that it is a point on the curve
// adds 4 constraints (A.3.3.1) to check that it is indeed a point on Jubjub
let vrf_input = ecc::EdwardsPoint::witness(
cs.namespace(|| "VRF_INPUT"),
self.vrf_input,
self.params,
)?;
// Checks that VRF_BASE lies in a proper subgroup of Jubjub. Not strictly required as it is the point provided
// externally as a public input, so MUST be previously checked by the verifier off-circuit.
// But why not double-check it in 16 = 3 * 5 (ec doubling) + 1 (!=0) constraints
// Moreover //TODO
vrf_input.assert_not_small_order(cs.namespace(|| "VRF_BASE not small order"), self.params)?;
// Defines the 3rd and the 4th input wires to be equal VRF_BASE coordinates,
// thus adding 2 more constraints
vrf_input.inputize(cs.namespace(|| "VRF_BASE input"))?;
// Produces VRF output = sk * VRF_BASE, it is a variable base multiplication, thus
// 3252 constraints (A.3.3.8)
// TODO: actually it is 13 more as it is full-length (252 bits) multiplication below
let vrf = vrf_input.mul(cs.namespace(|| "vrf = sk * VRF_BASE"), &sk_bits, self.params)?;
// And 2 more constraints to verify the output
vrf.inputize(cs.namespace(|| "vrf"))?;
// So the circuit is 6 (public inputs) + 252 (sk booleanity) + 750 (fixed-base mul)
// + 20 (on-curve + subgroup check) + 3252 (var-base mul)
// = 4280 constraints
// This is an injective encoding, as cur is a
// point in the prime order subgroup.
let mut cur = pk.get_x().clone();
// Ascend the merkle tree authentication path
for (i, e) in self.auth_path.into_iter().enumerate() {
let cs = &mut cs.namespace(|| format!("merkle tree hash {}", i));
// Determines if the current subtree is the "right" leaf at this
// depth of the tree.
let cur_is_right = boolean::Boolean::from(boolean::AllocatedBit::alloc(
cs.namespace(|| "position bit"),
e.map(|e| e.1),
)?);
// Witness the authentication path element adjacent
// at this depth.
let path_element =
num::AllocatedNum::alloc(cs.namespace(|| "path element"), || Ok(e.get()?.0))?;
// Swap the two if the current subtree is on the right
let (xl, xr) = num::AllocatedNum::conditionally_reverse(
cs.namespace(|| "conditional reversal of preimage"),
&cur,
&path_element,
&cur_is_right,
)?;
// We don't need to be strict, because the function is
// collision-resistant. If the prover witnesses a congruency,
// they will be unable to find an authentication path in the
// tree with high probability.
let mut preimage = vec![];
preimage.extend(xl.to_bits_le(cs.namespace(|| "xl into bits"))?);
preimage.extend(xr.to_bits_le(cs.namespace(|| "xr into bits"))?);
// Compute the new subtree value
cur = pedersen_hash::pedersen_hash(
cs.namespace(|| "computation of pedersen hash"),
pedersen_hash::Personalization::MerkleTree(i),
&preimage,
self.params,
)?
.get_x()
.clone(); // Injective encoding
}
cur.inputize(cs.namespace(|| "anchor"))?;
Ok(())
}
}
#[test]
fn test_ring() {
use bellman::gadgets::test::TestConstraintSystem;
use pairing::bls12_381::{Bls12, Fr,};
use zcash_primitives::pedersen_hash;
use zcash_primitives::jubjub::{JubjubBls12, fs, edwards,};
use rand_core::{RngCore, SeedableRng,};
use rand_xorshift::XorShiftRng;
let params = &JubjubBls12::new();
let rng = &mut XorShiftRng::from_seed([
0x58, 0x62, 0xbe, 0x3d, 0x76, 0x3d, 0x31, 0x8d, 0x17, 0xdb, 0x37, 0x32, 0x54, 0x06, 0xbc,
0xe5,
]);
let sk = fs::Fs::random(rng);
let vrf_base = edwards::Point::rand(rng, params).mul_by_cofactor(params);
let base_point = params.generator(FixedGenerators::SpendingKeyGenerator);
let pk = base_point.mul(sk, params).to_xy();
let tree_depth = 10;
let auth_path = vec![Some((Fr::random(rng), rng.next_u32() % 2 != 0)); tree_depth];
let mut cur = pk.0;
for (i, val) in auth_path.clone().into_iter().enumerate() {
let (uncle, b) = val.unwrap();
let mut lhs = cur;
let mut rhs = uncle;
if b |
let mut lhs: Vec<bool> = BitIterator::new(lhs.into_repr()).collect();
let mut rhs: Vec<bool> = BitIterator::new(rhs.into_repr()).collect();
lhs.reverse();
rhs.reverse();
cur = pedersen_hash::pedersen_hash::<Bls12, _>(
pedersen_hash::Personalization::MerkleTree(i),
lhs.into_iter()
.take(Fr::NUM_BITS as usize)
.chain(rhs.into_iter().take(Fr::NUM_BITS as usize)),
params,
)
.to_xy()
.0;
}
let instance = Ring {
params,
sk: Some(sk),
vrf_input: Some(vrf_base.clone()),
auth_path: auth_path.clone(),
};
let mut cs = TestConstraintSystem::<Bls12>::new();
instance.synthesize(&mut cs).unwrap();
assert!(cs.is_satisfied());
assert_eq!(cs.num_inputs(), 5 + 1); // the 1st public input predefined to be = 1
// assert_eq!(cs.num_constraints(), 4280 + 13); //TODO: 13
println!("{}", cs.num_constraints() - 4293);
assert_eq!(cs.get_input(1, "VRF_BASE input/x/input variable"), vrf_base.to_xy().0);
assert_eq!(cs.get_input(2, "VRF_BASE input/y/input variable"), vrf_base.to_xy().1);
let vrf = vrf_base.mul(sk, params).to_xy();
assert_eq!(cs.get_input(3, "vrf/x/input variable"), vrf.0);
assert_eq!(cs.get_input(4, "vrf/y/input variable"), vrf.1);
assert_eq!(cs.get_input(5, "anchor/input variable"), cur);
} | {
::std::mem::swap(&mut lhs, &mut rhs);
} | conditional_block |
circuit.rs | use ff::{Field, PrimeField, PrimeFieldRepr, BitIterator};
use bellman::{Circuit, ConstraintSystem, SynthesisError};
use zcash_primitives::jubjub::{FixedGenerators, JubjubEngine, edwards, PrimeOrder, JubjubParams};
use zcash_primitives::constants;
use zcash_primitives::primitives::{PaymentAddress, ProofGenerationKey, ValueCommitment};
use zcash_proofs::circuit::ecc;
use zcash_proofs::circuit::pedersen_hash;
use bellman::gadgets::blake2s;
use bellman::gadgets::boolean;
use bellman::gadgets::multipack;
use bellman::gadgets::num;
use bellman::gadgets::Assignment;
use bellman::gadgets::test::TestConstraintSystem;
use pairing::bls12_381::Bls12;
// A circuit for proving that the given vrf_output is valid for the given vrf_input under
// a key from the predefined set. It formalizes the following language:
// {(VRF_INPUT, VRF_OUTPUT, set) | VRF_OUTPUT = vrf(sk, VRF_INPUT), PK = derive(sk) and PK is in set }, where:
// - sk, PK is an elliptic curve keypair, thus PK is a point, sk is a scalar and derive(sk) = sk * B, for a predefined base point B
// - VRF_INPUT and VRF_OUTPUT are elliptic curve points, and vrf(sk, VRF_INPUT) = sk * VRF_INPUT
// - set //TODO
// These are the values that are required to construct the circuit and populate all the wires.
// They are defined as Options as for CRS generation only circuit structure is relevant,
// not the wires' assignments, so knowing the types is enough.
pub struct | <'a, E: JubjubEngine> { // TODO: name
// Jubjub curve parameters.
pub params: &'a E::Params,
// The secret key, an element of Jubjub scalar field.
pub sk: Option<E::Fs>,
// The VRF input, a point in Jubjub prime order subgroup.
pub vrf_input: Option<edwards::Point<E, PrimeOrder>>,
// The authentication path of the public key x-coordinate in the Merkle tree,
// the element of Jubjub base field.
// This is enough to build the root as the base point is hardcoded in the circuit in the lookup tables,
// so we can restore the public key from the secret key.
pub auth_path: Vec<Option<(E::Fr, bool)>>,
}
impl<'a, E: JubjubEngine> Circuit<E> for Ring<'a, E> {
fn synthesize<CS: ConstraintSystem<E>>(self, cs: &mut CS) -> Result<(), SynthesisError> {
// Binary representation of the secret key, a prover's private input.
// fs_bits wires and fs_bits booleanity constraints, where fs_bits = 252 is Jubjub scalar field size.
// It isn't (range-)constrained to be an element of the field, so small values will have duplicate representations.
// That doesn't matter for the following reasons: // TODO: double-check
// 1. Knowledge of a congruence of the secret key is equivalent to the knowledge of the secret key,
// and the check sk * G = PK passes for a congruent (sk + n|fs|) * G = sk * G + n|fs| * G == PK + O
// 2. Multiplication by a congruent secret key results in the same VRF output:
// (sk + n|fs|) * H == sk * H, if ord(H) == |fs|
let sk_bits = boolean::field_into_boolean_vec_le(cs.namespace(|| "sk"), self.sk)?;
// Derives the public key from the secret key using the hardcoded generator,
// that is guaranteed to be in the prime-order subgroup,
// so no on-curve or subgroup checks are required //TODO: double-check
// 750 constraints according to Zcash spec A.3.3.7
let pk = ecc::fixed_base_multiplication(
cs.namespace(|| "PK = sk * G"),
FixedGenerators::SpendingKeyGenerator, //TODO: any NUMS point of full order
&sk_bits,
self.params,
)?;
//
// // Defines first 2 public input wires for the coordinates of the public key in Jubjub base field (~ BLS scalar field)
// // and assures their assignment matches the values calculated in the previous step in 2 constraints.
// // These 2 constraints are not strictly required, just Bellman is implemented this way.
// // TODO: x coordinate only
// pk.inputize(cs.namespace(|| "PK"))?;
// Allocates VRF_BASE on the circuit and checks that it is a point on the curve
// adds 4 constraints (A.3.3.1) to check that it is indeed a point on Jubjub
let vrf_input = ecc::EdwardsPoint::witness(
cs.namespace(|| "VRF_INPUT"),
self.vrf_input,
self.params,
)?;
// Checks that VRF_BASE lies in a proper subgroup of Jubjub. Not strictly required as it is the point provided
// externally as a public input, so MUST be previously checked by the verifier off-circuit.
// But why not double-check it in 16 = 3 * 5 (ec doubling) + 1 (!=0) constraints
// Moreover //TODO
vrf_input.assert_not_small_order(cs.namespace(|| "VRF_BASE not small order"), self.params)?;
// Defines the 3rd and the 4th input wires to be equal VRF_BASE coordinates,
// thus adding 2 more constraints
vrf_input.inputize(cs.namespace(|| "VRF_BASE input"))?;
// Produces VRF output = sk * VRF_BASE, it is a variable base multiplication, thus
// 3252 constraints (A.3.3.8)
// TODO: actually it is 13 more as it is full-length (252 bits) multiplication below
let vrf = vrf_input.mul(cs.namespace(|| "vrf = sk * VRF_BASE"), &sk_bits, self.params)?;
// And 2 more constraints to verify the output
vrf.inputize(cs.namespace(|| "vrf"))?;
// So the circuit is 6 (public inputs) + 252 (sk booleanity) + 750 (fixed-base mul)
// + 20 (on-curve + subgroup check) + 3252 (var-base mul)
// = 4280 constraints
// This is an injective encoding, as cur is a
// point in the prime order subgroup.
let mut cur = pk.get_x().clone();
// Ascend the merkle tree authentication path
for (i, e) in self.auth_path.into_iter().enumerate() {
let cs = &mut cs.namespace(|| format!("merkle tree hash {}", i));
// Determines if the current subtree is the "right" leaf at this
// depth of the tree.
let cur_is_right = boolean::Boolean::from(boolean::AllocatedBit::alloc(
cs.namespace(|| "position bit"),
e.map(|e| e.1),
)?);
// Witness the authentication path element adjacent
// at this depth.
let path_element =
num::AllocatedNum::alloc(cs.namespace(|| "path element"), || Ok(e.get()?.0))?;
// Swap the two if the current subtree is on the right
let (xl, xr) = num::AllocatedNum::conditionally_reverse(
cs.namespace(|| "conditional reversal of preimage"),
&cur,
&path_element,
&cur_is_right,
)?;
// We don't need to be strict, because the function is
// collision-resistant. If the prover witnesses a congruency,
// they will be unable to find an authentication path in the
// tree with high probability.
let mut preimage = vec![];
preimage.extend(xl.to_bits_le(cs.namespace(|| "xl into bits"))?);
preimage.extend(xr.to_bits_le(cs.namespace(|| "xr into bits"))?);
// Compute the new subtree value
cur = pedersen_hash::pedersen_hash(
cs.namespace(|| "computation of pedersen hash"),
pedersen_hash::Personalization::MerkleTree(i),
&preimage,
self.params,
)?
.get_x()
.clone(); // Injective encoding
}
cur.inputize(cs.namespace(|| "anchor"))?;
Ok(())
}
}
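// Constraint tally for the circuit above, collecting the inline notes (Zcash
// spec section numbers as cited there):
//
//     6   public-input bindings
//   252   booleanity of the sk bits
//   750   fixed-base mul PK = sk * G       (A.3.3.7)
//    20   on-curve + not-small-order checks on VRF_INPUT
//  3252   variable-base mul sk * VRF_INPUT (A.3.3.8)
//  ----
//  4280   (+13 for the full 252-bit width of the variable-base mul)
//
// The Merkle authentication path adds its Pedersen-hash constraints on top of
// this, which is why the test below prints the surplus over 4293 instead of
// asserting a fixed total.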
#[test]
fn test_ring() {
use bellman::gadgets::test::TestConstraintSystem;
use pairing::bls12_381::{Bls12, Fr,};
use zcash_primitives::pedersen_hash;
use zcash_primitives::jubjub::{JubjubBls12, fs, edwards,};
use rand_core::{RngCore, SeedableRng,};
use rand_xorshift::XorShiftRng;
let params = &JubjubBls12::new();
let rng = &mut XorShiftRng::from_seed([
0x58, 0x62, 0xbe, 0x3d, 0x76, 0x3d, 0x31, 0x8d, 0x17, 0xdb, 0x37, 0x32, 0x54, 0x06, 0xbc,
0xe5,
]);
let sk = fs::Fs::random(rng);
let vrf_base = edwards::Point::rand(rng, params).mul_by_cofactor(params);
let base_point = params.generator(FixedGenerators::SpendingKeyGenerator);
let pk = base_point.mul(sk, params).to_xy();
let tree_depth = 10;
let auth_path = vec![Some((Fr::random(rng), rng.next_u32() % 2 != 0)); tree_depth];
let mut cur = pk.0;
for (i, val) in auth_path.clone().into_iter().enumerate() {
let (uncle, b) = val.unwrap();
let mut lhs = cur;
let mut rhs = uncle;
if b {
::std::mem::swap(&mut lhs, &mut rhs);
}
let mut lhs: Vec<bool> = BitIterator::new(lhs.into_repr()).collect();
let mut rhs: Vec<bool> = BitIterator::new(rhs.into_repr()).collect();
lhs.reverse();
rhs.reverse();
cur = pedersen_hash::pedersen_hash::<Bls12, _>(
pedersen_hash::Personalization::MerkleTree(i),
lhs.into_iter()
.take(Fr::NUM_BITS as usize)
.chain(rhs.into_iter().take(Fr::NUM_BITS as usize)),
params,
)
.to_xy()
.0;
}
let instance = Ring {
params,
sk: Some(sk),
vrf_input: Some(vrf_base.clone()),
auth_path: auth_path.clone(),
};
let mut cs = TestConstraintSystem::<Bls12>::new();
instance.synthesize(&mut cs).unwrap();
assert!(cs.is_satisfied());
assert_eq!(cs.num_inputs(), 5 + 1); // the 1st public input predefined to be = 1
// assert_eq!(cs.num_constraints(), 4280 + 13); //TODO: 13
println!("{}", cs.num_constraints() - 4293);
assert_eq!(cs.get_input(1, "VRF_BASE input/x/input variable"), vrf_base.to_xy().0);
assert_eq!(cs.get_input(2, "VRF_BASE input/y/input variable"), vrf_base.to_xy().1);
let vrf = vrf_base.mul(sk, params).to_xy();
assert_eq!(cs.get_input(3, "vrf/x/input variable"), vrf.0);
assert_eq!(cs.get_input(4, "vrf/y/input variable"), vrf.1);
assert_eq!(cs.get_input(5, "anchor/input variable"), cur);
} | Ring | identifier_name |