| file_name | prefix | suffix | middle | fim_type |
|---|---|---|---|---|
| string (4–69 chars) | string (0–26.7k chars) | string (0–24.8k chars) | string (0–2.12k chars) | 4 classes |
io.rs
|
use alloc::boxed::Box;
use core::result;
use std::error;
/// A specialized [`Result`](../result/enum.Result.html) type for I/O
/// operations.
///
/// This type is broadly used across `std::io` for any operation which may
/// produce an error.
///
/// This typedef is generally used to avoid writing out `io::Error` directly and
/// is otherwise a direct mapping to `Result`.
///
/// While usual Rust style is to import types directly, aliases of `Result`
/// often are not, to make it easier to distinguish between them. `Result` is
/// generally assumed to be `std::result::Result`, and so users of this alias
/// will generally use `io::Result` instead of shadowing the prelude's import
/// of `std::result::Result`.
///
/// # Examples
///
/// A convenience function that bubbles an `io::Result` to its caller:
///
/// ```
/// use std::io;
///
/// fn get_string() -> io::Result<String> {
/// let mut buffer = String::new();
///
/// try!(io::stdin().read_line(&mut buffer));
///
/// Ok(buffer)
/// }
/// ```
//#[stable(feature = "rust1", since = "1.0.0")]
pub type Result<T> = result::Result<T, Error>;
/// The error type for I/O operations of the `Read`, `Write`, `Seek`, and
/// associated traits.
///
/// Errors mostly originate from the underlying OS, but custom instances of
/// `Error` can be created with crafted error messages and a particular value of
/// `ErrorKind`.
#[derive(Debug)]
//#[stable(feature = "rust1", since = "1.0.0")]
pub struct Error {
repr: Repr,
}
// XXX manual addition
#[derive(Debug)]
enum Repr {
Os(i32),
Custom(Box<Custom>),
}
#[derive(Debug)]
struct Custom {
kind: ErrorKind,
error: Box<error::Error+Send+Sync>,
}
/// A list specifying general categories of I/O error.
///
/// This list is intended to grow over time and it is not recommended to
/// exhaustively match against it.
#[derive(Copy, PartialEq, Eq, Clone, Debug)]
//#[stable(feature = "rust1", since = "1.0.0")]
#[allow(deprecated)]
pub enum ErrorKind {
/// An entity was not found, often a file.
//#[stable(feature = "rust1", since = "1.0.0")]
NotFound,
/// The operation lacked the necessary privileges to complete.
//#[stable(feature = "rust1", since = "1.0.0")]
PermissionDenied,
/// The connection was refused by the remote server.
//#[stable(feature = "rust1", since = "1.0.0")]
ConnectionRefused,
/// The connection was reset by the remote server.
//#[stable(feature = "rust1", since = "1.0.0")]
ConnectionReset,
/// The connection was aborted (terminated) by the remote server.
//#[stable(feature = "rust1", since = "1.0.0")]
ConnectionAborted,
/// The network operation failed because it was not connected yet.
//#[stable(feature = "rust1", since = "1.0.0")]
NotConnected,
/// A socket address could not be bound because the address is already in
/// use elsewhere.
//#[stable(feature = "rust1", since = "1.0.0")]
AddrInUse,
/// A nonexistent interface was requested or the requested address was not
/// local.
//#[stable(feature = "rust1", since = "1.0.0")]
AddrNotAvailable,
/// The operation failed because a pipe was closed.
//#[stable(feature = "rust1", since = "1.0.0")]
BrokenPipe,
/// An entity already exists, often a file.
//#[stable(feature = "rust1", since = "1.0.0")]
AlreadyExists,
/// The operation needs to block to complete, but the blocking operation was
/// requested to not occur.
//#[stable(feature = "rust1", since = "1.0.0")]
WouldBlock,
/// A parameter was incorrect.
//#[stable(feature = "rust1", since = "1.0.0")]
InvalidInput,
/// Data not valid for the operation were encountered.
///
/// Unlike `InvalidInput`, this typically means that the operation
/// parameters were valid, however the error was caused by malformed
/// input data.
///
/// For example, a function that reads a file into a string will error with
/// `InvalidData` if the file's contents are not valid UTF-8.
//#[stable(feature = "io_invalid_data", since = "1.2.0")]
InvalidData,
/// The I/O operation's timeout expired, causing it to be canceled.
//#[stable(feature = "rust1", since = "1.0.0")]
TimedOut,
/// An error returned when an operation could not be completed because a
/// call to `write` returned `Ok(0)`.
///
/// This typically means that an operation could only succeed if it wrote a
/// particular number of bytes but only a smaller number of bytes could be
/// written.
//#[stable(feature = "rust1", since = "1.0.0")]
WriteZero,
/// This operation was interrupted.
///
/// Interrupted operations can typically be retried.
//#[stable(feature = "rust1", since = "1.0.0")]
Interrupted,
/// Any I/O error not part of this list.
//#[stable(feature = "rust1", since = "1.0.0")]
Other,
#[allow(missing_docs)]
/*
#[unstable(feature = "read_exact_old", reason = "recently added",
issue = "0")]
*/
//#[rustc_deprecated(since = "1.6.0", reason = "renamed to UnexpectedEof")]
UnexpectedEOF,
/// An error returned when an operation could not be completed because an
/// "end of file" was reached prematurely.
///
/// This typically means that an operation could only succeed if it read a
/// particular number of bytes but only a smaller number of bytes could be
/// read.
//#[stable(feature = "read_exact", since = "1.6.0")]
UnexpectedEof,
/// Any I/O error not part of this list.
/*
#[unstable(feature = "io_error_internals",
reason = "better expressed through extensible enums that this \
enum cannot be exhaustively matched against",
issue = "0")]
*/
#[doc(hidden)]
__Nonexhaustive,
}
impl Error {
/// Creates a new I/O error from a known kind of error as well as an
/// arbitrary error payload.
///
/// This function is used to generically create I/O errors which do not
/// originate from the OS itself. The `error` argument is an arbitrary
/// payload which will be contained in this `Error`.
///
/// # Examples
///
/// ```
/// use std::io::{Error, ErrorKind};
///
/// // errors can be created from strings
/// let custom_error = Error::new(ErrorKind::Other, "oh no!");
///
/// // errors can also be created from other errors
/// let custom_error2 = Error::new(ErrorKind::Interrupted, custom_error);
/// ```
//#[stable(feature = "rust1", since = "1.0.0")]
pub fn new<E>(kind: ErrorKind, error: E) -> Error
where E: Into<Box<error::Error+Send+Sync>>
{
Self::_new(kind, error.into())
}
fn _new(kind: ErrorKind, error: Box<error::Error+Send+Sync>) -> Error {
Error {
repr: Repr::Custom(Box::new(Custom {
kind: kind,
error: error,
}))
}
}
}
pub trait Write {
fn write(&mut self, buf: &[u8]) -> Result<usize>;
fn flush(&mut self) -> Result<()> {
Ok(())
}
}
pub trait Read {
fn read(&mut self, buf: &mut [u8]) -> Result<usize>;
}
middle: flush | fim_type: identifier_name
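The doc example for `io::Result` above still uses the pre-1.13 `try!` macro. A minimal sketch of the same plumbing with the modern `?` operator, made generic over `BufRead` so it runs without interactive stdin (the helper name `read_line_from` is ours, not from the source):

```rust
use std::io::{self, BufRead};

// Same shape as the doc's get_string(), but generic over any buffered
// reader; `?` propagates the io::Error exactly as try! once did.
fn read_line_from<R: BufRead>(mut reader: R) -> io::Result<String> {
    let mut buffer = String::new();
    reader.read_line(&mut buffer)?;
    Ok(buffer)
}

fn main() -> io::Result<()> {
    // A byte slice implements BufRead, so no stdin is needed here.
    let line = read_line_from(&b"hello\n"[..])?;
    assert_eq!(line, "hello\n");
    Ok(())
}
```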
io.rs | middle: the two doc lines "/// Ok(buffer)" and "/// }" | fim_type: random_line_split (duplicate of the file above)
challenge13.rs
|
use aes::Aes128;
use aes::{chunks_count, BLOCK_SIZE};
use crate::errors::*;
use crate::prefix_suffix_oracles::Oracle;
use crate::prefix_suffix_oracles::Oracle13;
use super::challenge12::prefix_plus_suffix_length;
use super::prefix_length;
// The following function works under the single assumption that the target value "user" (to be
// replaced by "admin") is stored at the very end of the profile.
pub fn run() -> Result<()> {
let oracle = Oracle13::new()?;
let prefix_len = prefix_length(&oracle)?;
let (prefix_chunks_count, prefix_fill_len) = chunks_count(prefix_len);
let target_cleartext = b"admin".pad();
let mut input = vec![0; prefix_fill_len];
input.extend_from_slice(&target_cleartext);
// Determine the ciphertext for target_cleartext
let target_last_block = &oracle
.encrypt(&input)?
.split_off(prefix_chunks_count * BLOCK_SIZE)[0..BLOCK_SIZE];
// The following input is chosen in such a way that the cleartext in oracle looks as follows:
// email=\0... \0 || \0...\0&uid=10&role= || user <- padding ->
let (chunks_count, fill_len) = chunks_count(prefix_plus_suffix_length(&oracle)?);
let mut ciphertext = oracle.encrypt(&vec![0; fill_len + "user".len()])?;
// Sanity check
compare_eq((chunks_count + 1) * BLOCK_SIZE, ciphertext.len())?;
// Replace last block with target_last_block
ciphertext[chunks_count * BLOCK_SIZE..].copy_from_slice(target_last_block);
oracle.verify_solution(&ciphertext)
}
middle: run | fim_type: identifier_name
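The comments in challenge13.rs describe an ECB cut-and-paste: align `"admin"` plus padding on a block boundary, then splice that ciphertext block over the block holding `"user"`. A self-contained toy sketch of the mechanics, using a stand-in "cipher" (XOR with one key byte) that shares the only property the attack needs, namely that ECB encrypts each block independently; all names here are ours, not the source's:

```rust
const BLOCK: usize = 16;

// Toy stand-in for AES-128-ECB: XOR every byte with a fixed key byte.
// Like real ECB, equal plaintext blocks give equal ciphertext blocks,
// and blocks can be cut and pasted independently.
fn toy_ecb(data: &[u8], key: u8) -> Vec<u8> {
    data.iter().map(|b| b ^ key).collect()
}

// "email=" + email + "&uid=10&role=user", as in the challenge's profile.
fn profile_for(email: &[u8]) -> Vec<u8> {
    let mut p = b"email=".to_vec();
    p.extend_from_slice(email);
    p.extend_from_slice(b"&uid=10&role=user");
    p
}

// PKCS#7-style padding to a full block.
fn pad(mut v: Vec<u8>) -> Vec<u8> {
    let n = BLOCK - v.len() % BLOCK;
    v.extend(std::iter::repeat(n as u8).take(n));
    v
}

fn main() {
    let key = 0x42;

    // 1) Craft an email so that "admin" + padding fills block 1 exactly:
    //    "email=" (6) + 10 filler bytes = 16, then pad("admin") = 16.
    let mut crafted = vec![b'A'; 10];
    crafted.extend_from_slice(&pad(b"admin".to_vec()));
    let ct = toy_ecb(&pad(profile_for(&crafted)), key);
    let admin_block = ct[BLOCK..2 * BLOCK].to_vec();

    // 2) Craft an email so that "user" starts a fresh block:
    //    6 + 13 + len("&uid=10&role=") = 32, so "user" begins block 2.
    let ct2 = toy_ecb(&pad(profile_for(&[b'A'; 13])), key);

    // 3) Keep the first two blocks, splice in the "admin" block.
    let mut forged = ct2[..2 * BLOCK].to_vec();
    forged.extend_from_slice(&admin_block);

    let decrypted = toy_ecb(&forged, key);
    assert!(decrypted.starts_with(b"email=AAAAAAAAAAAAA&uid=10&role=admin"));
    println!("forged role=admin profile");
}
```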
challenge13.rs | middle: the body of run() | fim_type: identifier_body (duplicate of the file above)
challenge13.rs | middle: the .encrypt(&input)? / .split_off(...) lines | fim_type: random_line_split (duplicate of the file above)
svg.rs
|
use gio::MemoryInputStream;
use glib::Bytes;
use predicates::prelude::*;
use predicates::reflection::{Case, Child, PredicateReflection, Product};
use std::cmp;
use std::fmt;
use librsvg::{CairoRenderer, Length, Loader, LoadingError, SvgHandle};
/// Checks that a variable of type [u8] can be parsed as an SVG file.
#[derive(Debug)]
pub struct SvgPredicate {}
impl SvgPredicate {
pub fn with_size(self: Self, width: Length, height: Length) -> DetailPredicate<Self> {
DetailPredicate::<Self> {
p: self,
d: Detail::Size(Dimensions {
w: width,
h: height,
}),
}
}
}
fn svg_from_bytes(data: &[u8]) -> Result<SvgHandle, LoadingError> {
let bytes = Bytes::from(data);
let stream = MemoryInputStream::from_bytes(&bytes);
Loader::new().read_stream(&stream, None::<&gio::File>, None::<&gio::Cancellable>)
}
impl Predicate<[u8]> for SvgPredicate {
fn eval(&self, data: &[u8]) -> bool {
svg_from_bytes(data).is_ok()
}
fn find_case<'a>(&'a self, _expected: bool, data: &[u8]) -> Option<Case<'a>> {
match svg_from_bytes(data) {
Ok(_) => None,
Err(e) => Some(Case::new(Some(self), false).add_product(Product::new("Error", e))),
}
}
}
impl PredicateReflection for SvgPredicate {}
impl fmt::Display for SvgPredicate {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "is an SVG")
}
}
/// Extends an SVG Predicate by a check for its size
#[derive(Debug)]
pub struct DetailPredicate<SvgPredicate> {
p: SvgPredicate,
d: Detail,
}
#[derive(Debug)]
enum Detail {
Size(Dimensions),
}
/// SVG's dimensions
#[derive(Debug)]
struct Dimensions {
w: Length,
h: Length,
}
impl Dimensions {
pub fn width(self: &Self) -> f64 {
self.w.length
}
pub fn height(self: &Self) -> f64 {
self.h.length
}
}
impl fmt::Display for Dimensions {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(
f,
"{}{} x {}{}",
self.width(),
self.w.unit,
self.height(),
self.h.unit
)
}
}
impl cmp::PartialEq for Dimensions {
fn eq(&self, other: &Self) -> bool {
approx_eq!(f64, self.width(), other.width(), epsilon = 0.000_001)
&& approx_eq!(f64, self.height(), other.height(), epsilon = 0.000_001)
&& (self.w.unit == self.h.unit)
&& (self.h.unit == other.h.unit)
&& (other.h.unit == other.w.unit)
}
}
impl cmp::Eq for Dimensions {}
trait Details {
fn get_size(&self) -> Option<Dimensions>;
}
impl DetailPredicate<SvgPredicate> {
fn eval_doc(&self, handle: &SvgHandle) -> bool {
match &self.d {
Detail::Size(d) => {
let renderer = CairoRenderer::new(handle);
let dimensions = renderer.intrinsic_dimensions();
(dimensions.width, dimensions.height) == (d.w, d.h)
}
}
}
fn find_case_for_doc<'a>(&'a self, expected: bool, handle: &SvgHandle) -> Option<Case<'a>> {
if self.eval_doc(handle) == expected {
let product = self.product_for_doc(handle);
Some(Case::new(Some(self), false).add_product(product))
} else {
None
}
}
fn product_for_doc(&self, handle: &SvgHandle) -> Product {
match &self.d {
Detail::Size(_) => {
let renderer = CairoRenderer::new(handle);
let dimensions = renderer.intrinsic_dimensions();
Product::new(
"actual size",
format!(
"width={:?}, height={:?}",
dimensions.width, dimensions.height
),
)
}
}
}
}
impl Predicate<[u8]> for DetailPredicate<SvgPredicate> {
fn eval(&self, data: &[u8]) -> bool {
match svg_from_bytes(data) {
Ok(handle) => self.eval_doc(&handle),
_ => false,
}
}
fn find_case<'a>(&'a self, expected: bool, data: &[u8]) -> Option<Case<'a>> {
match svg_from_bytes(data) {
Ok(handle) => self.find_case_for_doc(expected, &handle),
Err(e) => Some(Case::new(Some(self), false).add_product(Product::new("Error", e))),
}
}
}
impl PredicateReflection for DetailPredicate<SvgPredicate> {
fn children<'a>(&'a self) -> Box<dyn Iterator<Item = Child<'a>> + 'a> {
let params = vec![Child::new("predicate", &self.p)];
Box::new(params.into_iter())
}
}
impl fmt::Display for DetailPredicate<SvgPredicate> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match &self.d {
Detail::Size(d) => write!(f, "is an SVG sized {}", d),
}
}
}
middle: with_size | fim_type: identifier_name
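`Dimensions::eq` in svg.rs compares widths and heights with `approx_eq!` and an epsilon (presumably the `float_cmp` crate) rather than `==`. A one-file illustration of why exact float equality is avoided:

```rust
fn main() {
    let a = 0.1_f64 + 0.2; // not exactly 0.3 in IEEE-754 binary64
    let b = 0.3_f64;
    assert!(a != b);               // exact comparison fails
    assert!((a - b).abs() < 1e-6); // epsilon comparison, as in Dimensions::eq
}
```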
svg.rs | middle: the body of eval() | fim_type: identifier_body (duplicate of the file above)
svg.rs | middle: (empty) | fim_type: random_line_split (duplicate of the file above)
roxor.rs
|
use std::env;
use std::fs::File;
use std::io::prelude::*;
use std::path::Path;
use std::process;
const PREVIEW_LEN: usize = 50;
#[derive(Debug, PartialEq)]
struct Match {
offset: usize,
key: u8,
preview: String,
}
fn main() {
let args: Vec<String> = env::args().skip(1).collect();
if args.len() < 2 {
eprintln!("usage: roxor <file> <crib>");
process::exit(1);
}
let path = Path::new(&args[0]);
let crib = args[1].as_bytes();
if crib.len() < 2 {
eprintln!("error: crib too short");
process::exit(1);
}
let mut file = match File::open(&path) {
Err(why) => {
eprintln!("error: failed to open {}: {}", path.display(), why);
process::exit(1);
}
Ok(file) => file,
};
let mut ciphertext: Vec<u8> = Vec::new();
file.read_to_end(&mut ciphertext).unwrap();
if ciphertext.len() < crib.len() {
eprintln!("error: ciphertext too short");
process::exit(1);
}
for m in attack_cipher(&ciphertext[..], crib) {
println!("Found text at 0x{:x} (XOR key 0x{:02x})", m.offset, m.key);
println!(" preview: {}", m.preview);
}
}
fn attack_cipher(ciphertext: &[u8], crib: &[u8]) -> Vec<Match> {
let mut matches: Vec<Match> = Vec::new();
for (i, x) in ciphertext.iter().enumerate() {
let mut old_key = x ^ crib[0];
let mut j = 1;
while j < crib.len() && (i + j) < ciphertext.len() {
let key = ciphertext[i + j] ^ crib[j];
if key != old_key {
break;
}
old_key = key;
j += 1;
if j == crib.len() {
let preview: String = ciphertext[i..]
.iter()
.take(PREVIEW_LEN)
.map(|x| x ^ key)
.map(|x| if x >= 32 && x <= 126 { x as char } else { '.' })
.collect();
matches.push(Match {
offset: i,
key,
preview,
});
}
}
}
matches
}
#[cfg(test)]
mod tests {
use super::{attack_cipher, Match};
struct Test {
ciphertext: &'static [u8],
crib: &'static [u8],
matches: Vec<Match>,
}
#[test]
fn test_attack_cipher() {
let tests = vec![
Test {
ciphertext: &[],
crib: &[],
matches: vec![],
},
Test {
ciphertext: b"haystack",
crib: b"needle",
matches: vec![],
},
Test {
ciphertext: b"needle in haystack",
crib: b"needle",
matches: vec![
Match { offset: 0, key: 0, preview: "needle in haystack".to_string() },
],
},
Test {
ciphertext: b"a needle, another needle",
crib: b"needle",
matches: vec![
Match { offset: 2, key: 0, preview: "needle, another needle".to_string() },
Match { offset: 18, key: 0, preview: "needle".to_string() },
],
},
Test {
ciphertext: b"\x23\x62\x2c\x27\x27\x26\x2e\x27\x6e\x62\x23\x2c\x2d\x36\x2a\x27\x30\x62\x2c\x27\x27\x26\x2e\x27",
crib: b"needle",
matches: vec![
Match { offset: 2, key: 0x42, preview: "needle, another needle".to_string() },
Match { offset: 18, key: 0x42, preview: "needle".to_string() },
],
},
];
for t in tests.iter() {
assert_eq!(attack_cipher(t.ciphertext, t.crib), t.matches);
}
}
}
|
attack_cipher
|
identifier_name
|
roxor.rs
|
use std::env;
use std::fs::File;
use std::io::prelude::*;
use std::path::Path;
use std::process;
const PREVIEW_LEN: usize = 50;
#[derive(Debug, PartialEq)]
struct Match {
offset: usize,
key: u8,
preview: String,
}
fn main() {
let args: Vec<String> = env::args().skip(1).collect();
if args.len() < 2 {
eprintln!("usage: roxor <file> <crib>");
process::exit(1);
}
let path = Path::new(&args[0]);
let crib = args[1].as_bytes();
if crib.len() < 2 {
eprintln!("error: crib too short");
process::exit(1);
}
let mut file = match File::open(&path) {
Err(why) => {
eprintln!("error: failed to open {}: {}", path.display(), why);
process::exit(1);
}
Ok(file) => file,
};
let mut ciphertext: Vec<u8> = Vec::new();
file.read_to_end(&mut ciphertext).unwrap();
if ciphertext.len() < crib.len() {
eprintln!("error: ciphertext too short");
process::exit(1);
}
for m in attack_cipher(&ciphertext[..], crib) {
println!("Found text at 0x{:x} (XOR key 0x{:02x})", m.offset, m.key);
println!(" preview: {}", m.preview);
}
}
fn attack_cipher(ciphertext: &[u8], crib: &[u8]) -> Vec<Match>
|
offset: i,
key,
preview,
});
}
}
}
matches
}
#[cfg(test)]
mod tests {
use super::{attack_cipher, Match};
struct Test {
ciphertext: &'static [u8],
crib: &'static [u8],
matches: Vec<Match>,
}
#[test]
fn test_attack_cipher() {
let tests = vec![
Test {
ciphertext: &[],
crib: &[],
matches: vec![],
},
Test {
ciphertext: b"haystack",
crib: b"needle",
matches: vec![],
},
Test {
ciphertext: b"needle in haystack",
crib: b"needle",
matches: vec![
Match { offset: 0, key: 0, preview: "needle in haystack".to_string() },
],
},
Test {
ciphertext: b"a needle, another needle",
crib: b"needle",
matches: vec![
Match { offset: 2, key: 0, preview: "needle, another needle".to_string() },
Match { offset: 18, key: 0, preview: "needle".to_string() },
],
},
Test {
ciphertext: b"\x23\x62\x2c\x27\x27\x26\x2e\x27\x6e\x62\x23\x2c\x2d\x36\x2a\x27\x30\x62\x2c\x27\x27\x26\x2e\x27",
crib: b"needle",
matches: vec![
Match { offset: 2, key: 0x42, preview: "needle, another needle".to_string() },
Match { offset: 18, key: 0x42, preview: "needle".to_string() },
],
},
];
for t in tests.iter() {
assert_eq!(attack_cipher(t.ciphertext, t.crib), t.matches);
}
}
}
|
{
let mut matches: Vec<Match> = Vec::new();
for (i, x) in ciphertext.iter().enumerate() {
let mut old_key = x ^ crib[0];
let mut j = 1;
while j < crib.len() && (i + j) < ciphertext.len() {
let key = ciphertext[i + j] ^ crib[j];
if key != old_key {
break;
}
old_key = key;
j += 1;
if j == crib.len() {
let preview: String = ciphertext[i..]
.iter()
.take(PREVIEW_LEN)
.map(|x| x ^ key)
.map(|x| if x >= 32 && x <= 126 { x as char } else { '.' })
.collect();
matches.push(Match {
|
identifier_body
|
roxor.rs
|
use std::env;
use std::fs::File;
use std::io::prelude::*;
use std::path::Path;
use std::process;
const PREVIEW_LEN: usize = 50;
#[derive(Debug, PartialEq)]
struct Match {
offset: usize,
key: u8,
preview: String,
}
fn main() {
let args: Vec<String> = env::args().skip(1).collect();
if args.len() < 2 {
eprintln!("usage: roxor <file> <crib>");
process::exit(1);
}
let path = Path::new(&args[0]);
let crib = args[1].as_bytes();
if crib.len() < 2 {
eprintln!("error: crib too short");
process::exit(1);
}
let mut file = match File::open(&path) {
Err(why) => {
eprintln!("error: failed to open {}: {}", path.display(), why);
process::exit(1);
}
Ok(file) => file,
};
let mut ciphertext: Vec<u8> = Vec::new();
file.read_to_end(&mut ciphertext).unwrap();
if ciphertext.len() < crib.len() {
eprintln!("error: ciphertext too short");
process::exit(1);
}
for m in attack_cipher(&ciphertext[..], crib) {
println!("Found text at 0x{:x} (XOR key 0x{:02x})", m.offset, m.key);
println!(" preview: {}", m.preview);
}
}
fn attack_cipher(ciphertext: &[u8], crib: &[u8]) -> Vec<Match> {
let mut matches: Vec<Match> = Vec::new();
for (i, x) in ciphertext.iter().enumerate() {
let mut old_key = x ^ crib[0];
let mut j = 1;
while j < crib.len() && (i + j) < ciphertext.len() {
let key = ciphertext[i + j] ^ crib[j];
            if key != old_key {
break;
}
old_key = key;
j += 1;
if j == crib.len() {
let preview: String = ciphertext[i..]
.iter()
.take(PREVIEW_LEN)
.map(|x| x ^ key)
.map(|x| if x >= 32 && x <= 126
|
else { '.' })
.collect();
matches.push(Match {
offset: i,
key,
preview,
});
}
}
}
matches
}
#[cfg(test)]
mod tests {
use super::{attack_cipher, Match};
struct Test {
ciphertext: &'static [u8],
crib: &'static [u8],
matches: Vec<Match>,
}
#[test]
fn test_attack_cipher() {
let tests = vec![
Test {
ciphertext: &[],
crib: &[],
matches: vec![],
},
Test {
ciphertext: b"haystack",
crib: b"needle",
matches: vec![],
},
Test {
ciphertext: b"needle in haystack",
crib: b"needle",
matches: vec![
Match { offset: 0, key: 0, preview: "needle in haystack".to_string() },
],
},
Test {
ciphertext: b"a needle, another needle",
crib: b"needle",
matches: vec![
Match { offset: 2, key: 0, preview: "needle, another needle".to_string() },
Match { offset: 18, key: 0, preview: "needle".to_string() },
],
},
Test {
ciphertext: b"\x23\x62\x2c\x27\x27\x26\x2e\x27\x6e\x62\x23\x2c\x2d\x36\x2a\x27\x30\x62\x2c\x27\x27\x26\x2e\x27",
crib: b"needle",
matches: vec![
Match { offset: 2, key: 0x42, preview: "needle, another needle".to_string() },
Match { offset: 18, key: 0x42, preview: "needle".to_string() },
],
},
];
for t in tests.iter() {
assert_eq!(attack_cipher(t.ciphertext, t.crib), t.matches);
}
}
}
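The roxor rows above all center on the same crib-dragging attack against a single-byte XOR cipher: if the guessed plaintext (the crib) really occurs at an offset, XORing it against the ciphertext there must yield one constant byte, which is the key. A minimal standalone sketch of that check (helper names are hypothetical, not from the dataset):

```rust
// Single-byte XOR: the same key byte is XORed into every plaintext byte.
fn xor_encrypt(plaintext: &[u8], key: u8) -> Vec<u8> {
    plaintext.iter().map(|b| b ^ key).collect()
}

// If `crib` really appears at `offset`, XORing the ciphertext bytes there
// with the crib must produce one constant byte: the key. Any disagreement
// means the guess is wrong.
fn key_at_offset(ciphertext: &[u8], crib: &[u8], offset: usize) -> Option<u8> {
    if crib.is_empty() || offset + crib.len() > ciphertext.len() {
        return None;
    }
    let key = ciphertext[offset] ^ crib[0];
    let consistent = crib
        .iter()
        .enumerate()
        .all(|(j, &c)| ciphertext[offset + j] ^ c == key);
    if consistent { Some(key) } else { None }
}
```

This also shows why roxor rejects cribs shorter than two bytes: with a one-byte crib every offset "matches" trivially, since any single ciphertext byte XORed with the crib byte yields some key.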
|
{ x as char }
|
conditional_block
|
roxor.rs
|
use std::env;
use std::fs::File;
use std::io::prelude::*;
use std::path::Path;
use std::process;
const PREVIEW_LEN: usize = 50;
#[derive(Debug, PartialEq)]
struct Match {
offset: usize,
key: u8,
preview: String,
}
fn main() {
let args: Vec<String> = env::args().skip(1).collect();
if args.len() < 2 {
eprintln!("usage: roxor <file> <crib>");
process::exit(1);
}
let path = Path::new(&args[0]);
let crib = args[1].as_bytes();
if crib.len() < 2 {
eprintln!("error: crib too short");
process::exit(1);
}
let mut file = match File::open(&path) {
Err(why) => {
eprintln!("error: failed to open {}: {}", path.display(), why);
process::exit(1);
}
Ok(file) => file,
};
let mut ciphertext: Vec<u8> = Vec::new();
file.read_to_end(&mut ciphertext).unwrap();
if ciphertext.len() < crib.len() {
eprintln!("error: ciphertext too short");
process::exit(1);
}
for m in attack_cipher(&ciphertext[..], crib) {
println!("Found text at 0x{:x} (XOR key 0x{:02x})", m.offset, m.key);
println!(" preview: {}", m.preview);
}
}
fn attack_cipher(ciphertext: &[u8], crib: &[u8]) -> Vec<Match> {
let mut matches: Vec<Match> = Vec::new();
for (i, x) in ciphertext.iter().enumerate() {
let mut old_key = x ^ crib[0];
let mut j = 1;
while j < crib.len() && (i + j) < ciphertext.len() {
let key = ciphertext[i + j] ^ crib[j];
            if key != old_key {
break;
}
old_key = key;
j += 1;
if j == crib.len() {
let preview: String = ciphertext[i..]
.iter()
.take(PREVIEW_LEN)
.map(|x| x ^ key)
.map(|x| if x >= 32 && x <= 126 { x as char } else { '.' })
.collect();
matches.push(Match {
offset: i,
key,
preview,
});
}
}
}
matches
}
#[cfg(test)]
mod tests {
use super::{attack_cipher, Match};
struct Test {
ciphertext: &'static [u8],
crib: &'static [u8],
matches: Vec<Match>,
}
#[test]
fn test_attack_cipher() {
let tests = vec![
Test {
ciphertext: &[],
crib: &[],
matches: vec![],
},
Test {
ciphertext: b"haystack",
crib: b"needle",
matches: vec![],
|
ciphertext: b"needle in haystack",
crib: b"needle",
matches: vec![
Match { offset: 0, key: 0, preview: "needle in haystack".to_string() },
],
},
Test {
ciphertext: b"a needle, another needle",
crib: b"needle",
matches: vec![
Match { offset: 2, key: 0, preview: "needle, another needle".to_string() },
Match { offset: 18, key: 0, preview: "needle".to_string() },
],
},
Test {
ciphertext: b"\x23\x62\x2c\x27\x27\x26\x2e\x27\x6e\x62\x23\x2c\x2d\x36\x2a\x27\x30\x62\x2c\x27\x27\x26\x2e\x27",
crib: b"needle",
matches: vec![
Match { offset: 2, key: 0x42, preview: "needle, another needle".to_string() },
Match { offset: 18, key: 0x42, preview: "needle".to_string() },
],
},
];
for t in tests.iter() {
assert_eq!(attack_cipher(t.ciphertext, t.crib), t.matches);
}
}
}
|
},
Test {
|
random_line_split
|
parser_any_macro.rs
|
// from src/libsyntax/ext/tt/macro_rules.rs
use std::cell::RefCell;
use syntax::parse::parser::Parser;
use syntax::parse::token;
use syntax::ast;
use syntax::ptr::P;
use syntax::ext::base::MacResult;
use syntax::util::small_vector::SmallVector;
pub struct ParserAnyMacro<'a> {
parser: RefCell<Parser<'a>>,
}
impl<'a> ParserAnyMacro<'a> {
pub fn new(p: Parser<'a>) -> ParserAnyMacro<'a>
|
/// Make sure we don't have any tokens left to parse, so we don't
/// silently drop anything. `allow_semi` is so that "optional"
/// semicolons at the end of normal expressions aren't complained
/// about e.g. the semicolon in `macro_rules! kapow { () => {
    /// panic!(); } }` doesn't get picked up by .parse_expr(), but it's
/// allowed to be there.
fn ensure_complete_parse(&self, allow_semi: bool) {
let mut parser = self.parser.borrow_mut();
if allow_semi && parser.token == token::Semi {
parser.bump()
}
        if parser.token != token::Eof {
let token_str = parser.this_token_to_string();
let msg = format!("macro expansion ignores token `{}` and any \
following",
token_str);
let span = parser.span;
parser.span_err(span, &msg[..]);
}
}
}
impl<'a> MacResult for ParserAnyMacro<'a> {
fn make_expr(self: Box<ParserAnyMacro<'a>>) -> Option<P<ast::Expr>> {
let ret = self.parser.borrow_mut().parse_expr();
self.ensure_complete_parse(true);
Some(ret)
}
fn make_pat(self: Box<ParserAnyMacro<'a>>) -> Option<P<ast::Pat>> {
let ret = self.parser.borrow_mut().parse_pat();
self.ensure_complete_parse(false);
Some(ret)
}
fn make_items(self: Box<ParserAnyMacro<'a>>) -> Option<SmallVector<P<ast::Item>>> {
let mut ret = SmallVector::zero();
while let Some(item) = self.parser.borrow_mut().parse_item() {
ret.push(item);
}
self.ensure_complete_parse(false);
Some(ret)
}
fn make_impl_items(self: Box<ParserAnyMacro<'a>>)
-> Option<SmallVector<P<ast::ImplItem>>> {
let mut ret = SmallVector::zero();
loop {
let mut parser = self.parser.borrow_mut();
match parser.token {
token::Eof => break,
_ => ret.push(parser.parse_impl_item())
}
}
self.ensure_complete_parse(false);
Some(ret)
}
fn make_stmt(self: Box<ParserAnyMacro<'a>>) -> Option<P<ast::Stmt>> {
let ret = self.parser.borrow_mut().parse_stmt();
self.ensure_complete_parse(true);
ret
}
}
|
{
ParserAnyMacro {
parser: RefCell::new(p)
}
}
|
identifier_body
|
parser_any_macro.rs
|
// from src/libsyntax/ext/tt/macro_rules.rs
use std::cell::RefCell;
|
use syntax::parse::token;
use syntax::ast;
use syntax::ptr::P;
use syntax::ext::base::MacResult;
use syntax::util::small_vector::SmallVector;
pub struct ParserAnyMacro<'a> {
parser: RefCell<Parser<'a>>,
}
impl<'a> ParserAnyMacro<'a> {
pub fn new(p: Parser<'a>) -> ParserAnyMacro<'a> {
ParserAnyMacro {
parser: RefCell::new(p)
}
}
/// Make sure we don't have any tokens left to parse, so we don't
/// silently drop anything. `allow_semi` is so that "optional"
/// semicolons at the end of normal expressions aren't complained
/// about e.g. the semicolon in `macro_rules! kapow { () => {
    /// panic!(); } }` doesn't get picked up by .parse_expr(), but it's
/// allowed to be there.
fn ensure_complete_parse(&self, allow_semi: bool) {
let mut parser = self.parser.borrow_mut();
if allow_semi && parser.token == token::Semi {
parser.bump()
}
        if parser.token != token::Eof {
let token_str = parser.this_token_to_string();
let msg = format!("macro expansion ignores token `{}` and any \
following",
token_str);
let span = parser.span;
parser.span_err(span, &msg[..]);
}
}
}
impl<'a> MacResult for ParserAnyMacro<'a> {
fn make_expr(self: Box<ParserAnyMacro<'a>>) -> Option<P<ast::Expr>> {
let ret = self.parser.borrow_mut().parse_expr();
self.ensure_complete_parse(true);
Some(ret)
}
fn make_pat(self: Box<ParserAnyMacro<'a>>) -> Option<P<ast::Pat>> {
let ret = self.parser.borrow_mut().parse_pat();
self.ensure_complete_parse(false);
Some(ret)
}
fn make_items(self: Box<ParserAnyMacro<'a>>) -> Option<SmallVector<P<ast::Item>>> {
let mut ret = SmallVector::zero();
while let Some(item) = self.parser.borrow_mut().parse_item() {
ret.push(item);
}
self.ensure_complete_parse(false);
Some(ret)
}
fn make_impl_items(self: Box<ParserAnyMacro<'a>>)
-> Option<SmallVector<P<ast::ImplItem>>> {
let mut ret = SmallVector::zero();
loop {
let mut parser = self.parser.borrow_mut();
match parser.token {
token::Eof => break,
_ => ret.push(parser.parse_impl_item())
}
}
self.ensure_complete_parse(false);
Some(ret)
}
fn make_stmt(self: Box<ParserAnyMacro<'a>>) -> Option<P<ast::Stmt>> {
let ret = self.parser.borrow_mut().parse_stmt();
self.ensure_complete_parse(true);
ret
}
}
|
use syntax::parse::parser::Parser;
|
random_line_split
|
parser_any_macro.rs
|
// from src/libsyntax/ext/tt/macro_rules.rs
use std::cell::RefCell;
use syntax::parse::parser::Parser;
use syntax::parse::token;
use syntax::ast;
use syntax::ptr::P;
use syntax::ext::base::MacResult;
use syntax::util::small_vector::SmallVector;
pub struct ParserAnyMacro<'a> {
parser: RefCell<Parser<'a>>,
}
impl<'a> ParserAnyMacro<'a> {
pub fn new(p: Parser<'a>) -> ParserAnyMacro<'a> {
ParserAnyMacro {
parser: RefCell::new(p)
}
}
/// Make sure we don't have any tokens left to parse, so we don't
/// silently drop anything. `allow_semi` is so that "optional"
/// semicolons at the end of normal expressions aren't complained
/// about e.g. the semicolon in `macro_rules! kapow { () => {
    /// panic!(); } }` doesn't get picked up by .parse_expr(), but it's
/// allowed to be there.
fn ensure_complete_parse(&self, allow_semi: bool) {
let mut parser = self.parser.borrow_mut();
if allow_semi && parser.token == token::Semi {
parser.bump()
}
        if parser.token != token::Eof {
let token_str = parser.this_token_to_string();
let msg = format!("macro expansion ignores token `{}` and any \
following",
token_str);
let span = parser.span;
parser.span_err(span, &msg[..]);
}
}
}
impl<'a> MacResult for ParserAnyMacro<'a> {
fn make_expr(self: Box<ParserAnyMacro<'a>>) -> Option<P<ast::Expr>> {
let ret = self.parser.borrow_mut().parse_expr();
self.ensure_complete_parse(true);
Some(ret)
}
fn make_pat(self: Box<ParserAnyMacro<'a>>) -> Option<P<ast::Pat>> {
let ret = self.parser.borrow_mut().parse_pat();
self.ensure_complete_parse(false);
Some(ret)
}
fn
|
(self: Box<ParserAnyMacro<'a>>) -> Option<SmallVector<P<ast::Item>>> {
let mut ret = SmallVector::zero();
while let Some(item) = self.parser.borrow_mut().parse_item() {
ret.push(item);
}
self.ensure_complete_parse(false);
Some(ret)
}
fn make_impl_items(self: Box<ParserAnyMacro<'a>>)
-> Option<SmallVector<P<ast::ImplItem>>> {
let mut ret = SmallVector::zero();
loop {
let mut parser = self.parser.borrow_mut();
match parser.token {
token::Eof => break,
_ => ret.push(parser.parse_impl_item())
}
}
self.ensure_complete_parse(false);
Some(ret)
}
fn make_stmt(self: Box<ParserAnyMacro<'a>>) -> Option<P<ast::Stmt>> {
let ret = self.parser.borrow_mut().parse_stmt();
self.ensure_complete_parse(true);
ret
}
}
|
make_items
|
identifier_name
|
loop_def.rs
|
// loop operation
// store temp values for loop name and refill addresses
use super::ItemID;
use super::vm_code::CodeCollection;
pub struct Loop {
pub name: Option<String>,
pub continue_addr: usize,
pub break_addrs: Vec<usize>,
}
pub struct LoopCollection {
loops: Vec<Loop>,
    pub ret_type: ItemID, // so this is not only a loop collection but also a jump-statement context store
}
impl LoopCollection {
pub fn new() -> LoopCollection {
LoopCollection{
loops: Vec::new(),
ret_type: ItemID::new_invalid(),
}
}
pub fn push_loop(&mut self, name: Option<String>, continue_addr: usize) {
self.loops.push(Loop{ name: name, continue_addr: continue_addr, break_addrs: Vec::new() })
}
// Panic on empty contents
pub fn pop_and_refill(&mut self, refill_adder: usize, codes: &mut CodeCollection) {
let last = self.loops.pop().unwrap();
for break_addr in last.break_addrs {
codes.refill_addr(break_addr, refill_adder);
}
}
pub fn get_last_loop_continue_addr(&self) -> Option<usize> {
match self.loops.len() {
0 => None,
n => Some(self.loops[n - 1].continue_addr),
}
}
pub fn get_loop_continue_addr(&self, name: &str) -> Option<usize> {
for lp in &self.loops {
match lp.name {
Some(ref lp_name) => {
if lp_name == name {
return Some(lp.continue_addr);
}
}
_ => (),
}
}
return None;
}
pub fn push_last_loop_break_addr(&mut self, break_addr: usize) -> Option<()> {
match self.loops.len() {
0 => None,
n => {
self.loops[n - 1].break_addrs.push(break_addr);
Some(())
}
}
}
pub fn push_loop_break_addr(&mut self, name: &str, break_addr: usize) -> Option<()> {
for mut lp in &mut self.loops {
match lp.name {
Some(ref lp_name) => {
if lp_name == name
|
}
_ => (),
}
}
return None;
}
}
|
{
lp.break_addrs.push(break_addr);
return Some(());
}
|
conditional_block
|
loop_def.rs
|
// loop operation
// store temp values for loop name and refill addresses
use super::ItemID;
use super::vm_code::CodeCollection;
pub struct Loop {
pub name: Option<String>,
pub continue_addr: usize,
pub break_addrs: Vec<usize>,
}
pub struct LoopCollection {
loops: Vec<Loop>,
|
    pub ret_type: ItemID, // so this is not only a loop collection but also a jump-statement context store
}
impl LoopCollection {
pub fn new() -> LoopCollection {
LoopCollection{
loops: Vec::new(),
ret_type: ItemID::new_invalid(),
}
}
pub fn push_loop(&mut self, name: Option<String>, continue_addr: usize) {
self.loops.push(Loop{ name: name, continue_addr: continue_addr, break_addrs: Vec::new() })
}
// Panic on empty contents
pub fn pop_and_refill(&mut self, refill_adder: usize, codes: &mut CodeCollection) {
let last = self.loops.pop().unwrap();
for break_addr in last.break_addrs {
codes.refill_addr(break_addr, refill_adder);
}
}
pub fn get_last_loop_continue_addr(&self) -> Option<usize> {
match self.loops.len() {
0 => None,
n => Some(self.loops[n - 1].continue_addr),
}
}
pub fn get_loop_continue_addr(&self, name: &str) -> Option<usize> {
for lp in &self.loops {
match lp.name {
Some(ref lp_name) => {
if lp_name == name {
return Some(lp.continue_addr);
}
}
_ => (),
}
}
return None;
}
pub fn push_last_loop_break_addr(&mut self, break_addr: usize) -> Option<()> {
match self.loops.len() {
0 => None,
n => {
self.loops[n - 1].break_addrs.push(break_addr);
Some(())
}
}
}
pub fn push_loop_break_addr(&mut self, name: &str, break_addr: usize) -> Option<()> {
for mut lp in &mut self.loops {
match lp.name {
Some(ref lp_name) => {
if lp_name == name {
lp.break_addrs.push(break_addr);
return Some(());
}
}
_ => (),
}
}
return None;
}
}
|
random_line_split
|
|
loop_def.rs
|
// loop operation
// store temp values for loop name and refill addresses
use super::ItemID;
use super::vm_code::CodeCollection;
pub struct Loop {
pub name: Option<String>,
pub continue_addr: usize,
pub break_addrs: Vec<usize>,
}
pub struct LoopCollection {
loops: Vec<Loop>,
    pub ret_type: ItemID, // so this is not only a loop collection but also a jump-statement context store
}
impl LoopCollection {
pub fn
|
() -> LoopCollection {
LoopCollection{
loops: Vec::new(),
ret_type: ItemID::new_invalid(),
}
}
pub fn push_loop(&mut self, name: Option<String>, continue_addr: usize) {
self.loops.push(Loop{ name: name, continue_addr: continue_addr, break_addrs: Vec::new() })
}
// Panic on empty contents
pub fn pop_and_refill(&mut self, refill_adder: usize, codes: &mut CodeCollection) {
let last = self.loops.pop().unwrap();
for break_addr in last.break_addrs {
codes.refill_addr(break_addr, refill_adder);
}
}
pub fn get_last_loop_continue_addr(&self) -> Option<usize> {
match self.loops.len() {
0 => None,
n => Some(self.loops[n - 1].continue_addr),
}
}
pub fn get_loop_continue_addr(&self, name: &str) -> Option<usize> {
for lp in &self.loops {
match lp.name {
Some(ref lp_name) => {
if lp_name == name {
return Some(lp.continue_addr);
}
}
_ => (),
}
}
return None;
}
pub fn push_last_loop_break_addr(&mut self, break_addr: usize) -> Option<()> {
match self.loops.len() {
0 => None,
n => {
self.loops[n - 1].break_addrs.push(break_addr);
Some(())
}
}
}
pub fn push_loop_break_addr(&mut self, name: &str, break_addr: usize) -> Option<()> {
for mut lp in &mut self.loops {
match lp.name {
Some(ref lp_name) => {
if lp_name == name {
lp.break_addrs.push(break_addr);
return Some(());
}
}
_ => (),
}
}
return None;
}
}
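The LoopCollection rows revolve around backpatching: a `break` emits a jump whose target is unknown until the loop's end address exists, so the code records the instruction's address and fills the target in later via `pop_and_refill`. A toy sketch of that pattern (all names here are hypothetical, not the dataset's `CodeCollection` API):

```rust
// A jump instruction whose target may not be known at emit time.
#[derive(Debug, PartialEq)]
enum Code {
    Jump(usize),
    Placeholder, // emitted by `break` before the loop's end address exists
}

// Overwrite a previously emitted placeholder with the real jump target.
fn refill_addr(codes: &mut Vec<Code>, at: usize, target: usize) {
    codes[at] = Code::Jump(target);
}

// Emit a `break`, remember its address, then backpatch at loop end.
fn emit_loop() -> Vec<Code> {
    let mut codes: Vec<Code> = Vec::new();
    let mut break_addrs: Vec<usize> = Vec::new();

    codes.push(Code::Placeholder); // `break` inside the loop body
    break_addrs.push(codes.len() - 1);

    let loop_end = codes.len(); // address just past the loop
    for addr in break_addrs {
        refill_addr(&mut codes, addr, loop_end);
    }
    codes
}
```

This is the same shape as `Loop::break_addrs` plus `pop_and_refill`, just without the named-loop lookup.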
|
new
|
identifier_name
|
loop_def.rs
|
// loop operation
// store temp values for loop name and refill addresses
use super::ItemID;
use super::vm_code::CodeCollection;
pub struct Loop {
pub name: Option<String>,
pub continue_addr: usize,
pub break_addrs: Vec<usize>,
}
pub struct LoopCollection {
loops: Vec<Loop>,
    pub ret_type: ItemID, // so this is not only a loop collection but also a jump-statement context store
}
impl LoopCollection {
pub fn new() -> LoopCollection {
LoopCollection{
loops: Vec::new(),
ret_type: ItemID::new_invalid(),
}
}
pub fn push_loop(&mut self, name: Option<String>, continue_addr: usize)
|
// Panic on empty contents
pub fn pop_and_refill(&mut self, refill_adder: usize, codes: &mut CodeCollection) {
let last = self.loops.pop().unwrap();
for break_addr in last.break_addrs {
codes.refill_addr(break_addr, refill_adder);
}
}
pub fn get_last_loop_continue_addr(&self) -> Option<usize> {
match self.loops.len() {
0 => None,
n => Some(self.loops[n - 1].continue_addr),
}
}
pub fn get_loop_continue_addr(&self, name: &str) -> Option<usize> {
for lp in &self.loops {
match lp.name {
Some(ref lp_name) => {
if lp_name == name {
return Some(lp.continue_addr);
}
}
_ => (),
}
}
return None;
}
pub fn push_last_loop_break_addr(&mut self, break_addr: usize) -> Option<()> {
match self.loops.len() {
0 => None,
n => {
self.loops[n - 1].break_addrs.push(break_addr);
Some(())
}
}
}
pub fn push_loop_break_addr(&mut self, name: &str, break_addr: usize) -> Option<()> {
for mut lp in &mut self.loops {
match lp.name {
Some(ref lp_name) => {
if lp_name == name {
lp.break_addrs.push(break_addr);
return Some(());
}
}
_ => (),
}
}
return None;
}
}
|
{
self.loops.push(Loop{ name: name, continue_addr: continue_addr, break_addrs: Vec::new() })
}
|
identifier_body
|
cache.rs
|
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use std::collections::HashMap;
use std::collections::hash_map::{Occupied, Vacant};
use rand::Rng;
use std::hash::{Hash, sip};
use std::iter::repeat;
use std::rand::task_rng;
use std::slice::Items;
#[cfg(test)]
use std::cell::Cell;
pub trait Cache<K: PartialEq, V: Clone> {
fn insert(&mut self, key: K, value: V);
fn find(&mut self, key: &K) -> Option<V>;
fn find_or_create(&mut self, key: &K, blk: |&K| -> V) -> V;
fn evict_all(&mut self);
}
pub struct HashCache<K, V> {
entries: HashMap<K, V>,
}
impl<K: Clone + PartialEq + Eq + Hash, V: Clone> HashCache<K,V> {
pub fn new() -> HashCache<K, V> {
HashCache {
entries: HashMap::new(),
}
}
}
impl<K: Clone + PartialEq + Eq + Hash, V: Clone> Cache<K,V> for HashCache<K,V> {
fn insert(&mut self, key: K, value: V) {
self.entries.insert(key, value);
}
fn find(&mut self, key: &K) -> Option<V> {
match self.entries.get(key) {
Some(v) => Some(v.clone()),
None => None,
}
}
fn find_or_create(&mut self, key: &K, blk: |&K| -> V) -> V {
match self.entries.entry(key.clone()) {
Occupied(occupied) => {
(*occupied.get()).clone()
}
Vacant(vacant) => {
(*vacant.set(blk(key))).clone()
}
}
}
fn evict_all(&mut self) {
self.entries.clear();
}
}
impl<K,V> HashCache<K,V> where K: Clone + PartialEq + Eq + Hash, V: Clone {
pub fn find_equiv<'a,Sized? Q>(&'a self, key: &Q) -> Option<&'a V> where Q: Hash + Equiv<K> {
self.entries.find_equiv(key)
}
}
#[test]
fn test_hashcache() {
let mut cache: HashCache<uint, Cell<&str>> = HashCache::new();
|
cache.insert(1, Cell::new("one"));
assert!(cache.find(&1).is_some());
assert!(cache.find(&2).is_none());
cache.find_or_create(&2, |_v| { Cell::new("two") });
assert!(cache.find(&1).is_some());
assert!(cache.find(&2).is_some());
}
pub struct LRUCache<K, V> {
entries: Vec<(K, V)>,
cache_size: uint,
}
impl<K: Clone + PartialEq, V: Clone> LRUCache<K,V> {
pub fn new(size: uint) -> LRUCache<K, V> {
LRUCache {
entries: vec!(),
cache_size: size,
}
}
#[inline]
pub fn touch(&mut self, pos: uint) -> V {
let last_index = self.entries.len() - 1;
        if pos != last_index {
let entry = self.entries.remove(pos);
self.entries.push(entry.unwrap());
}
self.entries[last_index].ref1().clone()
}
pub fn iter<'a>(&'a self) -> Items<'a,(K,V)> {
self.entries.iter()
}
}
impl<K: Clone + PartialEq, V: Clone> Cache<K,V> for LRUCache<K,V> {
fn insert(&mut self, key: K, val: V) {
if self.entries.len() == self.cache_size {
self.entries.remove(0);
}
self.entries.push((key, val));
}
fn find(&mut self, key: &K) -> Option<V> {
match self.entries.iter().position(|&(ref k, _)| *k == *key) {
Some(pos) => Some(self.touch(pos)),
None => None,
}
}
fn find_or_create(&mut self, key: &K, blk: |&K| -> V) -> V {
match self.entries.iter().position(|&(ref k, _)| *k == *key) {
Some(pos) => self.touch(pos),
None => {
let val = blk(key);
self.insert(key.clone(), val.clone());
val
}
}
}
fn evict_all(&mut self) {
self.entries.clear();
}
}
pub struct SimpleHashCache<K,V> {
entries: Vec<Option<(K,V)>>,
k0: u64,
k1: u64,
}
impl<K:Clone+PartialEq+Hash,V:Clone> SimpleHashCache<K,V> {
pub fn new(cache_size: uint) -> SimpleHashCache<K,V> {
let mut r = task_rng();
SimpleHashCache {
entries: repeat(None).take(cache_size).collect(),
k0: r.gen(),
k1: r.gen(),
}
}
#[inline]
fn to_bucket(&self, h: uint) -> uint {
h % self.entries.len()
}
#[inline]
fn bucket_for_key<Q:Hash>(&self, key: &Q) -> uint {
self.to_bucket(sip::hash_with_keys(self.k0, self.k1, key) as uint)
}
#[inline]
pub fn find_equiv<'a,Q:Hash+Equiv<K>>(&'a self, key: &Q) -> Option<&'a V> {
let bucket_index = self.bucket_for_key(key);
match self.entries[bucket_index] {
Some((ref existing_key, ref value)) if key.equiv(existing_key) => Some(value),
_ => None,
}
}
}
impl<K:Clone+PartialEq+Hash,V:Clone> Cache<K,V> for SimpleHashCache<K,V> {
fn insert(&mut self, key: K, value: V) {
let bucket_index = self.bucket_for_key(&key);
self.entries[bucket_index] = Some((key, value));
}
fn find(&mut self, key: &K) -> Option<V> {
let bucket_index = self.bucket_for_key(key);
match self.entries[bucket_index] {
Some((ref existing_key, ref value)) if existing_key == key => Some((*value).clone()),
_ => None,
}
}
fn find_or_create(&mut self, key: &K, blk: |&K| -> V) -> V {
match self.find(key) {
Some(value) => return value,
None => {}
}
let value = blk(key);
self.insert((*key).clone(), value.clone());
value
}
fn evict_all(&mut self) {
for slot in self.entries.iter_mut() {
*slot = None
}
}
}
#[test]
fn test_lru_cache() {
let one = Cell::new("one");
let two = Cell::new("two");
let three = Cell::new("three");
let four = Cell::new("four");
// Test normal insertion.
let mut cache: LRUCache<uint,Cell<&str>> = LRUCache::new(2); // (_, _) (cache is empty)
cache.insert(1, one); // (1, _)
cache.insert(2, two); // (1, 2)
cache.insert(3, three); // (2, 3)
assert!(cache.find(&1).is_none()); // (2, 3) (no change)
assert!(cache.find(&3).is_some()); // (2, 3)
assert!(cache.find(&2).is_some()); // (3, 2)
// Test that LRU works (this insertion should replace 3, not 2).
cache.insert(4, four); // (2, 4)
assert!(cache.find(&1).is_none()); // (2, 4) (no change)
assert!(cache.find(&2).is_some()); // (4, 2)
assert!(cache.find(&3).is_none()); // (4, 2) (no change)
assert!(cache.find(&4).is_some()); // (2, 4) (no change)
// Test find_or_create.
cache.find_or_create(&1, |_| { Cell::new("one") }); // (4, 1)
assert!(cache.find(&1).is_some()); // (4, 1) (no change)
assert!(cache.find(&2).is_none()); // (4, 1) (no change)
assert!(cache.find(&3).is_none()); // (4, 1) (no change)
assert!(cache.find(&4).is_some()); // (1, 4)
}
|
random_line_split
|
|
cache.rs
|
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use std::collections::HashMap;
use std::collections::hash_map::{Occupied, Vacant};
use rand::Rng;
use std::hash::{Hash, sip};
use std::iter::repeat;
use std::rand::task_rng;
use std::slice::Items;
#[cfg(test)]
use std::cell::Cell;
pub trait Cache<K: PartialEq, V: Clone> {
fn insert(&mut self, key: K, value: V);
fn find(&mut self, key: &K) -> Option<V>;
fn find_or_create(&mut self, key: &K, blk: |&K| -> V) -> V;
fn evict_all(&mut self);
}
pub struct HashCache<K, V> {
entries: HashMap<K, V>,
}
impl<K: Clone + PartialEq + Eq + Hash, V: Clone> HashCache<K,V> {
pub fn new() -> HashCache<K, V> {
HashCache {
entries: HashMap::new(),
}
}
}
impl<K: Clone + PartialEq + Eq + Hash, V: Clone> Cache<K,V> for HashCache<K,V> {
fn insert(&mut self, key: K, value: V) {
self.entries.insert(key, value);
}
fn find(&mut self, key: &K) -> Option<V> {
match self.entries.get(key) {
Some(v) => Some(v.clone()),
None => None,
}
}
fn find_or_create(&mut self, key: &K, blk: |&K| -> V) -> V {
match self.entries.entry(key.clone()) {
Occupied(occupied) => {
(*occupied.get()).clone()
}
Vacant(vacant) => {
(*vacant.set(blk(key))).clone()
}
}
}
fn evict_all(&mut self) {
self.entries.clear();
}
}
impl<K,V> HashCache<K,V> where K: Clone + PartialEq + Eq + Hash, V: Clone {
pub fn find_equiv<'a,Sized? Q>(&'a self, key: &Q) -> Option<&'a V> where Q: Hash + Equiv<K> {
self.entries.find_equiv(key)
}
}
#[test]
fn test_hashcache() {
let mut cache: HashCache<uint, Cell<&str>> = HashCache::new();
cache.insert(1, Cell::new("one"));
assert!(cache.find(&1).is_some());
assert!(cache.find(&2).is_none());
cache.find_or_create(&2, |_v| { Cell::new("two") });
assert!(cache.find(&1).is_some());
assert!(cache.find(&2).is_some());
}
pub struct LRUCache<K, V> {
entries: Vec<(K, V)>,
cache_size: uint,
}
impl<K: Clone + PartialEq, V: Clone> LRUCache<K,V> {
pub fn new(size: uint) -> LRUCache<K, V> {
LRUCache {
entries: vec!(),
cache_size: size,
}
}
#[inline]
pub fn touch(&mut self, pos: uint) -> V {
let last_index = self.entries.len() - 1;
        if pos != last_index {
let entry = self.entries.remove(pos);
self.entries.push(entry.unwrap());
}
self.entries[last_index].ref1().clone()
}
pub fn iter<'a>(&'a self) -> Items<'a,(K,V)>
|
}
impl<K: Clone + PartialEq, V: Clone> Cache<K,V> for LRUCache<K,V> {
fn insert(&mut self, key: K, val: V) {
if self.entries.len() == self.cache_size {
self.entries.remove(0);
}
self.entries.push((key, val));
}
fn find(&mut self, key: &K) -> Option<V> {
match self.entries.iter().position(|&(ref k, _)| *k == *key) {
Some(pos) => Some(self.touch(pos)),
None => None,
}
}
fn find_or_create(&mut self, key: &K, blk: |&K| -> V) -> V {
match self.entries.iter().position(|&(ref k, _)| *k == *key) {
Some(pos) => self.touch(pos),
None => {
let val = blk(key);
self.insert(key.clone(), val.clone());
val
}
}
}
fn evict_all(&mut self) {
self.entries.clear();
}
}
pub struct SimpleHashCache<K,V> {
entries: Vec<Option<(K,V)>>,
k0: u64,
k1: u64,
}
impl<K:Clone+PartialEq+Hash,V:Clone> SimpleHashCache<K,V> {
pub fn new(cache_size: uint) -> SimpleHashCache<K,V> {
let mut r = task_rng();
SimpleHashCache {
entries: repeat(None).take(cache_size).collect(),
k0: r.gen(),
k1: r.gen(),
}
}
#[inline]
fn to_bucket(&self, h: uint) -> uint {
h % self.entries.len()
}
#[inline]
fn bucket_for_key<Q:Hash>(&self, key: &Q) -> uint {
self.to_bucket(sip::hash_with_keys(self.k0, self.k1, key) as uint)
}
#[inline]
pub fn find_equiv<'a,Q:Hash+Equiv<K>>(&'a self, key: &Q) -> Option<&'a V> {
let bucket_index = self.bucket_for_key(key);
match self.entries[bucket_index] {
Some((ref existing_key, ref value)) if key.equiv(existing_key) => Some(value),
_ => None,
}
}
}
impl<K:Clone+PartialEq+Hash,V:Clone> Cache<K,V> for SimpleHashCache<K,V> {
fn insert(&mut self, key: K, value: V) {
let bucket_index = self.bucket_for_key(&key);
self.entries[bucket_index] = Some((key, value));
}
fn find(&mut self, key: &K) -> Option<V> {
let bucket_index = self.bucket_for_key(key);
match self.entries[bucket_index] {
Some((ref existing_key, ref value)) if existing_key == key => Some((*value).clone()),
_ => None,
}
}
fn find_or_create(&mut self, key: &K, blk: |&K| -> V) -> V {
match self.find(key) {
Some(value) => return value,
None => {}
}
let value = blk(key);
self.insert((*key).clone(), value.clone());
value
}
fn evict_all(&mut self) {
for slot in self.entries.iter_mut() {
*slot = None
}
}
}
#[test]
fn test_lru_cache() {
let one = Cell::new("one");
let two = Cell::new("two");
let three = Cell::new("three");
let four = Cell::new("four");
// Test normal insertion.
let mut cache: LRUCache<uint,Cell<&str>> = LRUCache::new(2); // (_, _) (cache is empty)
cache.insert(1, one); // (1, _)
cache.insert(2, two); // (1, 2)
cache.insert(3, three); // (2, 3)
assert!(cache.find(&1).is_none()); // (2, 3) (no change)
assert!(cache.find(&3).is_some()); // (2, 3)
assert!(cache.find(&2).is_some()); // (3, 2)
// Test that LRU works (this insertion should replace 3, not 2).
cache.insert(4, four); // (2, 4)
assert!(cache.find(&1).is_none()); // (2, 4) (no change)
assert!(cache.find(&2).is_some()); // (4, 2)
assert!(cache.find(&3).is_none()); // (4, 2) (no change)
assert!(cache.find(&4).is_some()); // (2, 4)
// Test find_or_create.
cache.find_or_create(&1, |_| { Cell::new("one") }); // (4, 1)
assert!(cache.find(&1).is_some()); // (4, 1) (no change)
assert!(cache.find(&2).is_none()); // (4, 1) (no change)
assert!(cache.find(&3).is_none()); // (4, 1) (no change)
assert!(cache.find(&4).is_some()); // (1, 4)
}
|
{
self.entries.iter()
}
|
identifier_body
|
cache.rs
|
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use std::collections::HashMap;
use std::collections::hash_map::{Occupied, Vacant};
use rand::Rng;
use std::hash::{Hash, sip};
use std::iter::repeat;
use std::rand::task_rng;
use std::slice::Items;
#[cfg(test)]
use std::cell::Cell;
pub trait Cache<K: PartialEq, V: Clone> {
fn insert(&mut self, key: K, value: V);
fn find(&mut self, key: &K) -> Option<V>;
fn find_or_create(&mut self, key: &K, blk: |&K| -> V) -> V;
fn evict_all(&mut self);
}
pub struct HashCache<K, V> {
entries: HashMap<K, V>,
}
impl<K: Clone + PartialEq + Eq + Hash, V: Clone> HashCache<K,V> {
pub fn new() -> HashCache<K, V> {
HashCache {
entries: HashMap::new(),
}
}
}
impl<K: Clone + PartialEq + Eq + Hash, V: Clone> Cache<K,V> for HashCache<K,V> {
fn insert(&mut self, key: K, value: V) {
self.entries.insert(key, value);
}
fn find(&mut self, key: &K) -> Option<V> {
match self.entries.get(key) {
Some(v) => Some(v.clone()),
None => None,
}
}
fn find_or_create(&mut self, key: &K, blk: |&K| -> V) -> V {
match self.entries.entry(key.clone()) {
Occupied(occupied) => {
(*occupied.get()).clone()
}
Vacant(vacant) => {
(*vacant.set(blk(key))).clone()
}
}
}
fn
|
(&mut self) {
self.entries.clear();
}
}
impl<K,V> HashCache<K,V> where K: Clone + PartialEq + Eq + Hash, V: Clone {
pub fn find_equiv<'a,Sized? Q>(&'a self, key: &Q) -> Option<&'a V> where Q: Hash + Equiv<K> {
self.entries.find_equiv(key)
}
}
#[test]
fn test_hashcache() {
let mut cache: HashCache<uint, Cell<&str>> = HashCache::new();
cache.insert(1, Cell::new("one"));
assert!(cache.find(&1).is_some());
assert!(cache.find(&2).is_none());
cache.find_or_create(&2, |_v| { Cell::new("two") });
assert!(cache.find(&1).is_some());
assert!(cache.find(&2).is_some());
}
pub struct LRUCache<K, V> {
entries: Vec<(K, V)>,
cache_size: uint,
}
impl<K: Clone + PartialEq, V: Clone> LRUCache<K,V> {
pub fn new(size: uint) -> LRUCache<K, V> {
LRUCache {
entries: vec!(),
cache_size: size,
}
}
#[inline]
pub fn touch(&mut self, pos: uint) -> V {
let last_index = self.entries.len() - 1;
if pos != last_index {
let entry = self.entries.remove(pos);
self.entries.push(entry.unwrap());
}
self.entries[last_index].ref1().clone()
}
pub fn iter<'a>(&'a self) -> Items<'a,(K,V)> {
self.entries.iter()
}
}
impl<K: Clone + PartialEq, V: Clone> Cache<K,V> for LRUCache<K,V> {
fn insert(&mut self, key: K, val: V) {
if self.entries.len() == self.cache_size {
self.entries.remove(0);
}
self.entries.push((key, val));
}
fn find(&mut self, key: &K) -> Option<V> {
match self.entries.iter().position(|&(ref k, _)| *k == *key) {
Some(pos) => Some(self.touch(pos)),
None => None,
}
}
fn find_or_create(&mut self, key: &K, blk: |&K| -> V) -> V {
match self.entries.iter().position(|&(ref k, _)| *k == *key) {
Some(pos) => self.touch(pos),
None => {
let val = blk(key);
self.insert(key.clone(), val.clone());
val
}
}
}
fn evict_all(&mut self) {
self.entries.clear();
}
}
pub struct SimpleHashCache<K,V> {
entries: Vec<Option<(K,V)>>,
k0: u64,
k1: u64,
}
impl<K:Clone+PartialEq+Hash,V:Clone> SimpleHashCache<K,V> {
pub fn new(cache_size: uint) -> SimpleHashCache<K,V> {
let mut r = task_rng();
SimpleHashCache {
entries: repeat(None).take(cache_size).collect(),
k0: r.gen(),
k1: r.gen(),
}
}
#[inline]
fn to_bucket(&self, h: uint) -> uint {
h % self.entries.len()
}
#[inline]
fn bucket_for_key<Q:Hash>(&self, key: &Q) -> uint {
self.to_bucket(sip::hash_with_keys(self.k0, self.k1, key) as uint)
}
#[inline]
pub fn find_equiv<'a,Q:Hash+Equiv<K>>(&'a self, key: &Q) -> Option<&'a V> {
let bucket_index = self.bucket_for_key(key);
match self.entries[bucket_index] {
Some((ref existing_key, ref value)) if key.equiv(existing_key) => Some(value),
_ => None,
}
}
}
impl<K:Clone+PartialEq+Hash,V:Clone> Cache<K,V> for SimpleHashCache<K,V> {
fn insert(&mut self, key: K, value: V) {
let bucket_index = self.bucket_for_key(&key);
self.entries[bucket_index] = Some((key, value));
}
fn find(&mut self, key: &K) -> Option<V> {
let bucket_index = self.bucket_for_key(key);
match self.entries[bucket_index] {
Some((ref existing_key, ref value)) if existing_key == key => Some((*value).clone()),
_ => None,
}
}
fn find_or_create(&mut self, key: &K, blk: |&K| -> V) -> V {
match self.find(key) {
Some(value) => return value,
None => {}
}
let value = blk(key);
self.insert((*key).clone(), value.clone());
value
}
fn evict_all(&mut self) {
for slot in self.entries.iter_mut() {
*slot = None
}
}
}
#[test]
fn test_lru_cache() {
let one = Cell::new("one");
let two = Cell::new("two");
let three = Cell::new("three");
let four = Cell::new("four");
// Test normal insertion.
let mut cache: LRUCache<uint,Cell<&str>> = LRUCache::new(2); // (_, _) (cache is empty)
cache.insert(1, one); // (1, _)
cache.insert(2, two); // (1, 2)
cache.insert(3, three); // (2, 3)
assert!(cache.find(&1).is_none()); // (2, 3) (no change)
assert!(cache.find(&3).is_some()); // (2, 3)
assert!(cache.find(&2).is_some()); // (3, 2)
// Test that LRU works (this insertion should replace 3, not 2).
cache.insert(4, four); // (2, 4)
assert!(cache.find(&1).is_none()); // (2, 4) (no change)
assert!(cache.find(&2).is_some()); // (4, 2)
assert!(cache.find(&3).is_none()); // (4, 2) (no change)
assert!(cache.find(&4).is_some()); // (2, 4)
// Test find_or_create.
cache.find_or_create(&1, |_| { Cell::new("one") }); // (4, 1)
assert!(cache.find(&1).is_some()); // (4, 1) (no change)
assert!(cache.find(&2).is_none()); // (4, 1) (no change)
assert!(cache.find(&3).is_none()); // (4, 1) (no change)
assert!(cache.find(&4).is_some()); // (1, 4)
}
|
evict_all
|
identifier_name
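The `LRUCache` in the record above is written in pre-1.0 Rust (`uint`, `|&K| -> V` closures, `.ref1()`), which no longer compiles. A minimal modern-Rust sketch of the same Vec-backed policy — evict index 0 when full, move a touched entry to the back — might look like this (the type and method names here are illustrative, not from the original):

```rust
// Minimal Vec-backed LRU cache: the least-recently-used entry sits at
// index 0, the most-recently-used at the back. Lookups are O(n) linear
// scans, matching the original's behavior.
pub struct LruCache<K: PartialEq, V: Clone> {
    entries: Vec<(K, V)>,
    cache_size: usize,
}

impl<K: PartialEq, V: Clone> LruCache<K, V> {
    pub fn new(cache_size: usize) -> Self {
        LruCache { entries: Vec::new(), cache_size }
    }

    pub fn insert(&mut self, key: K, val: V) {
        if self.entries.len() == self.cache_size {
            self.entries.remove(0); // evict the least recently used entry
        }
        self.entries.push((key, val));
    }

    pub fn find(&mut self, key: &K) -> Option<V> {
        let pos = self.entries.iter().position(|(k, _)| k == key)?;
        let entry = self.entries.remove(pos);
        let val = entry.1.clone();
        self.entries.push(entry); // mark as most recently used
        Some(val)
    }
}
```

Unlike the original's `touch`, this folds the move-to-back step into `find`, so there is no separate position-based API to misuse.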
|
init-res-into-things.rs
|
// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#![feature(managed_boxes, unsafe_destructor)]
use std::cell::Cell;
use std::gc::{Gc, GC};
// Resources can't be copied, but storing into data structures counts
// as a move unless the stored thing is used afterwards.
struct r {
i: Gc<Cell<int>>,
}
struct
|
{ x: r }
#[unsafe_destructor]
impl Drop for r {
fn drop(&mut self) {
self.i.set(self.i.get() + 1)
}
}
fn r(i: Gc<Cell<int>>) -> r {
r {
i: i
}
}
fn test_box() {
let i = box(GC) Cell::new(0i);
{
let _a = box(GC) r(i);
}
assert_eq!(i.get(), 1);
}
fn test_rec() {
let i = box(GC) Cell::new(0i);
{
let _a = Box {x: r(i)};
}
assert_eq!(i.get(), 1);
}
fn test_tag() {
enum t {
t0(r),
}
let i = box(GC) Cell::new(0i);
{
let _a = t0(r(i));
}
assert_eq!(i.get(), 1);
}
fn test_tup() {
let i = box(GC) Cell::new(0i);
{
let _a = (r(i), 0i);
}
assert_eq!(i.get(), 1);
}
fn test_unique() {
let i = box(GC) Cell::new(0i);
{
let _a = box r(i);
}
assert_eq!(i.get(), 1);
}
fn test_box_rec() {
let i = box(GC) Cell::new(0i);
{
let _a = box(GC) Box {
x: r(i)
};
}
assert_eq!(i.get(), 1);
}
pub fn main() {
test_box();
test_rec();
test_tag();
test_tup();
test_unique();
test_box_rec();
}
|
Box
|
identifier_name
|
init-res-into-things.rs
|
// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#![feature(managed_boxes, unsafe_destructor)]
use std::cell::Cell;
use std::gc::{Gc, GC};
// Resources can't be copied, but storing into data structures counts
// as a move unless the stored thing is used afterwards.
struct r {
i: Gc<Cell<int>>,
}
struct Box { x: r }
#[unsafe_destructor]
impl Drop for r {
fn drop(&mut self) {
self.i.set(self.i.get() + 1)
}
}
fn r(i: Gc<Cell<int>>) -> r {
r {
i: i
}
}
fn test_box() {
let i = box(GC) Cell::new(0i);
{
let _a = box(GC) r(i);
}
assert_eq!(i.get(), 1);
}
fn test_rec() {
let i = box(GC) Cell::new(0i);
{
let _a = Box {x: r(i)};
|
fn test_tag() {
enum t {
t0(r),
}
let i = box(GC) Cell::new(0i);
{
let _a = t0(r(i));
}
assert_eq!(i.get(), 1);
}
fn test_tup() {
let i = box(GC) Cell::new(0i);
{
let _a = (r(i), 0i);
}
assert_eq!(i.get(), 1);
}
fn test_unique() {
let i = box(GC) Cell::new(0i);
{
let _a = box r(i);
}
assert_eq!(i.get(), 1);
}
fn test_box_rec() {
let i = box(GC) Cell::new(0i);
{
let _a = box(GC) Box {
x: r(i)
};
}
assert_eq!(i.get(), 1);
}
pub fn main() {
test_box();
test_rec();
test_tag();
test_tup();
test_unique();
test_box_rec();
}
|
}
assert_eq!(i.get(), 1);
}
|
random_line_split
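The `init-res-into-things.rs` test above relies on the long-removed `Gc`/`box(GC)` machinery, but the property it checks — storing a resource into a data structure is a move, and the destructor still runs exactly once when the container is dropped — is easy to demonstrate in modern Rust. A sketch under that assumption, with `Rc<Cell<i32>>` standing in for `Gc<Cell<int>>` (names here are illustrative):

```rust
use std::cell::Cell;
use std::rc::Rc;

// A "resource" whose destructor bumps a shared counter, so we can
// verify it is dropped exactly once after being moved into a container.
struct R {
    i: Rc<Cell<i32>>,
}

impl Drop for R {
    fn drop(&mut self) {
        self.i.set(self.i.get() + 1);
    }
}

fn drops_after_storing() -> i32 {
    let i = Rc::new(Cell::new(0));
    {
        // Moving the resource into a tuple transfers ownership; dropping
        // the tuple at end of scope runs R's destructor exactly once.
        let _a = (R { i: i.clone() }, 0i32);
    }
    i.get()
}
```

This mirrors the `test_tup` case; the struct, enum, and `Box` variants in the original follow the same pattern.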
|
nibbleslice.rs
|
// Copyright 2015, 2016 Ethcore (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Nibble-orientated view onto byte-slice, allowing nibble-precision offsets.
use std::cmp::*;
use std::fmt;
use bytes::*;
/// Nibble-orientated view onto byte-slice, allowing nibble-precision offsets.
///
/// This is an immutable struct. No operations actually change it.
///
/// # Example
/// ```rust
/// extern crate ethcore_util;
/// use ethcore_util::nibbleslice::*;
/// fn main() {
/// let d1 = &[0x01u8, 0x23, 0x45];
/// let d2 = &[0x34u8, 0x50, 0x12];
/// let d3 = &[0x00u8, 0x12];
/// let n1 = NibbleSlice::new(d1); // 0,1,2,3,4,5
/// let n2 = NibbleSlice::new(d2); // 3,4,5,0,1,2
/// let n3 = NibbleSlice::new_offset(d3, 1); // 0,1,2
/// assert!(n1 > n3); // 0,1,2,... > 0,1,2
/// assert!(n1 < n2); // 0,... < 3,...
/// assert!(n2.mid(3) == n3); // 0,1,2 == 0,1,2
/// assert!(n1.starts_with(&n3));
/// assert_eq!(n1.common_prefix(&n3), 3);
/// assert_eq!(n2.mid(3).common_prefix(&n1), 3);
/// }
/// ```
#[derive(Copy, Clone, Eq, Ord)]
pub struct NibbleSlice<'a> {
data: &'a [u8],
offset: usize,
data_encode_suffix: &'a [u8],
offset_encode_suffix: usize,
}
/// Iterator type for a nibble slice.
pub struct NibbleSliceIterator<'a> {
p: &'a NibbleSlice<'a>,
i: usize,
}
impl<'a> Iterator for NibbleSliceIterator<'a> {
type Item = u8;
fn next(&mut self) -> Option<u8> {
self.i += 1;
match self.i <= self.p.len() {
true => Some(self.p.at(self.i - 1)),
false => None,
}
}
}
impl<'a, 'view> NibbleSlice<'a>
where 'a: 'view,
{
/// Create a new nibble slice with the given byte-slice.
pub fn new(data: &'a [u8]) -> Self {
NibbleSlice::new_offset(data, 0)
}
/// Create a new nibble slice from the given byte-slice and a nibble offset.
pub fn new_offset(data: &'a [u8], offset: usize) -> Self {
NibbleSlice {
data: data,
offset: offset,
data_encode_suffix: &b""[..],
offset_encode_suffix: 0,
}
}
/// Create a composed nibble slice; one followed by the other.
pub fn new_composed(a: &'a NibbleSlice, b: &'a NibbleSlice) -> Self {
NibbleSlice {
data: a.data,
offset: a.offset,
data_encode_suffix: b.data,
offset_encode_suffix: b.offset,
}
}
// pub fn new_composed_bytes_offset(a: &NibbleSlice, b: &NibbleSlice) -> (Bytes, usize) {
// let r: Vec<u8>::with_capacity((a.len() + b.len() + 1) / 2);
// let mut i = (a.len() + b.len()) % 2;
// while i < a.len() {
// match i % 2 {
// 0 =>,
// 1 =>,
// }
// i += 1;
// }
// while i < a.len() + b.len() {
// i += 1;
// }
// (r, a.len() + b.len())
// }
/// Get an iterator for the series of nibbles.
pub fn iter(&'a self) -> NibbleSliceIterator<'a> {
NibbleSliceIterator { p: self, i: 0 }
}
/// Create a new nibble slice from the given HPE encoded data (e.g. output of `encoded()`).
pub fn from_encoded(data: &'a [u8]) -> (NibbleSlice, bool) {
(Self::new_offset(data, if data[0] & 16 == 16 { 1 } else { 2 }), data[0] & 32 == 32)
}
/// Is this an empty slice?
pub fn is_empty(&self) -> bool {
self.len() == 0
}
/// Get the length (in nibbles, naturally) of this slice.
pub fn len(&self) -> usize {
(self.data.len() + self.data_encode_suffix.len()) * 2 - self.offset - self.offset_encode_suffix
}
/// Get the nibble at position `i`.
pub fn at(&self, i: usize) -> u8 {
let l = self.data.len() * 2 - self.offset;
if i < l {
if (self.offset + i) & 1 == 1 { self.data[(self.offset + i) / 2] & 15u8 } else { self.data[(self.offset + i) / 2] >> 4 }
} else {
let i = i - l;
if (self.offset_encode_suffix + i) & 1 == 1 { self.data_encode_suffix[(self.offset_encode_suffix + i) / 2] & 15u8 } else { self.data_encode_suffix[(self.offset_encode_suffix + i) / 2] >> 4 }
}
}
/// Return object which represents a view on to this slice (further) offset by `i` nibbles.
pub fn mid(&'view self, i: usize) -> NibbleSlice<'a> {
NibbleSlice {
data: self.data,
offset: self.offset + i,
data_encode_suffix: &b""[..],
offset_encode_suffix: 0,
}
}
/// Do we start with the same nibbles as the whole of `them`?
pub fn starts_with(&self, them: &Self) -> bool {
self.common_prefix(them) == them.len()
}
/// How many of the same nibbles at the beginning do we match with `them`?
pub fn common_prefix(&self, them: &Self) -> usize {
let s = min(self.len(), them.len());
let mut i = 0usize;
while i < s {
if self.at(i) != them.at(i) {
break;
}
i += 1;
}
i
}
/// Encode the whole nibble slice in prefixed hex notation, noting whether it `is_leaf`.
pub fn encoded(&self, is_leaf: bool) -> Bytes {
let l = self.len();
let mut r = Bytes::with_capacity(l / 2 + 1);
let mut i = l % 2;
r.push(if i == 1 { 0x10 + self.at(0) } else { 0 } + if is_leaf { 0x20 } else { 0 });
while i < l {
r.push(self.at(i) * 16 + self.at(i + 1));
i += 2;
}
r
}
/// Encode only the leftmost `n` bytes of the nibble slice in prefixed hex notation,
/// noting whether it `is_leaf`.
pub fn encoded_leftmost(&self, n: usize, is_leaf: bool) -> Bytes {
let l = min(self.len(), n);
let mut r = Bytes::with_capacity(l / 2 + 1);
let mut i = l % 2;
r.push(if i == 1 { 0x10 + self.at(0) } else
|
+ if is_leaf { 0x20 } else { 0 });
while i < l {
r.push(self.at(i) * 16 + self.at(i + 1));
i += 2;
}
r
}
}
impl<'a> PartialEq for NibbleSlice<'a> {
fn eq(&self, them: &Self) -> bool {
self.len() == them.len() && self.starts_with(them)
}
}
impl<'a> PartialOrd for NibbleSlice<'a> {
fn partial_cmp(&self, them: &Self) -> Option<Ordering> {
let s = min(self.len(), them.len());
let mut i = 0usize;
while i < s {
match self.at(i).partial_cmp(&them.at(i)).unwrap() {
Ordering::Less => return Some(Ordering::Less),
Ordering::Greater => return Some(Ordering::Greater),
_ => i += 1,
}
}
self.len().partial_cmp(&them.len())
}
}
impl<'a> fmt::Debug for NibbleSlice<'a> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
for i in 0..self.len() {
match i {
0 => try!(write!(f, "{:01x}", self.at(i))),
_ => try!(write!(f, "'{:01x}", self.at(i))),
}
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::NibbleSlice;
static D: &'static [u8; 3] = &[0x01u8, 0x23, 0x45];
#[test]
fn basics() {
let n = NibbleSlice::new(D);
assert_eq!(n.len(), 6);
assert!(!n.is_empty());
let n = NibbleSlice::new_offset(D, 6);
assert!(n.is_empty());
let n = NibbleSlice::new_offset(D, 3);
assert_eq!(n.len(), 3);
for i in 0..3 {
assert_eq!(n.at(i), i as u8 + 3);
}
}
#[test]
fn iterator() {
let n = NibbleSlice::new(D);
let mut nibbles: Vec<u8> = vec![];
nibbles.extend(n.iter());
assert_eq!(nibbles, (0u8..6).collect::<Vec<_>>())
}
#[test]
fn mid() {
let n = NibbleSlice::new(D);
let m = n.mid(2);
for i in 0..4 {
assert_eq!(m.at(i), i as u8 + 2);
}
let m = n.mid(3);
for i in 0..3 {
assert_eq!(m.at(i), i as u8 + 3);
}
}
#[test]
fn encoded() {
let n = NibbleSlice::new(D);
assert_eq!(n.encoded(false), &[0x00, 0x01, 0x23, 0x45]);
assert_eq!(n.encoded(true), &[0x20, 0x01, 0x23, 0x45]);
assert_eq!(n.mid(1).encoded(false), &[0x11, 0x23, 0x45]);
assert_eq!(n.mid(1).encoded(true), &[0x31, 0x23, 0x45]);
}
#[test]
fn from_encoded() {
let n = NibbleSlice::new(D);
assert_eq!((n, false), NibbleSlice::from_encoded(&[0x00, 0x01, 0x23, 0x45]));
assert_eq!((n, true), NibbleSlice::from_encoded(&[0x20, 0x01, 0x23, 0x45]));
assert_eq!((n.mid(1), false), NibbleSlice::from_encoded(&[0x11, 0x23, 0x45]));
assert_eq!((n.mid(1), true), NibbleSlice::from_encoded(&[0x31, 0x23, 0x45]));
}
#[test]
fn shared() {
let n = NibbleSlice::new(D);
let other = &[0x01u8, 0x23, 0x01, 0x23, 0x45, 0x67];
let m = NibbleSlice::new(other);
assert_eq!(n.common_prefix(&m), 4);
assert_eq!(m.common_prefix(&n), 4);
assert_eq!(n.mid(1).common_prefix(&m.mid(1)), 3);
assert_eq!(n.mid(1).common_prefix(&m.mid(2)), 0);
assert_eq!(n.common_prefix(&m.mid(4)), 6);
assert!(!n.starts_with(&m.mid(4)));
assert!(m.mid(4).starts_with(&n));
}
#[test]
fn compare() {
let other = &[0x01u8, 0x23, 0x01, 0x23, 0x45];
let n = NibbleSlice::new(D);
let m = NibbleSlice::new(other);
assert!(n != m);
assert!(n > m);
assert!(m < n);
assert!(n == m.mid(4));
assert!(n >= m.mid(4));
assert!(n <= m.mid(4));
}
}
|
{ 0 }
|
conditional_block
|
nibbleslice.rs
|
// Copyright 2015, 2016 Ethcore (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Nibble-orientated view onto byte-slice, allowing nibble-precision offsets.
use std::cmp::*;
use std::fmt;
use bytes::*;
/// Nibble-orientated view onto byte-slice, allowing nibble-precision offsets.
///
/// This is an immutable struct. No operations actually change it.
///
/// # Example
/// ```rust
/// extern crate ethcore_util;
/// use ethcore_util::nibbleslice::*;
/// fn main() {
/// let d1 = &[0x01u8, 0x23, 0x45];
/// let d2 = &[0x34u8, 0x50, 0x12];
/// let d3 = &[0x00u8, 0x12];
/// let n1 = NibbleSlice::new(d1); // 0,1,2,3,4,5
/// let n2 = NibbleSlice::new(d2); // 3,4,5,0,1,2
/// let n3 = NibbleSlice::new_offset(d3, 1); // 0,1,2
/// assert!(n1 > n3); // 0,1,2,... > 0,1,2
/// assert!(n1 < n2); // 0,... < 3,...
/// assert!(n2.mid(3) == n3); // 0,1,2 == 0,1,2
/// assert!(n1.starts_with(&n3));
/// assert_eq!(n1.common_prefix(&n3), 3);
/// assert_eq!(n2.mid(3).common_prefix(&n1), 3);
/// }
/// ```
#[derive(Copy, Clone, Eq, Ord)]
pub struct NibbleSlice<'a> {
data: &'a [u8],
offset: usize,
data_encode_suffix: &'a [u8],
offset_encode_suffix: usize,
}
/// Iterator type for a nibble slice.
pub struct NibbleSliceIterator<'a> {
p: &'a NibbleSlice<'a>,
i: usize,
}
impl<'a> Iterator for NibbleSliceIterator<'a> {
type Item = u8;
fn next(&mut self) -> Option<u8> {
self.i += 1;
match self.i <= self.p.len() {
true => Some(self.p.at(self.i - 1)),
false => None,
}
}
}
impl<'a, 'view> NibbleSlice<'a>
where 'a: 'view,
{
/// Create a new nibble slice with the given byte-slice.
pub fn new(data: &'a [u8]) -> Self {
NibbleSlice::new_offset(data, 0)
}
/// Create a new nibble slice from the given byte-slice and a nibble offset.
pub fn new_offset(data: &'a [u8], offset: usize) -> Self {
NibbleSlice {
data: data,
offset: offset,
data_encode_suffix: &b""[..],
offset_encode_suffix: 0,
}
}
/// Create a composed nibble slice; one followed by the other.
pub fn new_composed(a: &'a NibbleSlice, b: &'a NibbleSlice) -> Self {
NibbleSlice {
data: a.data,
offset: a.offset,
data_encode_suffix: b.data,
offset_encode_suffix: b.offset,
}
}
// pub fn new_composed_bytes_offset(a: &NibbleSlice, b: &NibbleSlice) -> (Bytes, usize) {
// let r: Vec<u8>::with_capacity((a.len() + b.len() + 1) / 2);
// let mut i = (a.len() + b.len()) % 2;
// while i < a.len() {
// match i % 2 {
// 0 =>,
// 1 =>,
// }
// i += 1;
// }
// while i < a.len() + b.len() {
// i += 1;
// }
// (r, a.len() + b.len())
// }
/// Get an iterator for the series of nibbles.
pub fn iter(&'a self) -> NibbleSliceIterator<'a> {
NibbleSliceIterator { p: self, i: 0 }
}
/// Create a new nibble slice from the given HPE encoded data (e.g. output of `encoded()`).
pub fn from_encoded(data: &'a [u8]) -> (NibbleSlice, bool) {
(Self::new_offset(data, if data[0] & 16 == 16 { 1 } else { 2 }), data[0] & 32 == 32)
}
/// Is this an empty slice?
pub fn is_empty(&self) -> bool {
self.len() == 0
}
/// Get the length (in nibbles, naturally) of this slice.
pub fn len(&self) -> usize {
(self.data.len() + self.data_encode_suffix.len()) * 2 - self.offset - self.offset_encode_suffix
}
/// Get the nibble at position `i`.
pub fn at(&self, i: usize) -> u8 {
let l = self.data.len() * 2 - self.offset;
if i < l {
if (self.offset + i) & 1 == 1 { self.data[(self.offset + i) / 2] & 15u8 } else { self.data[(self.offset + i) / 2] >> 4 }
} else {
let i = i - l;
if (self.offset_encode_suffix + i) & 1 == 1 { self.data_encode_suffix[(self.offset_encode_suffix + i) / 2] & 15u8 } else { self.data_encode_suffix[(self.offset_encode_suffix + i) / 2] >> 4 }
}
}
/// Return object which represents a view on to this slice (further) offset by `i` nibbles.
pub fn mid(&'view self, i: usize) -> NibbleSlice<'a> {
NibbleSlice {
data: self.data,
offset: self.offset + i,
data_encode_suffix: &b""[..],
offset_encode_suffix: 0,
}
}
/// Do we start with the same nibbles as the whole of `them`?
pub fn starts_with(&self, them: &Self) -> bool {
self.common_prefix(them) == them.len()
}
/// How many of the same nibbles at the beginning do we match with `them`?
pub fn common_prefix(&self, them: &Self) -> usize {
let s = min(self.len(), them.len());
let mut i = 0usize;
while i < s {
if self.at(i) != them.at(i) {
break;
}
i += 1;
}
i
}
/// Encode the whole nibble slice in prefixed hex notation, noting whether it `is_leaf`.
pub fn encoded(&self, is_leaf: bool) -> Bytes {
let l = self.len();
let mut r = Bytes::with_capacity(l / 2 + 1);
let mut i = l % 2;
r.push(if i == 1 { 0x10 + self.at(0) } else { 0 } + if is_leaf { 0x20 } else { 0 });
while i < l {
r.push(self.at(i) * 16 + self.at(i + 1));
i += 2;
}
r
}
/// Encode only the leftmost `n` bytes of the nibble slice in prefixed hex notation,
/// noting whether it `is_leaf`.
pub fn encoded_leftmost(&self, n: usize, is_leaf: bool) -> Bytes
|
}
impl<'a> PartialEq for NibbleSlice<'a> {
fn eq(&self, them: &Self) -> bool {
self.len() == them.len() && self.starts_with(them)
}
}
impl<'a> PartialOrd for NibbleSlice<'a> {
fn partial_cmp(&self, them: &Self) -> Option<Ordering> {
let s = min(self.len(), them.len());
let mut i = 0usize;
while i < s {
match self.at(i).partial_cmp(&them.at(i)).unwrap() {
Ordering::Less => return Some(Ordering::Less),
Ordering::Greater => return Some(Ordering::Greater),
_ => i += 1,
}
}
self.len().partial_cmp(&them.len())
}
}
impl<'a> fmt::Debug for NibbleSlice<'a> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
for i in 0..self.len() {
match i {
0 => try!(write!(f, "{:01x}", self.at(i))),
_ => try!(write!(f, "'{:01x}", self.at(i))),
}
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::NibbleSlice;
static D: &'static [u8; 3] = &[0x01u8, 0x23, 0x45];
#[test]
fn basics() {
let n = NibbleSlice::new(D);
assert_eq!(n.len(), 6);
assert!(!n.is_empty());
let n = NibbleSlice::new_offset(D, 6);
assert!(n.is_empty());
let n = NibbleSlice::new_offset(D, 3);
assert_eq!(n.len(), 3);
for i in 0..3 {
assert_eq!(n.at(i), i as u8 + 3);
}
}
#[test]
fn iterator() {
let n = NibbleSlice::new(D);
let mut nibbles: Vec<u8> = vec![];
nibbles.extend(n.iter());
assert_eq!(nibbles, (0u8..6).collect::<Vec<_>>())
}
#[test]
fn mid() {
let n = NibbleSlice::new(D);
let m = n.mid(2);
for i in 0..4 {
assert_eq!(m.at(i), i as u8 + 2);
}
let m = n.mid(3);
for i in 0..3 {
assert_eq!(m.at(i), i as u8 + 3);
}
}
#[test]
fn encoded() {
let n = NibbleSlice::new(D);
assert_eq!(n.encoded(false), &[0x00, 0x01, 0x23, 0x45]);
assert_eq!(n.encoded(true), &[0x20, 0x01, 0x23, 0x45]);
assert_eq!(n.mid(1).encoded(false), &[0x11, 0x23, 0x45]);
assert_eq!(n.mid(1).encoded(true), &[0x31, 0x23, 0x45]);
}
#[test]
fn from_encoded() {
let n = NibbleSlice::new(D);
assert_eq!((n, false), NibbleSlice::from_encoded(&[0x00, 0x01, 0x23, 0x45]));
assert_eq!((n, true), NibbleSlice::from_encoded(&[0x20, 0x01, 0x23, 0x45]));
assert_eq!((n.mid(1), false), NibbleSlice::from_encoded(&[0x11, 0x23, 0x45]));
assert_eq!((n.mid(1), true), NibbleSlice::from_encoded(&[0x31, 0x23, 0x45]));
}
#[test]
fn shared() {
let n = NibbleSlice::new(D);
let other = &[0x01u8, 0x23, 0x01, 0x23, 0x45, 0x67];
let m = NibbleSlice::new(other);
assert_eq!(n.common_prefix(&m), 4);
assert_eq!(m.common_prefix(&n), 4);
assert_eq!(n.mid(1).common_prefix(&m.mid(1)), 3);
assert_eq!(n.mid(1).common_prefix(&m.mid(2)), 0);
assert_eq!(n.common_prefix(&m.mid(4)), 6);
assert!(!n.starts_with(&m.mid(4)));
assert!(m.mid(4).starts_with(&n));
}
#[test]
fn compare() {
let other = &[0x01u8, 0x23, 0x01, 0x23, 0x45];
let n = NibbleSlice::new(D);
let m = NibbleSlice::new(other);
assert!(n != m);
assert!(n > m);
assert!(m < n);
assert!(n == m.mid(4));
assert!(n >= m.mid(4));
assert!(n <= m.mid(4));
}
}
|
{
let l = min(self.len(), n);
let mut r = Bytes::with_capacity(l / 2 + 1);
let mut i = l % 2;
r.push(if i == 1 { 0x10 + self.at(0) } else { 0 } + if is_leaf { 0x20 } else { 0 });
while i < l {
r.push(self.at(i) * 16 + self.at(i + 1));
i += 2;
}
r
}
|
identifier_body
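The `encoded`/`from_encoded` pair in the records above implements hex-prefix encoding: the first byte carries a 0x20 leaf flag and, for odd-length sequences, a 0x10 flag plus the first nibble; remaining nibbles are packed two per byte. A standalone modern-Rust sketch over plain nibble vectors, decoupled from `NibbleSlice` (the function names are hypothetical):

```rust
// Hex-prefix encode a nibble sequence. The first output byte holds flags
// (0x20 = leaf, 0x10 = odd length) plus the first nibble when the length
// is odd; the remaining nibbles are packed two per byte.
fn hpe_encode(nibbles: &[u8], is_leaf: bool) -> Vec<u8> {
    let mut out = Vec::with_capacity(nibbles.len() / 2 + 1);
    let mut first = if is_leaf { 0x20u8 } else { 0x00 };
    let mut i = 0;
    if nibbles.len() % 2 == 1 {
        first += 0x10 + nibbles[0]; // odd length: fold first nibble into flag byte
        i = 1;
    }
    out.push(first);
    while i < nibbles.len() {
        out.push(nibbles[i] * 16 + nibbles[i + 1]);
        i += 2;
    }
    out
}

// Inverse of hpe_encode: recover the nibble sequence and the leaf flag.
fn hpe_decode(data: &[u8]) -> (Vec<u8>, bool) {
    let is_leaf = data[0] & 0x20 == 0x20;
    let mut nibbles = Vec::new();
    if data[0] & 0x10 == 0x10 {
        nibbles.push(data[0] & 0x0f); // odd length: first nibble lives in the flag byte
    }
    for b in &data[1..] {
        nibbles.push(b >> 4);
        nibbles.push(b & 0x0f);
    }
    (nibbles, is_leaf)
}
```

The test vectors in the `encoded` and `from_encoded` tests (e.g. `[0x11, 0x23, 0x45]` for the odd-length slice `1,2,3,4,5`) fall out directly from this scheme.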
|
nibbleslice.rs
|
// Copyright 2015, 2016 Ethcore (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Nibble-orientated view onto byte-slice, allowing nibble-precision offsets.
use std::cmp::*;
use std::fmt;
use bytes::*;
/// Nibble-orientated view onto byte-slice, allowing nibble-precision offsets.
///
/// This is an immutable struct. No operations actually change it.
///
/// # Example
/// ```rust
/// extern crate ethcore_util;
/// use ethcore_util::nibbleslice::*;
/// fn main() {
/// let d1 = &[0x01u8, 0x23, 0x45];
/// let d2 = &[0x34u8, 0x50, 0x12];
/// let d3 = &[0x00u8, 0x12];
/// let n1 = NibbleSlice::new(d1); // 0,1,2,3,4,5
/// let n2 = NibbleSlice::new(d2); // 3,4,5,0,1,2
/// let n3 = NibbleSlice::new_offset(d3, 1); // 0,1,2
/// assert!(n1 > n3); // 0,1,2,... > 0,1,2
/// assert!(n1 < n2); // 0,... < 3,...
/// assert!(n2.mid(3) == n3); // 0,1,2 == 0,1,2
/// assert!(n1.starts_with(&n3));
/// assert_eq!(n1.common_prefix(&n3), 3);
/// assert_eq!(n2.mid(3).common_prefix(&n1), 3);
/// }
/// ```
#[derive(Copy, Clone, Eq, Ord)]
pub struct NibbleSlice<'a> {
data: &'a [u8],
offset: usize,
data_encode_suffix: &'a [u8],
offset_encode_suffix: usize,
}
/// Iterator type for a nibble slice.
pub struct NibbleSliceIterator<'a> {
p: &'a NibbleSlice<'a>,
i: usize,
}
impl<'a> Iterator for NibbleSliceIterator<'a> {
type Item = u8;
fn next(&mut self) -> Option<u8> {
self.i += 1;
match self.i <= self.p.len() {
true => Some(self.p.at(self.i - 1)),
false => None,
}
}
}
impl<'a, 'view> NibbleSlice<'a>
where 'a: 'view,
{
/// Create a new nibble slice with the given byte-slice.
pub fn new(data: &'a [u8]) -> Self {
NibbleSlice::new_offset(data, 0)
}
/// Create a new nibble slice from the given byte-slice and a nibble offset.
pub fn new_offset(data: &'a [u8], offset: usize) -> Self {
NibbleSlice {
data: data,
offset: offset,
data_encode_suffix: &b""[..],
offset_encode_suffix: 0,
}
}
/// Create a composed nibble slice; one followed by the other.
pub fn new_composed(a: &'a NibbleSlice, b: &'a NibbleSlice) -> Self {
NibbleSlice {
data: a.data,
offset: a.offset,
data_encode_suffix: b.data,
offset_encode_suffix: b.offset,
}
}
// pub fn new_composed_bytes_offset(a: &NibbleSlice, b: &NibbleSlice) -> (Bytes, usize) {
// let r: Vec<u8>::with_capacity((a.len() + b.len() + 1) / 2);
// let mut i = (a.len() + b.len()) % 2;
// while i < a.len() {
// match i % 2 {
// 0 =>,
// 1 =>,
// }
// i += 1;
// }
// while i < a.len() + b.len() {
// i += 1;
// }
// (r, a.len() + b.len())
// }
/// Get an iterator for the series of nibbles.
pub fn iter(&'a self) -> NibbleSliceIterator<'a> {
NibbleSliceIterator { p: self, i: 0 }
}
/// Create a new nibble slice from the given HPE encoded data (e.g. output of `encoded()`).
pub fn from_encoded(data: &'a [u8]) -> (NibbleSlice, bool) {
(Self::new_offset(data, if data[0] & 16 == 16 { 1 } else { 2 }), data[0] & 32 == 32)
}
/// Is this an empty slice?
pub fn is_empty(&self) -> bool {
self.len() == 0
}
/// Get the length (in nibbles, naturally) of this slice.
pub fn len(&self) -> usize {
(self.data.len() + self.data_encode_suffix.len()) * 2 - self.offset - self.offset_encode_suffix
}
/// Get the nibble at position `i`.
pub fn
|
(&self, i: usize) -> u8 {
let l = self.data.len() * 2 - self.offset;
if i < l {
if (self.offset + i) & 1 == 1 { self.data[(self.offset + i) / 2] & 15u8 } else { self.data[(self.offset + i) / 2] >> 4 }
} else {
let i = i - l;
if (self.offset_encode_suffix + i) & 1 == 1 { self.data_encode_suffix[(self.offset_encode_suffix + i) / 2] & 15u8 } else { self.data_encode_suffix[(self.offset_encode_suffix + i) / 2] >> 4 }
}
}
/// Return an object which represents a view onto this slice (further) offset by `i` nibbles.
pub fn mid(&'view self, i: usize) -> NibbleSlice<'a> {
NibbleSlice {
data: self.data,
offset: self.offset + i,
data_encode_suffix: &b""[..],
offset_encode_suffix: 0,
}
}
/// Do we start with the same nibbles as the whole of `them`?
pub fn starts_with(&self, them: &Self) -> bool {
self.common_prefix(them) == them.len()
}
/// How many of the same nibbles at the beginning do we match with `them`?
pub fn common_prefix(&self, them: &Self) -> usize {
let s = min(self.len(), them.len());
let mut i = 0usize;
while i < s {
if self.at(i) != them.at(i) {
break;
}
i += 1;
}
i
}
/// Encode the whole nibble slice in prefixed hex notation, noting whether it `is_leaf`.
pub fn encoded(&self, is_leaf: bool) -> Bytes {
let l = self.len();
let mut r = Bytes::with_capacity(l / 2 + 1);
let mut i = l % 2;
r.push(if i == 1 { 0x10 + self.at(0) } else { 0 } + if is_leaf { 0x20 } else { 0 });
while i < l {
r.push(self.at(i) * 16 + self.at(i + 1));
i += 2;
}
r
}
/// Encode only the leftmost `n` nibbles of the nibble slice in prefixed hex notation,
/// noting whether it `is_leaf`.
pub fn encoded_leftmost(&self, n: usize, is_leaf: bool) -> Bytes {
let l = min(self.len(), n);
let mut r = Bytes::with_capacity(l / 2 + 1);
let mut i = l % 2;
r.push(if i == 1 { 0x10 + self.at(0) } else { 0 } + if is_leaf { 0x20 } else { 0 });
while i < l {
r.push(self.at(i) * 16 + self.at(i + 1));
i += 2;
}
r
}
}
impl<'a> PartialEq for NibbleSlice<'a> {
fn eq(&self, them: &Self) -> bool {
self.len() == them.len() && self.starts_with(them)
}
}
impl<'a> PartialOrd for NibbleSlice<'a> {
fn partial_cmp(&self, them: &Self) -> Option<Ordering> {
let s = min(self.len(), them.len());
let mut i = 0usize;
while i < s {
match self.at(i).partial_cmp(&them.at(i)).unwrap() {
Ordering::Less => return Some(Ordering::Less),
Ordering::Greater => return Some(Ordering::Greater),
_ => i += 1,
}
}
self.len().partial_cmp(&them.len())
}
}
impl<'a> fmt::Debug for NibbleSlice<'a> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
for i in 0..self.len() {
match i {
0 => try!(write!(f, "{:01x}", self.at(i))),
_ => try!(write!(f, "'{:01x}", self.at(i))),
}
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::NibbleSlice;
static D: &'static [u8; 3] = &[0x01u8, 0x23, 0x45];
#[test]
fn basics() {
let n = NibbleSlice::new(D);
assert_eq!(n.len(), 6);
assert!(!n.is_empty());
let n = NibbleSlice::new_offset(D, 6);
assert!(n.is_empty());
let n = NibbleSlice::new_offset(D, 3);
assert_eq!(n.len(), 3);
for i in 0..3 {
assert_eq!(n.at(i), i as u8 + 3);
}
}
#[test]
fn iterator() {
let n = NibbleSlice::new(D);
let mut nibbles: Vec<u8> = vec![];
nibbles.extend(n.iter());
assert_eq!(nibbles, (0u8..6).collect::<Vec<_>>())
}
#[test]
fn mid() {
let n = NibbleSlice::new(D);
let m = n.mid(2);
for i in 0..4 {
assert_eq!(m.at(i), i as u8 + 2);
}
let m = n.mid(3);
for i in 0..3 {
assert_eq!(m.at(i), i as u8 + 3);
}
}
#[test]
fn encoded() {
let n = NibbleSlice::new(D);
assert_eq!(n.encoded(false), &[0x00, 0x01, 0x23, 0x45]);
assert_eq!(n.encoded(true), &[0x20, 0x01, 0x23, 0x45]);
assert_eq!(n.mid(1).encoded(false), &[0x11, 0x23, 0x45]);
assert_eq!(n.mid(1).encoded(true), &[0x31, 0x23, 0x45]);
}
#[test]
fn from_encoded() {
let n = NibbleSlice::new(D);
assert_eq!((n, false), NibbleSlice::from_encoded(&[0x00, 0x01, 0x23, 0x45]));
assert_eq!((n, true), NibbleSlice::from_encoded(&[0x20, 0x01, 0x23, 0x45]));
assert_eq!((n.mid(1), false), NibbleSlice::from_encoded(&[0x11, 0x23, 0x45]));
assert_eq!((n.mid(1), true), NibbleSlice::from_encoded(&[0x31, 0x23, 0x45]));
}
#[test]
fn shared() {
let n = NibbleSlice::new(D);
let other = &[0x01u8, 0x23, 0x01, 0x23, 0x45, 0x67];
let m = NibbleSlice::new(other);
assert_eq!(n.common_prefix(&m), 4);
assert_eq!(m.common_prefix(&n), 4);
assert_eq!(n.mid(1).common_prefix(&m.mid(1)), 3);
assert_eq!(n.mid(1).common_prefix(&m.mid(2)), 0);
assert_eq!(n.common_prefix(&m.mid(4)), 6);
assert!(!n.starts_with(&m.mid(4)));
assert!(m.mid(4).starts_with(&n));
}
#[test]
fn compare() {
let other = &[0x01u8, 0x23, 0x01, 0x23, 0x45];
let n = NibbleSlice::new(D);
let m = NibbleSlice::new(other);
assert!(n != m);
assert!(n > m);
assert!(m < n);
assert!(n == m.mid(4));
assert!(n >= m.mid(4));
assert!(n <= m.mid(4));
}
}
|
at
|
identifier_name
|
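The `at` accessor above indexes half-bytes: even absolute positions take the high nibble of a byte, odd positions the low nibble. A minimal standalone sketch of the same indexing (the helper name `nibble_at` is hypothetical, and suffix handling is omitted):

```rust
/// Hypothetical helper mirroring `NibbleSlice::at`: return the `i`-th nibble
/// of `data`, viewed starting `offset` nibbles in. Even absolute positions
/// take the high half of a byte, odd positions the low half.
fn nibble_at(data: &[u8], offset: usize, i: usize) -> u8 {
    let pos = offset + i;
    if pos & 1 == 1 { data[pos / 2] & 0x0F } else { data[pos / 2] >> 4 }
}

fn main() {
    let d = [0x01u8, 0x23, 0x45];
    let nibbles: Vec<u8> = (0..6).map(|i| nibble_at(&d, 0, i)).collect();
    assert_eq!(nibbles, vec![0, 1, 2, 3, 4, 5]);
    // A nibble offset of 3 skips the first three half-bytes.
    assert_eq!(nibble_at(&d, 3, 0), 3);
}
```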
nibbleslice.rs
|
// Copyright 2015, 2016 Ethcore (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! Nibble-orientated view onto byte-slice, allowing nibble-precision offsets.
use std::cmp::*;
use std::fmt;
use bytes::*;
/// Nibble-orientated view onto byte-slice, allowing nibble-precision offsets.
///
/// This is an immutable struct. No operations actually change it.
///
/// # Example
/// ```rust
/// extern crate ethcore_util;
/// use ethcore_util::nibbleslice::*;
/// fn main() {
/// let d1 = &[0x01u8, 0x23, 0x45];
/// let d2 = &[0x34u8, 0x50, 0x12];
/// let d3 = &[0x00u8, 0x12];
/// let n1 = NibbleSlice::new(d1); // 0,1,2,3,4,5
/// let n2 = NibbleSlice::new(d2); // 3,4,5,0,1,2
/// let n3 = NibbleSlice::new_offset(d3, 1); // 0,1,2
/// assert!(n1 > n3); // 0,1,2,... > 0,1,2
/// assert!(n1 < n2); // 0,... < 3,...
/// assert!(n2.mid(3) == n3); // 0,1,2 == 0,1,2
/// assert!(n1.starts_with(&n3));
/// assert_eq!(n1.common_prefix(&n3), 3);
/// assert_eq!(n2.mid(3).common_prefix(&n1), 3);
/// }
/// ```
#[derive(Copy, Clone, Eq, Ord)]
pub struct NibbleSlice<'a> {
data: &'a [u8],
offset: usize,
data_encode_suffix: &'a [u8],
offset_encode_suffix: usize,
}
/// Iterator type for a nibble slice.
pub struct NibbleSliceIterator<'a> {
p: &'a NibbleSlice<'a>,
i: usize,
}
impl<'a> Iterator for NibbleSliceIterator<'a> {
type Item = u8;
fn next(&mut self) -> Option<u8> {
self.i += 1;
match self.i <= self.p.len() {
true => Some(self.p.at(self.i - 1)),
false => None,
}
}
|
where 'a: 'view,
{
/// Create a new nibble slice with the given byte-slice.
pub fn new(data: &'a [u8]) -> Self {
NibbleSlice::new_offset(data, 0)
}
/// Create a new nibble slice with the given byte-slice with a nibble offset.
pub fn new_offset(data: &'a [u8], offset: usize) -> Self {
NibbleSlice {
data: data,
offset: offset,
data_encode_suffix: &b""[..],
offset_encode_suffix: 0,
}
}
/// Create a composed nibble slice; one followed by the other.
pub fn new_composed(a: &'a NibbleSlice, b: &'a NibbleSlice) -> Self {
NibbleSlice {
data: a.data,
offset: a.offset,
data_encode_suffix: b.data,
offset_encode_suffix: b.offset,
}
}
// pub fn new_composed_bytes_offset(a: &NibbleSlice, b: &NibbleSlice) -> (Bytes, usize) {
// let r: Vec<u8>::with_capacity((a.len() + b.len() + 1) / 2);
// let mut i = (a.len() + b.len()) % 2;
// while i < a.len() {
// match i % 2 {
// 0 =>,
// 1 =>,
// }
// i += 1;
// }
// while i < a.len() + b.len() {
// i += 1;
// }
// (r, a.len() + b.len())
// }
/// Get an iterator for the series of nibbles.
pub fn iter(&'a self) -> NibbleSliceIterator<'a> {
NibbleSliceIterator { p: self, i: 0 }
}
/// Create a new nibble slice from the given HPE encoded data (e.g. output of `encoded()`).
pub fn from_encoded(data: &'a [u8]) -> (NibbleSlice, bool) {
(Self::new_offset(data, if data[0] & 16 == 16 { 1 } else { 2 }), data[0] & 32 == 32)
}
/// Is this an empty slice?
pub fn is_empty(&self) -> bool {
self.len() == 0
}
/// Get the length (in nibbles, naturally) of this slice.
pub fn len(&self) -> usize {
(self.data.len() + self.data_encode_suffix.len()) * 2 - self.offset - self.offset_encode_suffix
}
/// Get the nibble at position `i`.
pub fn at(&self, i: usize) -> u8 {
let l = self.data.len() * 2 - self.offset;
if i < l {
if (self.offset + i) & 1 == 1 { self.data[(self.offset + i) / 2] & 15u8 } else { self.data[(self.offset + i) / 2] >> 4 }
} else {
let i = i - l;
if (self.offset_encode_suffix + i) & 1 == 1 { self.data_encode_suffix[(self.offset_encode_suffix + i) / 2] & 15u8 } else { self.data_encode_suffix[(self.offset_encode_suffix + i) / 2] >> 4 }
}
}
/// Return an object which represents a view onto this slice (further) offset by `i` nibbles.
pub fn mid(&'view self, i: usize) -> NibbleSlice<'a> {
NibbleSlice {
data: self.data,
offset: self.offset + i,
data_encode_suffix: &b""[..],
offset_encode_suffix: 0,
}
}
/// Do we start with the same nibbles as the whole of `them`?
pub fn starts_with(&self, them: &Self) -> bool {
self.common_prefix(them) == them.len()
}
/// How many of the same nibbles at the beginning do we match with `them`?
pub fn common_prefix(&self, them: &Self) -> usize {
let s = min(self.len(), them.len());
let mut i = 0usize;
while i < s {
if self.at(i) != them.at(i) {
break;
}
i += 1;
}
i
}
/// Encode the whole nibble slice in prefixed hex notation, noting whether it `is_leaf`.
pub fn encoded(&self, is_leaf: bool) -> Bytes {
let l = self.len();
let mut r = Bytes::with_capacity(l / 2 + 1);
let mut i = l % 2;
r.push(if i == 1 { 0x10 + self.at(0) } else { 0 } + if is_leaf { 0x20 } else { 0 });
while i < l {
r.push(self.at(i) * 16 + self.at(i + 1));
i += 2;
}
r
}
/// Encode only the leftmost `n` nibbles of the nibble slice in prefixed hex notation,
/// noting whether it `is_leaf`.
pub fn encoded_leftmost(&self, n: usize, is_leaf: bool) -> Bytes {
let l = min(self.len(), n);
let mut r = Bytes::with_capacity(l / 2 + 1);
let mut i = l % 2;
r.push(if i == 1 { 0x10 + self.at(0) } else { 0 } + if is_leaf { 0x20 } else { 0 });
while i < l {
r.push(self.at(i) * 16 + self.at(i + 1));
i += 2;
}
r
}
}
impl<'a> PartialEq for NibbleSlice<'a> {
fn eq(&self, them: &Self) -> bool {
self.len() == them.len() && self.starts_with(them)
}
}
impl<'a> PartialOrd for NibbleSlice<'a> {
fn partial_cmp(&self, them: &Self) -> Option<Ordering> {
let s = min(self.len(), them.len());
let mut i = 0usize;
while i < s {
match self.at(i).partial_cmp(&them.at(i)).unwrap() {
Ordering::Less => return Some(Ordering::Less),
Ordering::Greater => return Some(Ordering::Greater),
_ => i += 1,
}
}
self.len().partial_cmp(&them.len())
}
}
impl<'a> fmt::Debug for NibbleSlice<'a> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
for i in 0..self.len() {
match i {
0 => try!(write!(f, "{:01x}", self.at(i))),
_ => try!(write!(f, "'{:01x}", self.at(i))),
}
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::NibbleSlice;
static D: &'static [u8; 3] = &[0x01u8, 0x23, 0x45];
#[test]
fn basics() {
let n = NibbleSlice::new(D);
assert_eq!(n.len(), 6);
assert!(!n.is_empty());
let n = NibbleSlice::new_offset(D, 6);
assert!(n.is_empty());
let n = NibbleSlice::new_offset(D, 3);
assert_eq!(n.len(), 3);
for i in 0..3 {
assert_eq!(n.at(i), i as u8 + 3);
}
}
#[test]
fn iterator() {
let n = NibbleSlice::new(D);
let mut nibbles: Vec<u8> = vec![];
nibbles.extend(n.iter());
assert_eq!(nibbles, (0u8..6).collect::<Vec<_>>())
}
#[test]
fn mid() {
let n = NibbleSlice::new(D);
let m = n.mid(2);
for i in 0..4 {
assert_eq!(m.at(i), i as u8 + 2);
}
let m = n.mid(3);
for i in 0..3 {
assert_eq!(m.at(i), i as u8 + 3);
}
}
#[test]
fn encoded() {
let n = NibbleSlice::new(D);
assert_eq!(n.encoded(false), &[0x00, 0x01, 0x23, 0x45]);
assert_eq!(n.encoded(true), &[0x20, 0x01, 0x23, 0x45]);
assert_eq!(n.mid(1).encoded(false), &[0x11, 0x23, 0x45]);
assert_eq!(n.mid(1).encoded(true), &[0x31, 0x23, 0x45]);
}
#[test]
fn from_encoded() {
let n = NibbleSlice::new(D);
assert_eq!((n, false), NibbleSlice::from_encoded(&[0x00, 0x01, 0x23, 0x45]));
assert_eq!((n, true), NibbleSlice::from_encoded(&[0x20, 0x01, 0x23, 0x45]));
assert_eq!((n.mid(1), false), NibbleSlice::from_encoded(&[0x11, 0x23, 0x45]));
assert_eq!((n.mid(1), true), NibbleSlice::from_encoded(&[0x31, 0x23, 0x45]));
}
#[test]
fn shared() {
let n = NibbleSlice::new(D);
let other = &[0x01u8, 0x23, 0x01, 0x23, 0x45, 0x67];
let m = NibbleSlice::new(other);
assert_eq!(n.common_prefix(&m), 4);
assert_eq!(m.common_prefix(&n), 4);
assert_eq!(n.mid(1).common_prefix(&m.mid(1)), 3);
assert_eq!(n.mid(1).common_prefix(&m.mid(2)), 0);
assert_eq!(n.common_prefix(&m.mid(4)), 6);
assert!(!n.starts_with(&m.mid(4)));
assert!(m.mid(4).starts_with(&n));
}
#[test]
fn compare() {
let other = &[0x01u8, 0x23, 0x01, 0x23, 0x45];
let n = NibbleSlice::new(D);
let m = NibbleSlice::new(other);
assert!(n != m);
assert!(n > m);
assert!(m < n);
assert!(n == m.mid(4));
assert!(n >= m.mid(4));
assert!(n <= m.mid(4));
}
}
|
}
impl<'a, 'view> NibbleSlice<'a>
|
random_line_split
|
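The `encoded`/`from_encoded` pair above implements hex-prefix encoding: in the first byte, the `0x20` bit marks a leaf and the `0x10` bit marks an odd nibble count, with the first nibble packed into that byte. A standalone sketch of the encoding half under those rules (the function name `hpe_encode` is hypothetical):

```rust
/// Hypothetical standalone version of `NibbleSlice::encoded`: pack a nibble
/// sequence into hex-prefix notation with a leading flag byte.
fn hpe_encode(nibbles: &[u8], is_leaf: bool) -> Vec<u8> {
    let l = nibbles.len();
    let mut r = Vec::with_capacity(l / 2 + 1);
    let mut i = l % 2;
    // Flag byte: 0x20 = leaf, 0x10 = odd nibble count (first nibble rides along).
    r.push(if i == 1 { 0x10 + nibbles[0] } else { 0 } + if is_leaf { 0x20 } else { 0 });
    while i < l {
        r.push(nibbles[i] * 16 + nibbles[i + 1]);
        i += 2;
    }
    r
}

fn main() {
    // Matches the `encoded` test vectors above.
    assert_eq!(hpe_encode(&[0, 1, 2, 3, 4, 5], false), vec![0x00, 0x01, 0x23, 0x45]);
    assert_eq!(hpe_encode(&[1, 2, 3, 4, 5], true), vec![0x31, 0x23, 0x45]);
}
```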
common.rs
|
// Copyright 2014 The html5ever Project Developers. See the
// COPYRIGHT file at the top-level directory of this distribution.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use tokenizer::Attribute;
|
pub use self::NodeEnum::{Document, Doctype, Text, Comment, Element};
/// The different kinds of nodes in the DOM.
#[derive(Show)]
pub enum NodeEnum {
/// The `Document` itself.
Document,
/// A `DOCTYPE` with name, public id, and system id.
Doctype(String, String, String),
/// A text node.
Text(String),
/// A comment.
Comment(String),
/// An element with attributes.
Element(QualName, Vec<Attribute>),
}
|
use collections::vec::Vec;
use collections::string::String;
use string_cache::QualName;
|
random_line_split
|
common.rs
|
// Copyright 2014 The html5ever Project Developers. See the
// COPYRIGHT file at the top-level directory of this distribution.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use tokenizer::Attribute;
use collections::vec::Vec;
use collections::string::String;
use string_cache::QualName;
pub use self::NodeEnum::{Document, Doctype, Text, Comment, Element};
/// The different kinds of nodes in the DOM.
#[derive(Show)]
pub enum
|
{
/// The `Document` itself.
Document,
/// A `DOCTYPE` with name, public id, and system id.
Doctype(String, String, String),
/// A text node.
Text(String),
/// A comment.
Comment(String),
/// An element with attributes.
Element(QualName, Vec<Attribute>),
}
|
NodeEnum
|
identifier_name
|
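Consumers of the `NodeEnum` above dispatch over its variants with `match`. A minimal sketch of that pattern, using a simplified enum (the `Node` type and `describe` function here are hypothetical stand-ins, not html5ever API):

```rust
// Simplified stand-in for html5ever's NodeEnum, dropping the
// string_cache/Attribute dependencies so the sketch is self-contained.
#[derive(Debug)]
enum Node {
    Document,
    Doctype(String, String, String),
    Text(String),
    Comment(String),
}

/// Render a short description of a node by matching on its variant.
fn describe(n: &Node) -> String {
    match n {
        Node::Document => "#document".to_string(),
        Node::Doctype(name, _, _) => format!("<!DOCTYPE {}>", name),
        Node::Text(t) => format!("#text: {}", t),
        Node::Comment(c) => format!("<!--{}-->", c),
    }
}

fn main() {
    assert_eq!(describe(&Node::Text("hi".to_string())), "#text: hi");
    assert_eq!(
        describe(&Node::Doctype("html".to_string(), String::new(), String::new())),
        "<!DOCTYPE html>"
    );
}
```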
node.rs
|
use std::convert::From;
use std::io::Write;
use super::serializer::Serializer;
use string_cache::QualName;
pub struct Attr<'a>(&'a QualName, &'a str);
impl<'a> From<(&'a QualName, &'a str)> for Attr<'a> {
fn
|
((key, value): (&'a QualName, &'a str)) -> Attr {
Attr(key, value)
}
}
impl<'a, 'b: 'a> From<&'a (&'b QualName, &'b str)> for Attr<'b> {
fn from(&(key, value): &'a (&'b QualName, &'b str)) -> Attr<'b> {
Attr(key, value)
}
}
pub struct NodeSerializer<'a, 'b: 'a, 'w: 'b, W: 'w + Write> {
name: Option<QualName>,
serializer: &'a mut Serializer<'b, 'w, W>,
}
impl<'a, 'b: 'a, 'c: 'b, 'd: 'c, 'w: 'd, W: Write> NodeSerializer<'c, 'd, 'w, W> {
pub fn text(&mut self, text: &str) {
self.serializer.write_text(text);
}
pub fn comment(&mut self, comment: &str) {
self.serializer.write_comment(comment);
}
pub fn element<'i, I, II>(&'a mut self,
name: QualName,
attrs: I)
-> NodeSerializer<'a, 'd, 'w, W>
where I: Iterator<Item = II>,
II: Into<Attr<'i>>
{
self.serializer.start_elem(name.clone(),
attrs.into_iter().map(|a| {
let a = a.into();
(a.0, a.1)
}));
NodeSerializer {
name: Some(name),
serializer: self.serializer,
}
}
}
impl<'a, 'b, 'w, W: Write> Drop for NodeSerializer<'a, 'b, 'w, W> {
fn drop(&mut self) {
match self.name {
Some(ref name) => self.serializer.end_elem(name.clone()),
None => (),
}
}
}
pub fn new_node_ser<'a, 'b: 'a, 'w: 'b, W>(s: &'a mut Serializer<'b, 'w, W>)
-> NodeSerializer<'a, 'b, 'w, W>
where W: 'w + Write
{
NodeSerializer {
name: None,
serializer: s,
}
}
|
from
|
identifier_name
|
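The `Attr` wrapper above exists so `element` can accept anything convertible into an attribute via an `Into<Attr>` bound, with `From` impls covering both tuples and references to tuples. A minimal sketch of the same conversion pattern over plain `&str` pairs (the `render` helper is hypothetical):

```rust
// Tuple struct wrapping a key/value pair, as in the serializer above,
// but over plain &str to keep the sketch self-contained.
struct Attr<'a>(&'a str, &'a str);

impl<'a> From<(&'a str, &'a str)> for Attr<'a> {
    fn from((key, value): (&'a str, &'a str)) -> Attr<'a> {
        Attr(key, value)
    }
}

/// Accept anything convertible into an Attr, like `element` does.
fn render<'a, I: Into<Attr<'a>>>(a: I) -> String {
    let Attr(k, v) = a.into();
    format!("{}={}", k, v)
}

fn main() {
    // Tuples convert implicitly through the `Into` bound.
    assert_eq!(render(("class", "wide")), "class=wide");
}
```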
node.rs
|
use std::convert::From;
use std::io::Write;
use super::serializer::Serializer;
use string_cache::QualName;
pub struct Attr<'a>(&'a QualName, &'a str);
impl<'a> From<(&'a QualName, &'a str)> for Attr<'a> {
fn from((key, value): (&'a QualName, &'a str)) -> Attr {
Attr(key, value)
}
}
impl<'a, 'b: 'a> From<&'a (&'b QualName, &'b str)> for Attr<'b> {
fn from(&(key, value): &'a (&'b QualName, &'b str)) -> Attr<'b> {
Attr(key, value)
}
}
pub struct NodeSerializer<'a, 'b: 'a, 'w: 'b, W: 'w + Write> {
name: Option<QualName>,
serializer: &'a mut Serializer<'b, 'w, W>,
}
impl<'a, 'b: 'a, 'c: 'b, 'd: 'c, 'w: 'd, W: Write> NodeSerializer<'c, 'd, 'w, W> {
pub fn text(&mut self, text: &str) {
self.serializer.write_text(text);
}
pub fn comment(&mut self, comment: &str) {
self.serializer.write_comment(comment);
}
pub fn element<'i, I, II>(&'a mut self,
name: QualName,
attrs: I)
|
attrs.into_iter().map(|a| {
let a = a.into();
(a.0, a.1)
}));
NodeSerializer {
name: Some(name),
serializer: self.serializer,
}
}
}
impl<'a, 'b, 'w, W: Write> Drop for NodeSerializer<'a, 'b, 'w, W> {
fn drop(&mut self) {
match self.name {
Some(ref name) => self.serializer.end_elem(name.clone()),
None => (),
}
}
}
pub fn new_node_ser<'a, 'b: 'a, 'w: 'b, W>(s: &'a mut Serializer<'b, 'w, W>)
-> NodeSerializer<'a, 'b, 'w, W>
where W: 'w + Write
{
NodeSerializer {
name: None,
serializer: s,
}
}
|
-> NodeSerializer<'a, 'd, 'w, W>
where I: Iterator<Item = II>,
II: Into<Attr<'i>>
{
self.serializer.start_elem(name.clone(),
|
random_line_split
|
node.rs
|
use std::convert::From;
use std::io::Write;
use super::serializer::Serializer;
use string_cache::QualName;
pub struct Attr<'a>(&'a QualName, &'a str);
impl<'a> From<(&'a QualName, &'a str)> for Attr<'a> {
fn from((key, value): (&'a QualName, &'a str)) -> Attr {
Attr(key, value)
}
}
impl<'a, 'b: 'a> From<&'a (&'b QualName, &'b str)> for Attr<'b> {
fn from(&(key, value): &'a (&'b QualName, &'b str)) -> Attr<'b> {
Attr(key, value)
}
}
pub struct NodeSerializer<'a, 'b: 'a, 'w: 'b, W: 'w + Write> {
name: Option<QualName>,
serializer: &'a mut Serializer<'b, 'w, W>,
}
impl<'a, 'b: 'a, 'c: 'b, 'd: 'c, 'w: 'd, W: Write> NodeSerializer<'c, 'd, 'w, W> {
pub fn text(&mut self, text: &str) {
self.serializer.write_text(text);
}
pub fn comment(&mut self, comment: &str) {
self.serializer.write_comment(comment);
}
pub fn element<'i, I, II>(&'a mut self,
name: QualName,
attrs: I)
-> NodeSerializer<'a, 'd, 'w, W>
where I: Iterator<Item = II>,
II: Into<Attr<'i>>
{
self.serializer.start_elem(name.clone(),
attrs.into_iter().map(|a| {
let a = a.into();
(a.0, a.1)
}));
NodeSerializer {
name: Some(name),
serializer: self.serializer,
}
}
}
impl<'a, 'b, 'w, W: Write> Drop for NodeSerializer<'a, 'b, 'w, W> {
fn drop(&mut self)
|
}
pub fn new_node_ser<'a, 'b: 'a, 'w: 'b, W>(s: &'a mut Serializer<'b, 'w, W>)
-> NodeSerializer<'a, 'b, 'w, W>
where W: 'w + Write
{
NodeSerializer {
name: None,
serializer: s,
}
}
|
{
match self.name {
Some(ref name) => self.serializer.end_elem(name.clone()),
None => (),
}
}
|
identifier_body
|
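The `Drop` impl above is an RAII pattern: `NodeSerializer` emits the element's close tag when the guard goes out of scope, so callers cannot forget it. A simplified standalone version of the same idea (the `ElemGuard` and `wrap` names are hypothetical):

```rust
// RAII guard that appends a close tag when dropped, mirroring
// NodeSerializer's Drop impl in a self-contained form.
struct ElemGuard<'a> {
    name: Option<String>,
    out: &'a mut String,
}

impl<'a> Drop for ElemGuard<'a> {
    fn drop(&mut self) {
        // Emit the matching close tag (if any) as the guard leaves scope.
        if let Some(ref name) = self.name {
            self.out.push_str(&format!("</{}>", name));
        }
    }
}

fn wrap(name: &str, body: &str) -> String {
    let mut out = format!("<{}>{}", name, body);
    {
        let _guard = ElemGuard { name: Some(name.to_string()), out: &mut out };
        // `_guard` is dropped at the end of this block, appending the close tag.
    }
    out
}

fn main() {
    assert_eq!(wrap("p", "hi"), "<p>hi</p>");
}
```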
projection-no-regions-closure.rs
|
// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// compile-flags:-Zborrowck=mir -Zverbose
|
#![allow(warnings)]
#![feature(rustc_attrs)]
trait Anything { }
impl<T> Anything for T { }
fn with_signature<'a, T, F>(x: Box<T>, op: F) -> Box<dyn Anything + 'a>
where F: FnOnce(Box<T>) -> Box<dyn Anything + 'a>
{
op(x)
}
#[rustc_regions]
fn no_region<'a, T>(x: Box<T>) -> Box<dyn Anything + 'a>
where
T: Iterator,
{
with_signature(x, |mut y| Box::new(y.next()))
//~^ ERROR the associated type `<T as std::iter::Iterator>::Item` may not live long enough
}
#[rustc_regions]
fn correct_region<'a, T>(x: Box<T>) -> Box<dyn Anything + 'a>
where
T: 'a + Iterator,
{
with_signature(x, |mut y| Box::new(y.next()))
}
#[rustc_regions]
fn wrong_region<'a, 'b, T>(x: Box<T>) -> Box<dyn Anything + 'a>
where
T: 'b + Iterator,
{
with_signature(x, |mut y| Box::new(y.next()))
//~^ ERROR the associated type `<T as std::iter::Iterator>::Item` may not live long enough
}
#[rustc_regions]
fn outlives_region<'a, 'b, T>(x: Box<T>) -> Box<dyn Anything + 'a>
where
T: 'b + Iterator,
'b: 'a,
{
with_signature(x, |mut y| Box::new(y.next()))
}
fn main() {}
|
// Tests closures that propagate an outlives relationship to their
// creator where the subject is a projection with no regions (`<T as
// Iterator>::Item`, to be exact).
|
random_line_split
|
projection-no-regions-closure.rs
|
// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// compile-flags:-Zborrowck=mir -Zverbose
// Tests closures that propagate an outlives relationship to their
// creator where the subject is a projection with no regions (`<T as
// Iterator>::Item`, to be exact).
#![allow(warnings)]
#![feature(rustc_attrs)]
trait Anything { }
impl<T> Anything for T { }
fn
|
<'a, T, F>(x: Box<T>, op: F) -> Box<dyn Anything + 'a>
where F: FnOnce(Box<T>) -> Box<dyn Anything + 'a>
{
op(x)
}
#[rustc_regions]
fn no_region<'a, T>(x: Box<T>) -> Box<dyn Anything + 'a>
where
T: Iterator,
{
with_signature(x, |mut y| Box::new(y.next()))
//~^ ERROR the associated type `<T as std::iter::Iterator>::Item` may not live long enough
}
#[rustc_regions]
fn correct_region<'a, T>(x: Box<T>) -> Box<dyn Anything + 'a>
where
T: 'a + Iterator,
{
with_signature(x, |mut y| Box::new(y.next()))
}
#[rustc_regions]
fn wrong_region<'a, 'b, T>(x: Box<T>) -> Box<dyn Anything + 'a>
where
T: 'b + Iterator,
{
with_signature(x, |mut y| Box::new(y.next()))
//~^ ERROR the associated type `<T as std::iter::Iterator>::Item` may not live long enough
}
#[rustc_regions]
fn outlives_region<'a, 'b, T>(x: Box<T>) -> Box<dyn Anything + 'a>
where
T: 'b + Iterator,
'b: 'a,
{
with_signature(x, |mut y| Box::new(y.next()))
}
fn main() {}
|
with_signature
|
identifier_name
|
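The test file above checks that boxing `<T as Iterator>::Item` behind `Box<dyn Anything + 'a>` requires `T: 'a`, since a type outliving `'a` implies its associated types do too. A minimal compiling sketch of the sound case, as in `correct_region` (the `boxed_next` name is hypothetical):

```rust
trait Anything {}
impl<T> Anything for T {}

// With `T: 'a`, the iterator's `Item` values are guaranteed to outlive `'a`,
// so boxing the result of `next()` behind `dyn Anything + 'a` is sound.
// Dropping the `'a` bound reproduces the "may not live long enough" error.
fn boxed_next<'a, T>(mut x: Box<T>) -> Box<dyn Anything + 'a>
where
    T: 'a + Iterator,
{
    Box::new(x.next())
}

fn main() {
    let it = vec![1u32, 2, 3].into_iter();
    let _boxed = boxed_next(Box::new(it));
}
```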
projection-no-regions-closure.rs
|
// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// compile-flags:-Zborrowck=mir -Zverbose
// Tests closures that propagate an outlives relationship to their
// creator where the subject is a projection with no regions (`<T as
// Iterator>::Item`, to be exact).
#![allow(warnings)]
#![feature(rustc_attrs)]
trait Anything { }
impl<T> Anything for T { }
fn with_signature<'a, T, F>(x: Box<T>, op: F) -> Box<dyn Anything + 'a>
where F: FnOnce(Box<T>) -> Box<dyn Anything + 'a>
{
op(x)
}
#[rustc_regions]
fn no_region<'a, T>(x: Box<T>) -> Box<dyn Anything + 'a>
where
T: Iterator,
{
with_signature(x, |mut y| Box::new(y.next()))
//~^ ERROR the associated type `<T as std::iter::Iterator>::Item` may not live long enough
}
#[rustc_regions]
fn correct_region<'a, T>(x: Box<T>) -> Box<dyn Anything + 'a>
where
T: 'a + Iterator,
{
with_signature(x, |mut y| Box::new(y.next()))
}
#[rustc_regions]
fn wrong_region<'a, 'b, T>(x: Box<T>) -> Box<dyn Anything + 'a>
where
T: 'b + Iterator,
|
#[rustc_regions]
fn outlives_region<'a, 'b, T>(x: Box<T>) -> Box<dyn Anything + 'a>
where
T: 'b + Iterator,
'b: 'a,
{
with_signature(x, |mut y| Box::new(y.next()))
}
fn main() {}
|
{
with_signature(x, |mut y| Box::new(y.next()))
//~^ ERROR the associated type `<T as std::iter::Iterator>::Item` may not live long enough
}
|
identifier_body
|
unwind.rs
|
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Stack unwinding
// Implementation of Rust stack unwinding
//
// For background on exception handling and stack unwinding please see
// "Exception Handling in LLVM" (llvm.org/docs/ExceptionHandling.html) and
// documents linked from it.
// These are also good reads:
// http://theofilos.cs.columbia.edu/blog/2013/09/22/base_abi/
// http://monoinfinito.wordpress.com/series/exception-handling-in-c/
// http://www.airs.com/blog/index.php?s=exception+frames
//
// ~~~ A brief summary ~~~
// Exception handling happens in two phases: a search phase and a cleanup phase.
//
// In both phases the unwinder walks stack frames from top to bottom using
// information from the stack frame unwind sections of the current process's
// modules ("module" here refers to an OS module, i.e. an executable or a
// dynamic library).
//
// For each stack frame, it invokes the associated "personality routine", whose
// address is also stored in the unwind info section.
//
// In the search phase, the job of a personality routine is to examine the
// exception object being thrown, and to decide whether it should be caught at
// that stack frame. Once the handler frame has been identified, the cleanup
// phase begins.
//
// In the cleanup phase, personality routines invoke cleanup code associated
// with their stack frames (i.e. destructors). Once the stack has been unwound
// down to the handler frame level, unwinding stops and the last personality
// routine transfers control to its catch block.
//
// ~~~ Frame unwind info registration ~~~
// Each module has its own frame unwind info section (usually ".eh_frame"), and
// the unwinder needs to know about all of them in order for unwinding to be
// able to cross module boundaries.
//
// On some platforms, like Linux, this is achieved by dynamically enumerating
// currently loaded modules via the dl_iterate_phdr() API and finding all
// .eh_frame sections.
//
// Others, like Windows, require modules to actively register their unwind info
// sections by calling __register_frame_info() API at startup. In the latter
// case it is essential that there is only one copy of the unwinder runtime in
// the process. This is usually achieved by linking to the dynamic version of
// the unwind runtime.
//
// Currently Rust uses unwind runtime provided by libgcc.
use any::{Any, AnyRefExt};
use fmt;
use intrinsics;
use kinds::Send;
use mem;
use option::{Some, None, Option};
use owned::Box;
use prelude::drop;
use ptr::RawPtr;
use result::{Err, Ok};
use rt::backtrace;
use rt::local::Local;
use rt::task::Task;
use str::Str;
use string::String;
use task::TaskResult;
use uw = rt::libunwind;
pub struct Unwinder {
unwinding: bool,
cause: Option<Box<Any:Send>>
}
impl Unwinder {
pub fn new() -> Unwinder {
Unwinder {
unwinding: false,
cause: None,
}
}
pub fn unwinding(&self) -> bool {
self.unwinding
}
pub fn try(&mut self, f: ||) {
use raw::Closure;
use libc::{c_void};
unsafe {
let closure: Closure = mem::transmute(f);
let ep = rust_try(try_fn, closure.code as *c_void,
closure.env as *c_void);
if !ep.is_null() {
rtdebug!("caught {}", (*ep).exception_class);
uw::_Unwind_DeleteException(ep);
}
}
extern fn try_fn(code: *c_void, env: *c_void) {
unsafe {
let closure: || = mem::transmute(Closure {
code: code as *(),
env: env as *(),
});
closure();
}
}
extern {
// Rust's try-catch
// When f(...) returns normally, the return value is null.
// When f(...) throws, the return value is a pointer to the caught
// exception object.
fn rust_try(f: extern "C" fn(*c_void, *c_void),
code: *c_void,
data: *c_void) -> *uw::_Unwind_Exception;
}
}
pub fn begin_unwind(&mut self, cause: Box<Any:Send>) -> ! {
rtdebug!("begin_unwind()");
self.unwinding = true;
self.cause = Some(cause);
rust_fail();
// An uninlined, unmangled function upon which to slap yer breakpoints
#[inline(never)]
#[no_mangle]
fn rust_fail() -> ! {
unsafe {
let exception = box uw::_Unwind_Exception {
exception_class: rust_exception_class(),
exception_cleanup: exception_cleanup,
private: [0, ..uw::unwinder_private_data_size],
};
let error = uw::_Unwind_RaiseException(mem::transmute(exception));
rtabort!("Could not unwind stack, error = {}", error as int)
}
extern "C" fn exception_cleanup(_unwind_code: uw::_Unwind_Reason_Code,
exception: *uw::_Unwind_Exception) {
rtdebug!("exception_cleanup()");
unsafe {
let _: Box<uw::_Unwind_Exception> =
mem::transmute(exception);
}
}
}
}
pub fn result(&mut self) -> TaskResult {
if self.unwinding {
Err(self.cause.take().unwrap())
} else {
Ok(())
}
}
}
// Rust's exception class identifier. This is used by personality routines to
// determine whether the exception was thrown by their own runtime.
fn rust_exception_class() -> uw::_Unwind_Exception_Class {
// M O Z \0 R U S T -- vendor, language
0x4d4f5a_00_52555354
}
// We could implement our personality routine in pure Rust, however exception
// info decoding is tedious. More importantly, personality routines have to
// handle various platform quirks, which are not fun to maintain. For this
// reason, we attempt to reuse personality routine of the C language:
// __gcc_personality_v0.
//
// Since C does not support exception catching, __gcc_personality_v0 simply
// always returns _URC_CONTINUE_UNWIND in search phase, and always returns
// _URC_INSTALL_CONTEXT (i.e. "invoke cleanup code") in cleanup phase.
//
// This is pretty close to Rust's exception handling approach, except that Rust
// does have a single "catch-all" handler at the bottom of each task's stack.
// So we have two versions:
// - rust_eh_personality, used by all cleanup landing pads, which never catches,
// so the behavior of __gcc_personality_v0 is perfectly adequate there, and
// - rust_eh_personality_catch, used only by rust_try(), which always catches.
// This is achieved by overriding the return value in search phase to always
// say "catch!".
#[cfg(not(target_arch = "arm"), not(test))]
#[doc(hidden)]
#[allow(visible_private_types)]
pub mod eabi {
use uw = rt::libunwind;
use libc::c_int;
extern "C" {
fn __gcc_personality_v0(version: c_int,
actions: uw::_Unwind_Action,
exception_class: uw::_Unwind_Exception_Class,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context)
-> uw::_Unwind_Reason_Code;
}
#[lang="eh_personality"]
extern fn eh_personality(
version: c_int,
actions: uw::_Unwind_Action,
exception_class: uw::_Unwind_Exception_Class,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context
) -> uw::_Unwind_Reason_Code
{
unsafe {
__gcc_personality_v0(version, actions, exception_class, ue_header,
context)
}
}
#[no_mangle] // referenced from rust_try.ll
pub extern "C" fn rust_eh_personality_catch(
version: c_int,
actions: uw::_Unwind_Action,
exception_class: uw::_Unwind_Exception_Class,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context
) -> uw::_Unwind_Reason_Code
{
if (actions as c_int & uw::_UA_SEARCH_PHASE as c_int) != 0 { // search phase
uw::_URC_HANDLER_FOUND // catch!
}
else { // cleanup phase
unsafe {
__gcc_personality_v0(version, actions, exception_class, ue_header,
context)
}
}
}
}
// ARM EHABI uses a slightly different personality routine signature,
// but otherwise works the same.
#[cfg(target_arch = "arm", not(test))]
#[allow(visible_private_types)]
pub mod eabi {
use uw = rt::libunwind;
use libc::c_int;
extern "C" {
fn __gcc_personality_v0(state: uw::_Unwind_State,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context)
-> uw::_Unwind_Reason_Code;
}
#[lang="eh_personality"]
extern "C" fn eh_personality(
state: uw::_Unwind_State,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context
) -> uw::_Unwind_Reason_Code
{
unsafe {
__gcc_personality_v0(state, ue_header, context)
}
}
#[no_mangle] // referenced from rust_try.ll
pub extern "C" fn rust_eh_personality_catch(
state: uw::_Unwind_State,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context
) -> uw::_Unwind_Reason_Code
{
if (state as c_int & uw::_US_ACTION_MASK as c_int)
== uw::_US_VIRTUAL_UNWIND_FRAME as c_int { // search phase
uw::_URC_HANDLER_FOUND // catch!
}
else { // cleanup phase
unsafe {
__gcc_personality_v0(state, ue_header, context)
}
}
}
}
// Entry point of failure from the libcore crate
#[cfg(not(test))]
#[lang = "begin_unwind"]
pub extern fn rust_begin_unwind(msg: &fmt::Arguments,
file: &'static str, line: uint) -> ! {
begin_unwind_fmt(msg, file, line)
}
/// The entry point for unwinding with a formatted message.
///
/// This is designed to reduce the amount of code required at the call
/// site as much as possible (so that `fail!()` has as low an impact
/// on (e.g.) the inlining of other functions as possible), by moving
/// the actual formatting into this shared place.
#[inline(never)] #[cold]
pub fn begin_unwind_fmt(msg: &fmt::Arguments, file: &'static str,
line: uint) -> ! {
// We do two allocations here, unfortunately. But (a) they're
// required with the current scheme, and (b) we don't handle
// failure + OOM properly anyway (see comment in begin_unwind
// below).
begin_unwind_inner(box fmt::format(msg), file, line)
}
/// This is the entry point of unwinding for fail!() and assert!().
#[inline(never)] #[cold] // avoid code bloat at the call sites as much as possible
pub fn begin_unwind<M: Any + Send>(msg: M, file: &'static str, line: uint) -> ! {
// Note that this should be the only allocation performed in this code path.
// Currently this means that fail!() on OOM will invoke this code path,
// but then again we're not really ready for failing on OOM anyway. If
// we do start doing this, then we should propagate this allocation to
// be performed in the parent of this task instead of the task that's
// failing.
// see below for why we do the `Any` coercion here.
begin_unwind_inner(box msg, file, line)
}
/// The core of the unwinding.
///
/// This is non-generic to avoid instantiation bloat in other crates
/// (which makes compilation of small crates noticeably slower). (Note:
/// we need the `Any` object anyway, we're not just creating it to
/// avoid being generic.)
///
/// Doing this split took the LLVM IR line counts of `fn main() { fail!()
/// }` from ~1900/3700 (-O/no opts) to 180/590.
#[inline(never)] #[cold] // this is the slow path, please never inline this
fn begin_unwind_inner(msg: Box<Any:Send>,
file: &'static str,
line: uint) -> ! {
let mut task;
{
let msg_s = match msg.as_ref::<&'static str>() {
Some(s) => *s,
None => match msg.as_ref::<String>() {
Some(s) => s.as_slice(),
None => "Box<Any>",
}
};
// It is assumed that all reasonable rust code will have a local task at
// all times. This means that this `try_take` will succeed almost all of
// the time. There are border cases, however, when the runtime has
// *almost* set up the local task, but hasn't quite gotten there yet. In
// order to get some better diagnostics, we print on failure and
// immediately abort the whole process if there is no local task
// available.
let opt_task: Option<Box<Task>> = Local::try_take();
task = match opt_task {
Some(t) => t,
None => {
rterrln!("failed at '{}', {}:{}", msg_s, file, line);
if backtrace::log_enabled() {
let mut err = ::rt::util::Stderr;
let _err = backtrace::write(&mut err);
} else {
rterrln!("run with `RUST_BACKTRACE=1` to see a backtrace");
}
unsafe { intrinsics::abort() }
}
};
// See comments in io::stdio::with_task_stdout as to why we have to be
// careful when using an arbitrary I/O handle from the task. We
// essentially need to dance to make sure the task is in TLS when
// running user code.
let name = task.name.take();
{
let n = name.as_ref().map(|n| n.as_slice()).unwrap_or("<unnamed>");
match task.stderr.take() {
Some(mut stderr) => {
Local::put(task);
// FIXME: what to do when the task printing fails?
let _err = write!(stderr,
"task '{}' failed at '{}', {}:{}\n",
n, msg_s, file, line);
if backtrace::log_enabled() {
let _err = backtrace::write(stderr);
}
task = Local::take();
match mem::replace(&mut task.stderr, Some(stderr)) {
Some(prev) => {
Local::put(task);
drop(prev);
task = Local::take();
}
None => {}
}
}
None => {
rterrln!("task '{}' failed at '{}', {}:{}", n, msg_s,
file, line);
if backtrace::log_enabled() {
let mut err = ::rt::util::Stderr;
let _err = backtrace::write(&mut err);
}
}
}
}
task.name = name;
if task.unwinder.unwinding {
// If a task fails while it's already unwinding then we
// have limited options. Currently our preference is to
// just abort. In the future we may consider resuming
// unwinding or otherwise exiting the task cleanly.
rterrln!("task failed during unwinding (double-failure - total drag!)");
rterrln!("rust must abort now. so sorry.");
// Don't print the backtrace twice (it would have already been
// printed if logging was enabled).
if !backtrace::log_enabled() {
let mut err = ::rt::util::Stderr;
let _err = backtrace::write(&mut err);
}
unsafe { intrinsics::abort() }
}
}
// The unwinder won't actually use the task at all, so we put the task back
// into TLS right before we invoke the unwinder, but this means we need an
// unsafe reference back to the unwinder once it's in TLS.
Local::put(task);
unsafe {
let task: *mut Task = Local::unsafe_borrow();
(*task).unwinder.begin_unwind(msg);
}
}
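The `begin_unwind`/`begin_unwind_inner` split above — a tiny generic shim that boxes its argument and forwards to a single non-generic inner function, so each call site instantiates almost nothing — still works in post-1.0 Rust. A minimal sketch, using the modern `downcast_ref` in place of `as_ref::<T>()`; `describe` and `describe_inner` are illustrative names, not part of this runtime:

```rust
use std::any::Any;

// Non-generic inner: compiled exactly once and shared by every
// instantiation of the generic shim (mirrors begin_unwind_inner).
fn describe_inner(msg: Box<dyn Any + Send>) -> String {
    // Same downcast dance as begin_unwind_inner's msg_s match.
    if let Some(s) = msg.downcast_ref::<&'static str>() {
        (*s).to_string()
    } else if let Some(s) = msg.downcast_ref::<String>() {
        s.clone()
    } else {
        "Box<Any>".to_string()
    }
}

// Generic shim: boxing the message is the only work done here, and
// #[inline(never)] + #[cold] keep the call sites small (mirrors
// begin_unwind).
#[inline(never)]
#[cold]
fn describe<M: Any + Send>(msg: M) -> String {
    describe_inner(Box::new(msg))
}
```

Only the shim is monomorphized per message type; the downcasting body exists once, which is exactly the LLVM IR reduction the doc comment measures.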
// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Stack unwinding
// Implementation of Rust stack unwinding
//
// For background on exception handling and stack unwinding please see
// "Exception Handling in LLVM" (llvm.org/docs/ExceptionHandling.html) and
// documents linked from it.
// These are also good reads:
// http://theofilos.cs.columbia.edu/blog/2013/09/22/base_abi/
// http://monoinfinito.wordpress.com/series/exception-handling-in-c/
// http://www.airs.com/blog/index.php?s=exception+frames
//
// ~~~ A brief summary ~~~
// Exception handling happens in two phases: a search phase and a cleanup phase.
//
// In both phases the unwinder walks stack frames from top to bottom using
// information from the stack frame unwind sections of the current process's
// modules ("module" here refers to an OS module, i.e. an executable or a
// dynamic library).
//
// For each stack frame, it invokes the associated "personality routine", whose
// address is also stored in the unwind info section.
//
// In the search phase, the job of a personality routine is to examine exception
// object being thrown, and to decide whether it should be caught at that stack
// frame. Once the handler frame has been identified, cleanup phase begins.
//
// In the cleanup phase, personality routines invoke cleanup code associated
// with their stack frames (i.e. destructors). Once stack has been unwound down
// to the handler frame level, unwinding stops and the last personality routine
// transfers control to its' catch block.
//
// ~~~ Frame unwind info registration ~~~
// Each module has its' own frame unwind info section (usually ".eh_frame"), and
// unwinder needs to know about all of them in order for unwinding to be able to
// cross module boundaries.
//
// On some platforms, like Linux, this is achieved by dynamically enumerating
// currently loaded modules via the dl_iterate_phdr() API and finding all
//.eh_frame sections.
//
// Others, like Windows, require modules to actively register their unwind info
// sections by calling __register_frame_info() API at startup. In the latter
// case it is essential that there is only one copy of the unwinder runtime in
// the process. This is usually achieved by linking to the dynamic version of
// the unwind runtime.
//
// Currently Rust uses unwind runtime provided by libgcc.
use any::{Any, AnyRefExt};
use fmt;
use intrinsics;
use kinds::Send;
use mem;
use option::{Some, None, Option};
use owned::Box;
use prelude::drop;
use ptr::RawPtr;
use result::{Err, Ok};
use rt::backtrace;
use rt::local::Local;
use rt::task::Task;
use str::Str;
use string::String;
use task::TaskResult;
use uw = rt::libunwind;
pub struct Unwinder {
unwinding: bool,
cause: Option<Box<Any:Send>>
}
impl Unwinder {
pub fn new() -> Unwinder {
Unwinder {
unwinding: false,
cause: None,
}
}
pub fn unwinding(&self) -> bool {
self.unwinding
}
pub fn try(&mut self, f: ||)
|
closure();
}
}
extern {
// Rust's try-catch
// When f(...) returns normally, the return value is null.
// When f(...) throws, the return value is a pointer to the caught
// exception object.
fn rust_try(f: extern "C" fn(*c_void, *c_void),
code: *c_void,
data: *c_void) -> *uw::_Unwind_Exception;
}
}
pub fn begin_unwind(&mut self, cause: Box<Any:Send>) ->! {
rtdebug!("begin_unwind()");
self.unwinding = true;
self.cause = Some(cause);
rust_fail();
// An uninlined, unmangled function upon which to slap yer breakpoints
#[inline(never)]
#[no_mangle]
fn rust_fail() ->! {
unsafe {
let exception = box uw::_Unwind_Exception {
exception_class: rust_exception_class(),
exception_cleanup: exception_cleanup,
private: [0,..uw::unwinder_private_data_size],
};
let error = uw::_Unwind_RaiseException(mem::transmute(exception));
rtabort!("Could not unwind stack, error = {}", error as int)
}
extern "C" fn exception_cleanup(_unwind_code: uw::_Unwind_Reason_Code,
exception: *uw::_Unwind_Exception) {
rtdebug!("exception_cleanup()");
unsafe {
let _: Box<uw::_Unwind_Exception> =
mem::transmute(exception);
}
}
}
}
pub fn result(&mut self) -> TaskResult {
if self.unwinding {
Err(self.cause.take().unwrap())
} else {
Ok(())
}
}
}
// Rust's exception class identifier. This is used by personality routines to
// determine whether the exception was thrown by their own runtime.
fn rust_exception_class() -> uw::_Unwind_Exception_Class {
// M O Z \0 R U S T -- vendor, language
0x4d4f5a_00_52555354
}
// We could implement our personality routine in pure Rust, however exception
// info decoding is tedious. More importantly, personality routines have to
// handle various platform quirks, which are not fun to maintain. For this
// reason, we attempt to reuse personality routine of the C language:
// __gcc_personality_v0.
//
// Since C does not support exception catching, __gcc_personality_v0 simply
// always returns _URC_CONTINUE_UNWIND in search phase, and always returns
// _URC_INSTALL_CONTEXT (i.e. "invoke cleanup code") in cleanup phase.
//
// This is pretty close to Rust's exception handling approach, except that Rust
// does have a single "catch-all" handler at the bottom of each task's stack.
// So we have two versions:
// - rust_eh_personality, used by all cleanup landing pads, which never catches,
// so the behavior of __gcc_personality_v0 is perfectly adequate there, and
// - rust_eh_personality_catch, used only by rust_try(), which always catches.
// This is achieved by overriding the return value in search phase to always
// say "catch!".
#[cfg(not(target_arch = "arm"), not(test))]
#[doc(hidden)]
#[allow(visible_private_types)]
pub mod eabi {
use uw = rt::libunwind;
use libc::c_int;
extern "C" {
fn __gcc_personality_v0(version: c_int,
actions: uw::_Unwind_Action,
exception_class: uw::_Unwind_Exception_Class,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context)
-> uw::_Unwind_Reason_Code;
}
#[lang="eh_personality"]
extern fn eh_personality(
version: c_int,
actions: uw::_Unwind_Action,
exception_class: uw::_Unwind_Exception_Class,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context
) -> uw::_Unwind_Reason_Code
{
unsafe {
__gcc_personality_v0(version, actions, exception_class, ue_header,
context)
}
}
#[no_mangle] // referenced from rust_try.ll
pub extern "C" fn rust_eh_personality_catch(
version: c_int,
actions: uw::_Unwind_Action,
exception_class: uw::_Unwind_Exception_Class,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context
) -> uw::_Unwind_Reason_Code
{
if (actions as c_int & uw::_UA_SEARCH_PHASE as c_int)!= 0 { // search phase
uw::_URC_HANDLER_FOUND // catch!
}
else { // cleanup phase
unsafe {
__gcc_personality_v0(version, actions, exception_class, ue_header,
context)
}
}
}
}
// ARM EHABI uses a slightly different personality routine signature,
// but otherwise works the same.
#[cfg(target_arch = "arm", not(test))]
#[allow(visible_private_types)]
pub mod eabi {
use uw = rt::libunwind;
use libc::c_int;
extern "C" {
fn __gcc_personality_v0(state: uw::_Unwind_State,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context)
-> uw::_Unwind_Reason_Code;
}
#[lang="eh_personality"]
extern "C" fn eh_personality(
state: uw::_Unwind_State,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context
) -> uw::_Unwind_Reason_Code
{
unsafe {
__gcc_personality_v0(state, ue_header, context)
}
}
#[no_mangle] // referenced from rust_try.ll
pub extern "C" fn rust_eh_personality_catch(
state: uw::_Unwind_State,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context
) -> uw::_Unwind_Reason_Code
{
if (state as c_int & uw::_US_ACTION_MASK as c_int)
== uw::_US_VIRTUAL_UNWIND_FRAME as c_int { // search phase
uw::_URC_HANDLER_FOUND // catch!
}
else { // cleanup phase
unsafe {
__gcc_personality_v0(state, ue_header, context)
}
}
}
}
// Entry point of failure from the libcore crate
#[cfg(not(test))]
#[lang = "begin_unwind"]
pub extern fn rust_begin_unwind(msg: &fmt::Arguments,
file: &'static str, line: uint) ->! {
begin_unwind_fmt(msg, file, line)
}
/// The entry point for unwinding with a formatted message.
///
/// This is designed to reduce the amount of code required at the call
/// site as much as possible (so that `fail!()` has as low an impact
/// on (e.g.) the inlining of other functions as possible), by moving
/// the actual formatting into this shared place.
#[inline(never)] #[cold]
pub fn begin_unwind_fmt(msg: &fmt::Arguments, file: &'static str,
line: uint) ->! {
// We do two allocations here, unfortunately. But (a) they're
// required with the current scheme, and (b) we don't handle
// failure + OOM properly anyway (see comment in begin_unwind
// below).
begin_unwind_inner(box fmt::format(msg), file, line)
}
/// This is the entry point of unwinding for fail!() and assert!().
#[inline(never)] #[cold] // avoid code bloat at the call sites as much as possible
pub fn begin_unwind<M: Any + Send>(msg: M, file: &'static str, line: uint) ->! {
// Note that this should be the only allocation performed in this code path.
// Currently this means that fail!() on OOM will invoke this code path,
// but then again we're not really ready for failing on OOM anyway. If
// we do start doing this, then we should propagate this allocation to
// be performed in the parent of this task instead of the task that's
// failing.
// see below for why we do the `Any` coercion here.
begin_unwind_inner(box msg, file, line)
}
/// The core of the unwinding.
///
/// This is non-generic to avoid instantiation bloat in other crates
/// (which makes compilation of small crates noticably slower). (Note:
/// we need the `Any` object anyway, we're not just creating it to
/// avoid being generic.)
///
/// Do this split took the LLVM IR line counts of `fn main() { fail!()
/// }` from ~1900/3700 (-O/no opts) to 180/590.
#[inline(never)] #[cold] // this is the slow path, please never inline this
fn begin_unwind_inner(msg: Box<Any:Send>,
file: &'static str,
line: uint) ->! {
let mut task;
{
let msg_s = match msg.as_ref::<&'static str>() {
Some(s) => *s,
None => match msg.as_ref::<String>() {
Some(s) => s.as_slice(),
None => "Box<Any>",
}
};
// It is assumed that all reasonable rust code will have a local task at
// all times. This means that this `try_take` will succeed almost all of
// the time. There are border cases, however, when the runtime has
// *almost* set up the local task, but hasn't quite gotten there yet. In
// order to get some better diagnostics, we print on failure and
// immediately abort the whole process if there is no local task
// available.
let opt_task: Option<Box<Task>> = Local::try_take();
task = match opt_task {
Some(t) => t,
None => {
rterrln!("failed at '{}', {}:{}", msg_s, file, line);
if backtrace::log_enabled() {
let mut err = ::rt::util::Stderr;
let _err = backtrace::write(&mut err);
} else {
rterrln!("run with `RUST_BACKTRACE=1` to see a backtrace");
}
unsafe { intrinsics::abort() }
}
};
// See comments in io::stdio::with_task_stdout as to why we have to be
// careful when using an arbitrary I/O handle from the task. We
// essentially need to dance to make sure when a task is in TLS when
// running user code.
let name = task.name.take();
{
let n = name.as_ref().map(|n| n.as_slice()).unwrap_or("<unnamed>");
match task.stderr.take() {
Some(mut stderr) => {
Local::put(task);
// FIXME: what to do when the task printing fails?
let _err = write!(stderr,
"task '{}' failed at '{}', {}:{}\n",
n, msg_s, file, line);
if backtrace::log_enabled() {
let _err = backtrace::write(stderr);
}
task = Local::take();
match mem::replace(&mut task.stderr, Some(stderr)) {
Some(prev) => {
Local::put(task);
drop(prev);
task = Local::take();
}
None => {}
}
}
None => {
rterrln!("task '{}' failed at '{}', {}:{}", n, msg_s,
file, line);
if backtrace::log_enabled() {
let mut err = ::rt::util::Stderr;
let _err = backtrace::write(&mut err);
}
}
}
}
task.name = name;
if task.unwinder.unwinding {
// If a task fails while it's already unwinding then we
// have limited options. Currently our preference is to
// just abort. In the future we may consider resuming
// unwinding or otherwise exiting the task cleanly.
rterrln!("task failed during unwinding (double-failure - total drag!)");
rterrln!("rust must abort now. so sorry.");
// Don't print the backtrace twice (it would have already been
// printed if logging was enabled).
if !backtrace::log_enabled() {
let mut err = ::rt::util::Stderr;
let _err = backtrace::write(&mut err);
}
unsafe { intrinsics::abort() }
}
}
// The unwinder won't actually use the task at all, so we put the task back
// into TLS right before we invoke the unwinder, but this means we need an
// unsafe reference back to the unwinder once it's in TLS.
Local::put(task);
unsafe {
let task: *mut Task = Local::unsafe_borrow();
(*task).unwinder.begin_unwind(msg);
}
}
|
{
use raw::Closure;
use libc::{c_void};
unsafe {
let closure: Closure = mem::transmute(f);
let ep = rust_try(try_fn, closure.code as *c_void,
closure.env as *c_void);
if !ep.is_null() {
rtdebug!("caught {}", (*ep).exception_class);
uw::_Unwind_DeleteException(ep);
}
}
extern fn try_fn(code: *c_void, env: *c_void) {
unsafe {
let closure: || = mem::transmute(Closure {
code: code as *(),
env: env as *(),
});
|
identifier_body
|
unwind.rs
|
// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Stack unwinding
// Implementation of Rust stack unwinding
//
// For background on exception handling and stack unwinding please see
// "Exception Handling in LLVM" (llvm.org/docs/ExceptionHandling.html) and
// documents linked from it.
// These are also good reads:
// http://theofilos.cs.columbia.edu/blog/2013/09/22/base_abi/
// http://monoinfinito.wordpress.com/series/exception-handling-in-c/
// http://www.airs.com/blog/index.php?s=exception+frames
//
// ~~~ A brief summary ~~~
// Exception handling happens in two phases: a search phase and a cleanup phase.
//
// In both phases the unwinder walks stack frames from top to bottom using
// information from the stack frame unwind sections of the current process's
// modules ("module" here refers to an OS module, i.e. an executable or a
// dynamic library).
//
// For each stack frame, it invokes the associated "personality routine", whose
// address is also stored in the unwind info section.
//
// In the search phase, the job of a personality routine is to examine exception
// object being thrown, and to decide whether it should be caught at that stack
// frame. Once the handler frame has been identified, cleanup phase begins.
//
// In the cleanup phase, personality routines invoke cleanup code associated
// with their stack frames (i.e. destructors). Once stack has been unwound down
// to the handler frame level, unwinding stops and the last personality routine
// transfers control to its catch block.
//
// ~~~ Frame unwind info registration ~~~
// Each module has its own frame unwind info section (usually ".eh_frame"), and
// unwinder needs to know about all of them in order for unwinding to be able to
// cross module boundaries.
//
// On some platforms, like Linux, this is achieved by dynamically enumerating
// currently loaded modules via the dl_iterate_phdr() API and finding all
// .eh_frame sections.
//
// Others, like Windows, require modules to actively register their unwind info
// sections by calling __register_frame_info() API at startup. In the latter
// case it is essential that there is only one copy of the unwinder runtime in
// the process. This is usually achieved by linking to the dynamic version of
// the unwind runtime.
//
// Currently Rust uses unwind runtime provided by libgcc.
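The two-phase scheme and the per-task catch-all described above survive in modern Rust as `std::panic::catch_unwind`. A minimal sketch of the same idea, assuming a current stable toolchain rather than the pre-1.0 runtime this file targets:

```rust
use std::panic;

// A Drop impl stands in for the cleanup code a landing pad would run
// during the cleanup phase.
struct Guard(&'static str);
impl Drop for Guard {
    fn drop(&mut self) {
        println!("cleanup ran for {}", self.0);
    }
}

fn main() {
    // catch_unwind installs the frame that the search phase identifies as
    // the handler; unwinding stops here instead of aborting the process.
    let result = panic::catch_unwind(|| {
        let _g = Guard("inner frame");
        panic!("boom");
    });
    assert!(result.is_err());
    println!("caught; the process keeps running");
}
```

The destructor runs before the `Err` is returned, mirroring the cleanup-then-catch ordering the comments above describe.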
use any::{Any, AnyRefExt};
use fmt;
use intrinsics;
use kinds::Send;
use mem;
use option::{Some, None, Option};
use owned::Box;
use prelude::drop;
use ptr::RawPtr;
use result::{Err, Ok};
use rt::backtrace;
use rt::local::Local;
use rt::task::Task;
use str::Str;
use string::String;
use task::TaskResult;
use uw = rt::libunwind;
pub struct Unwinder {
unwinding: bool,
cause: Option<Box<Any:Send>>
}
impl Unwinder {
pub fn new() -> Unwinder {
Unwinder {
unwinding: false,
cause: None,
}
}
pub fn unwinding(&self) -> bool {
self.unwinding
}
pub fn try(&mut self, f: ||) {
use raw::Closure;
use libc::{c_void};
unsafe {
let closure: Closure = mem::transmute(f);
let ep = rust_try(try_fn, closure.code as *c_void,
closure.env as *c_void);
if !ep.is_null() {
rtdebug!("caught {}", (*ep).exception_class);
uw::_Unwind_DeleteException(ep);
}
}
extern fn try_fn(code: *c_void, env: *c_void) {
unsafe {
let closure: || = mem::transmute(Closure {
code: code as *(),
env: env as *(),
});
closure();
}
}
extern {
// Rust's try-catch
// When f(...) returns normally, the return value is null.
// When f(...) throws, the return value is a pointer to the caught
// exception object.
fn rust_try(f: extern "C" fn(*c_void, *c_void),
code: *c_void,
data: *c_void) -> *uw::_Unwind_Exception;
}
}
pub fn begin_unwind(&mut self, cause: Box<Any:Send>) -> ! {
rtdebug!("begin_unwind()");
self.unwinding = true;
self.cause = Some(cause);
rust_fail();
// An uninlined, unmangled function upon which to slap yer breakpoints
#[inline(never)]
#[no_mangle]
fn rust_fail() -> ! {
unsafe {
let exception = box uw::_Unwind_Exception {
exception_class: rust_exception_class(),
exception_cleanup: exception_cleanup,
private: [0,..uw::unwinder_private_data_size],
};
let error = uw::_Unwind_RaiseException(mem::transmute(exception));
rtabort!("Could not unwind stack, error = {}", error as int)
}
extern "C" fn exception_cleanup(_unwind_code: uw::_Unwind_Reason_Code,
exception: *uw::_Unwind_Exception) {
rtdebug!("exception_cleanup()");
unsafe {
let _: Box<uw::_Unwind_Exception> =
mem::transmute(exception);
}
}
}
}
pub fn result(&mut self) -> TaskResult {
if self.unwinding {
Err(self.cause.take().unwrap())
} else {
Ok(())
}
}
}
// Rust's exception class identifier. This is used by personality routines to
// determine whether the exception was thrown by their own runtime.
fn rust_exception_class() -> uw::_Unwind_Exception_Class {
// M O Z \0 R U S T -- vendor, language
0x4d4f5a_00_52555354
}
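The literal packs the eight ASCII bytes `M O Z \0 R U S T` big-endian into a `u64` ("MOZ\0" vendor, "RUST" language). A quick check of that encoding, written in modern Rust purely for illustration:

```rust
fn main() {
    // Big-endian packing: 'M'=0x4d, 'O'=0x4f, 'Z'=0x5a, '\0', then "RUST".
    let class = u64::from_be_bytes(*b"MOZ\0RUST");
    assert_eq!(class, 0x4d4f5a00_52555354);
    println!("{:#018x}", class);
}
```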
// We could implement our personality routine in pure Rust, however exception
// info decoding is tedious. More importantly, personality routines have to
// handle various platform quirks, which are not fun to maintain. For this
// reason, we attempt to reuse personality routine of the C language:
// __gcc_personality_v0.
//
// Since C does not support exception catching, __gcc_personality_v0 simply
// always returns _URC_CONTINUE_UNWIND in search phase, and always returns
// _URC_INSTALL_CONTEXT (i.e. "invoke cleanup code") in cleanup phase.
//
// This is pretty close to Rust's exception handling approach, except that Rust
// does have a single "catch-all" handler at the bottom of each task's stack.
// So we have two versions:
// - rust_eh_personality, used by all cleanup landing pads, which never catches,
// so the behavior of __gcc_personality_v0 is perfectly adequate there, and
// - rust_eh_personality_catch, used only by rust_try(), which always catches.
// This is achieved by overriding the return value in search phase to always
// say "catch!".
#[cfg(not(target_arch = "arm"), not(test))]
#[doc(hidden)]
#[allow(visible_private_types)]
pub mod eabi {
use uw = rt::libunwind;
use libc::c_int;
extern "C" {
fn __gcc_personality_v0(version: c_int,
actions: uw::_Unwind_Action,
exception_class: uw::_Unwind_Exception_Class,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context)
-> uw::_Unwind_Reason_Code;
}
#[lang="eh_personality"]
extern fn eh_personality(
version: c_int,
actions: uw::_Unwind_Action,
exception_class: uw::_Unwind_Exception_Class,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context
) -> uw::_Unwind_Reason_Code
{
unsafe {
__gcc_personality_v0(version, actions, exception_class, ue_header,
context)
}
}
#[no_mangle] // referenced from rust_try.ll
pub extern "C" fn rust_eh_personality_catch(
version: c_int,
actions: uw::_Unwind_Action,
exception_class: uw::_Unwind_Exception_Class,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context
) -> uw::_Unwind_Reason_Code
{
if (actions as c_int & uw::_UA_SEARCH_PHASE as c_int) != 0
|
else { // cleanup phase
unsafe {
__gcc_personality_v0(version, actions, exception_class, ue_header,
context)
}
}
}
}
// ARM EHABI uses a slightly different personality routine signature,
// but otherwise works the same.
#[cfg(target_arch = "arm", not(test))]
#[allow(visible_private_types)]
pub mod eabi {
use uw = rt::libunwind;
use libc::c_int;
extern "C" {
fn __gcc_personality_v0(state: uw::_Unwind_State,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context)
-> uw::_Unwind_Reason_Code;
}
#[lang="eh_personality"]
extern "C" fn eh_personality(
state: uw::_Unwind_State,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context
) -> uw::_Unwind_Reason_Code
{
unsafe {
__gcc_personality_v0(state, ue_header, context)
}
}
#[no_mangle] // referenced from rust_try.ll
pub extern "C" fn rust_eh_personality_catch(
state: uw::_Unwind_State,
ue_header: *uw::_Unwind_Exception,
context: *uw::_Unwind_Context
) -> uw::_Unwind_Reason_Code
{
if (state as c_int & uw::_US_ACTION_MASK as c_int)
== uw::_US_VIRTUAL_UNWIND_FRAME as c_int { // search phase
uw::_URC_HANDLER_FOUND // catch!
}
else { // cleanup phase
unsafe {
__gcc_personality_v0(state, ue_header, context)
}
}
}
}
// Entry point of failure from the libcore crate
#[cfg(not(test))]
#[lang = "begin_unwind"]
pub extern fn rust_begin_unwind(msg: &fmt::Arguments,
file: &'static str, line: uint) -> ! {
begin_unwind_fmt(msg, file, line)
}
/// The entry point for unwinding with a formatted message.
///
/// This is designed to reduce the amount of code required at the call
/// site as much as possible (so that `fail!()` has as low an impact
/// on (e.g.) the inlining of other functions as possible), by moving
/// the actual formatting into this shared place.
#[inline(never)] #[cold]
pub fn begin_unwind_fmt(msg: &fmt::Arguments, file: &'static str,
line: uint) -> ! {
// We do two allocations here, unfortunately. But (a) they're
// required with the current scheme, and (b) we don't handle
// failure + OOM properly anyway (see comment in begin_unwind
// below).
begin_unwind_inner(box fmt::format(msg), file, line)
}
/// This is the entry point of unwinding for fail!() and assert!().
#[inline(never)] #[cold] // avoid code bloat at the call sites as much as possible
pub fn begin_unwind<M: Any + Send>(msg: M, file: &'static str, line: uint) -> ! {
// Note that this should be the only allocation performed in this code path.
// Currently this means that fail!() on OOM will invoke this code path,
// but then again we're not really ready for failing on OOM anyway. If
// we do start doing this, then we should propagate this allocation to
// be performed in the parent of this task instead of the task that's
// failing.
// see below for why we do the `Any` coercion here.
begin_unwind_inner(box msg, file, line)
}
/// The core of the unwinding.
///
/// This is non-generic to avoid instantiation bloat in other crates
/// (which makes compilation of small crates noticeably slower). (Note:
/// we need the `Any` object anyway, we're not just creating it to
/// avoid being generic.)
///
/// Doing this split took the LLVM IR line counts of `fn main() { fail!()
/// }` from ~1900/3700 (-O/no opts) to 180/590.
#[inline(never)] #[cold] // this is the slow path, please never inline this
fn begin_unwind_inner(msg: Box<Any:Send>,
file: &'static str,
line: uint) -> ! {
let mut task;
{
let msg_s = match msg.as_ref::<&'static str>() {
Some(s) => *s,
None => match msg.as_ref::<String>() {
Some(s) => s.as_slice(),
None => "Box<Any>",
}
};
// It is assumed that all reasonable rust code will have a local task at
// all times. This means that this `try_take` will succeed almost all of
// the time. There are border cases, however, when the runtime has
// *almost* set up the local task, but hasn't quite gotten there yet. In
// order to get some better diagnostics, we print on failure and
// immediately abort the whole process if there is no local task
// available.
let opt_task: Option<Box<Task>> = Local::try_take();
task = match opt_task {
Some(t) => t,
None => {
rterrln!("failed at '{}', {}:{}", msg_s, file, line);
if backtrace::log_enabled() {
let mut err = ::rt::util::Stderr;
let _err = backtrace::write(&mut err);
} else {
rterrln!("run with `RUST_BACKTRACE=1` to see a backtrace");
}
unsafe { intrinsics::abort() }
}
};
// See comments in io::stdio::with_task_stdout as to why we have to be
// careful when using an arbitrary I/O handle from the task. We
// essentially need to dance to make sure a task is in TLS when
// running user code.
let name = task.name.take();
{
let n = name.as_ref().map(|n| n.as_slice()).unwrap_or("<unnamed>");
match task.stderr.take() {
Some(mut stderr) => {
Local::put(task);
// FIXME: what to do when the task printing fails?
let _err = write!(stderr,
"task '{}' failed at '{}', {}:{}\n",
n, msg_s, file, line);
if backtrace::log_enabled() {
let _err = backtrace::write(stderr);
}
task = Local::take();
match mem::replace(&mut task.stderr, Some(stderr)) {
Some(prev) => {
Local::put(task);
drop(prev);
task = Local::take();
}
None => {}
}
}
None => {
rterrln!("task '{}' failed at '{}', {}:{}", n, msg_s,
file, line);
if backtrace::log_enabled() {
let mut err = ::rt::util::Stderr;
let _err = backtrace::write(&mut err);
}
}
}
}
task.name = name;
if task.unwinder.unwinding {
// If a task fails while it's already unwinding then we
// have limited options. Currently our preference is to
// just abort. In the future we may consider resuming
// unwinding or otherwise exiting the task cleanly.
rterrln!("task failed during unwinding (double-failure - total drag!)");
rterrln!("rust must abort now. so sorry.");
// Don't print the backtrace twice (it would have already been
// printed if logging was enabled).
if !backtrace::log_enabled() {
let mut err = ::rt::util::Stderr;
let _err = backtrace::write(&mut err);
}
unsafe { intrinsics::abort() }
}
}
// The unwinder won't actually use the task at all, so we put the task back
// into TLS right before we invoke the unwinder, but this means we need an
// unsafe reference back to the unwinder once it's in TLS.
Local::put(task);
unsafe {
let task: *mut Task = Local::unsafe_borrow();
(*task).unwinder.begin_unwind(msg);
}
}
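The split documented above — a tiny generic shim that boxes its argument and delegates to a single non-generic body — is a general trick for avoiding monomorphization bloat. A sketch of the pattern in modern Rust; the names `report`/`report_inner` are illustrative, not this runtime's:

```rust
use std::any::Any;

// Only this shim is instantiated per message type; the cold body below
// exists exactly once, mirroring begin_unwind / begin_unwind_inner.
fn report<M: Any + Send>(msg: M) {
    report_inner(Box::new(msg))
}

#[inline(never)]
#[cold]
fn report_inner(msg: Box<dyn Any + Send>) {
    // Recover a printable message the same way msg_s is computed above.
    let text = if let Some(s) = msg.downcast_ref::<&str>() {
        *s
    } else if let Some(s) = msg.downcast_ref::<String>() {
        s.as_str()
    } else {
        "Box<dyn Any>"
    };
    println!("failed at '{}'", text);
}

fn main() {
    report("static str");
    report(String::from("owned"));
    report(7_u32);
}
```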
|
{ // search phase
uw::_URC_HANDLER_FOUND // catch!
}
|
conditional_block
|
static_priv_by_default.rs
|
// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#![crate_type = "lib"]
static private: int = 0;
pub static public: int = 0;
pub struct A(());
impl A {
fn
|
() {}
}
mod foo {
pub static a: int = 0;
pub fn b() {}
pub struct c;
pub enum d {}
pub struct A(());
impl A {
fn foo() {}
}
// these are public so the parent can reexport them.
pub static reexported_a: int = 0;
pub fn reexported_b() {}
pub struct reexported_c;
pub enum reexported_d {}
}
pub mod bar {
pub use foo::reexported_a as e;
pub use foo::reexported_b as f;
pub use foo::reexported_c as g;
pub use foo::reexported_d as h;
}
pub static a: int = 0;
pub fn b() {}
pub struct c;
pub enum d {}
static i: int = 0;
fn j() {}
struct k;
enum l {}
|
foo
|
identifier_name
|
static_priv_by_default.rs
|
// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#![crate_type = "lib"]
static private: int = 0;
pub static public: int = 0;
pub struct A(());
impl A {
fn foo() {}
}
mod foo {
pub static a: int = 0;
pub fn b() {}
pub struct c;
pub enum d {}
pub struct A(());
impl A {
fn foo() {}
}
// these are public so the parent can reexport them.
pub static reexported_a: int = 0;
pub fn reexported_b() {}
pub struct reexported_c;
pub enum reexported_d {}
}
pub mod bar {
pub use foo::reexported_a as e;
pub use foo::reexported_b as f;
pub use foo::reexported_c as g;
|
pub fn b() {}
pub struct c;
pub enum d {}
static i: int = 0;
fn j() {}
struct k;
enum l {}
|
pub use foo::reexported_d as h;
}
pub static a: int = 0;
|
random_line_split
|
issue-10465.rs
|
// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
pub mod a {
pub trait A {
fn foo(&self);
}
}
pub mod b {
use a::A;
pub struct
|
;
impl A for B { fn foo(&self) {} }
pub mod c {
use b::B;
fn foo(b: &B) {
b.foo(); //~ ERROR: does not implement any method in scope named
}
}
}
fn main() {}
|
B
|
identifier_name
|
issue-10465.rs
|
// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
pub mod a {
pub trait A {
fn foo(&self);
}
}
pub mod b {
use a::A;
pub struct B;
|
pub mod c {
use b::B;
fn foo(b: &B) {
b.foo(); //~ ERROR: does not implement any method in scope named
}
}
}
fn main() {}
|
impl A for B { fn foo(&self) {} }
|
random_line_split
|
issue-10465.rs
|
// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
pub mod a {
pub trait A {
fn foo(&self);
}
}
pub mod b {
use a::A;
pub struct B;
impl A for B { fn foo(&self)
|
}
pub mod c {
use b::B;
fn foo(b: &B) {
b.foo(); //~ ERROR: does not implement any method in scope named
}
}
}
fn main() {}
|
{}
|
identifier_body
|
ir.rs
|
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
use graphql_ir::*;
use intern::string_key::{StringKey, StringKeyMap, StringKeySet};
use relay_transforms::{DependencyMap, ResolverFieldFinder};
use schema::SDLSchema;
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::fmt;
struct Node {
ir: Option<ExecutableDefinition>,
parents: Vec<StringKey>,
children: Vec<StringKey>,
}
impl fmt::Debug for Node {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_fmt(format_args!(
"parents: {:?} / children: {:?}",
self.parents, self.children
))
}
}
/// Find the set of executable definitions that are potentially impacted by the
/// set of changed documents declared in `changed_names`. This is achieved by
/// building a dependency graph where edges are either explicit fragment spreads,
/// or "implicit dependencies" such as those created by Relay Resolvers.
///
/// New implicit dependencies are detected by walking the changed documents,
/// whereas preexisting implicit dependencies must be passed in as
/// `implicit_dependencies`.
pub fn get_reachable_ir(
definitions: Vec<ExecutableDefinition>,
base_definition_names: StringKeySet,
changed_names: StringKeySet,
implicit_dependencies: &DependencyMap,
schema: &SDLSchema,
) -> Vec<ExecutableDefinition> {
if changed_names.is_empty() {
return vec![];
}
// For each executable definition, define a `Node` indicating its parents and children
// Note: There are situations where a name in `changed_names` may not appear
// in `definitions`, and thus would be missing from `dependency_graph`. This can arise
// if you change a file which contains a fragment which is present in the
// base project, but is not reachable from any of the project's own
// queries/mutations.
let mut dependency_graph = build_dependency_graph(definitions);
// Note: Keys found in `resolver_dependencies` should theoretically replace
// those found in `implicit_dependencies`, however that would require either
// getting a mutable copy of `implicit_dependencies` or copying it. For
// simplicity we just add both sets. This means we may mark a few extra
// definitions as reachable (false positives), but it's an edge case and
// the cost is minimal.
let resolver_dependencies =
find_resolver_dependencies(&changed_names, &dependency_graph, schema);
add_dependencies_to_graph(&mut dependency_graph, implicit_dependencies);
add_dependencies_to_graph(&mut dependency_graph, &resolver_dependencies);
let mut visited = Default::default();
let mut filtered_definitions = Default::default();
for key in changed_names.into_iter() {
if dependency_graph.contains_key(&key) {
add_related_nodes(
&mut visited,
&mut filtered_definitions,
&dependency_graph,
&base_definition_names,
key,
);
}
}
filtered_definitions
.drain()
.map(|(_, definition)| definition)
.collect()
}
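The reachability rule implemented above — climb from each changed definition to its ancestors, then collect every descendant of each root — can be sketched over a plain adjacency map. The `Graph` alias and names here are illustrative, not the crate's types, and the base-definition check is omitted for brevity:

```rust
use std::collections::{HashMap, HashSet};

// Toy doubly-linked dependency graph: name -> (parents, children).
type Graph = HashMap<&'static str, (Vec<&'static str>, Vec<&'static str>)>;

fn add_descendants(out: &mut HashSet<&'static str>, g: &Graph, key: &'static str) {
    if !out.insert(key) {
        return; // already collected
    }
    if let Some((_, children)) = g.get(key) {
        for child in children {
            add_descendants(out, g, *child);
        }
    }
}

fn reachable(g: &Graph, changed: &'static str) -> HashSet<&'static str> {
    // Climb to the roots first, then collect each root's whole subtree.
    let (mut stack, mut seen, mut result) = (vec![changed], HashSet::new(), HashSet::new());
    while let Some(key) = stack.pop() {
        if !seen.insert(key) {
            continue;
        }
        match g.get(key) {
            Some((parents, _)) if !parents.is_empty() => stack.extend(parents.iter().copied()),
            _ => add_descendants(&mut result, g, key),
        }
    }
    result
}

fn main() {
    let mut g: Graph = HashMap::new();
    g.insert("Query", (vec![], vec!["FragA"]));
    g.insert("FragA", (vec!["Query"], vec!["FragB"]));
    g.insert("FragB", (vec!["FragA"], vec![]));
    // Changing FragB pulls in its ancestor Query and all of Query's descendants.
    let r = reachable(&g, "FragB");
    assert_eq!(r.len(), 3);
}
```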
fn find_resolver_dependencies(
reachable_names: &StringKeySet,
dependency_graph: &StringKeyMap<Node>,
schema: &SDLSchema,
) -> DependencyMap {
let mut dependencies = Default::default();
let mut finder = ResolverFieldFinder::new(&mut dependencies, schema);
for name in reachable_names {
if let Some(node) = dependency_graph.get(name) {
let def = match node.ir.as_ref() {
Some(definition) => definition,
None => panic!("Could not find definition for {}.", name),
};
match def {
ExecutableDefinition::Fragment(fragment) => finder.visit_fragment(fragment),
ExecutableDefinition::Operation(operation) => finder.visit_operation(operation),
}
}
}
dependencies
}
fn add_dependencies_to_graph(
dependency_graph: &mut StringKeyMap<Node>,
dependencies: &DependencyMap,
) {
for (parent, children) in dependencies.iter() {
if let Some(node) = dependency_graph.get_mut(parent) {
node.children.extend(children.iter());
};
for child in children.iter() {
if let Some(node) = dependency_graph.get_mut(child) {
node.parents.push(*parent);
};
}
}
}
// Build a dependency graph in which nodes are "doubly linked"
fn build_dependency_graph(definitions: Vec<ExecutableDefinition>) -> StringKeyMap<Node> {
let mut dependency_graph: StringKeyMap<Node> =
HashMap::with_capacity_and_hasher(definitions.len(), Default::default());
for definition in definitions.into_iter() {
let name = match &definition {
ExecutableDefinition::Operation(operation) => operation.name.item,
ExecutableDefinition::Fragment(fragment) => fragment.name.item,
};
// Visit the selections of the IR to build its `children`
let mut children = vec![];
let selections = match &definition {
ExecutableDefinition::Operation(operation) => &operation.selections,
ExecutableDefinition::Fragment(fragment) => &fragment.selections,
};
visit_selections(&mut dependency_graph, selections, name, &mut children);
// Insert or update the representation of the IR in the dependency tree
match dependency_graph.entry(name) {
// Add a new node for current IR to the dependency tree
Entry::Vacant(entry) => {
entry.insert(Node {
ir: Some(definition),
parents: vec![],
children,
});
}
// The node is already created when visiting selections of the IR, but its `ir` and `children` haven't been set
Entry::Occupied(mut entry) => {
let node = entry.get_mut();
if let Some(def) = &node.ir {
panic!(
"Duplicate definition: had {:?} and found another {:?}.",
def, definition
);
}
node.ir = Some(definition);
node.children = children;
}
}
}
dependency_graph
}
// Visit the selections of current IR, set the `children` for the node representing the IR,
// and the `parents` for nodes representing the children IR
fn visit_selections(
dependency_graph: &mut StringKeyMap<Node>,
selections: &[Selection],
parent_name: StringKey,
children: &mut Vec<StringKey>,
) {
for selection in selections {
match selection {
Selection::FragmentSpread(node) => {
let key = node.fragment.item;
match dependency_graph.get_mut(&key) {
None => {
dependency_graph.insert(
key,
Node {
ir: None,
parents: vec![parent_name],
children: vec![],
},
);
}
Some(node) => {
node.parents.push(parent_name);
}
}
children.push(key);
}
Selection::LinkedField(node) => {
visit_selections(dependency_graph, &node.selections, parent_name, children);
}
Selection::InlineFragment(node) => {
visit_selections(dependency_graph, &node.selections, parent_name, children);
}
Selection::ScalarField(_) => {}
Selection::Condition(node) => {
visit_selections(dependency_graph, &node.selections, parent_name, children);
}
}
}
}
// From `key` of changed definition, recursively traverse up the dependency tree, and add all related nodes (ancestors
// of changed definitions which are not from base definitions, and all of their descendants) into the `result`
fn add_related_nodes(
visited: &mut StringKeySet,
result: &mut StringKeyMap<ExecutableDefinition>,
dependency_graph: &StringKeyMap<Node>,
base_definition_names: &StringKeySet,
key: StringKey,
)
|
dependency_graph,
base_definition_names,
*parent,
);
}
}
}
// Recursively add all descendants of current node into the `result`
fn add_descendants(
result: &mut StringKeyMap<ExecutableDefinition>,
dependency_graph: &StringKeyMap<Node>,
key: StringKey,
) {
if result.contains_key(&key) {
return;
}
match dependency_graph.get(&key) {
Some(Node {
ir: Some(def),
children,
..
}) => {
result.insert(key, def.clone());
for child in children {
add_descendants(result, dependency_graph, *child);
}
}
_ => {
panic!("Fragment {:?} not found in IR.", key);
}
}
}
|
{
if !visited.insert(key) {
return;
}
let parents = match dependency_graph.get(&key) {
None => {
panic!("Fragment {:?} not found in IR.", key);
}
Some(node) => &node.parents,
};
if parents.is_empty() {
if !base_definition_names.contains(&key) {
add_descendants(result, dependency_graph, key);
}
} else {
for parent in parents {
add_related_nodes(
visited,
result,
|
identifier_body
|
ir.rs
|
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
use graphql_ir::*;
use intern::string_key::{StringKey, StringKeyMap, StringKeySet};
use relay_transforms::{DependencyMap, ResolverFieldFinder};
use schema::SDLSchema;
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::fmt;
struct Node {
ir: Option<ExecutableDefinition>,
parents: Vec<StringKey>,
children: Vec<StringKey>,
}
impl fmt::Debug for Node {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_fmt(format_args!(
"parents: {:?} / children: {:?}",
self.parents, self.children
))
}
}
/// Find the set of executable definitions that are potentially impacted by the
/// set of changed documents declared in `changed_names`. This is achieved by
/// building a dependency graph where edges are either explicit fragment spreads,
/// or "implicit dependencies" such as those created by Relay Resolvers.
///
/// New implicit dependencies are detected by walking the changed documents,
/// whereas preexisting implicit dependencies must be passed in as
/// `implicit_dependencies`.
pub fn
|
(
definitions: Vec<ExecutableDefinition>,
base_definition_names: StringKeySet,
changed_names: StringKeySet,
implicit_dependencies: &DependencyMap,
schema: &SDLSchema,
) -> Vec<ExecutableDefinition> {
if changed_names.is_empty() {
return vec![];
}
// For each executable definition, define a `Node` indicating its parents and children
// Note: There are situations where a name in `changed_names` may not appear
// in `definitions`, and thus would be missing from `dependency_graph`. This can arise
// if you change a file which contains a fragment which is present in the
// base project, but is not reachable from any of the project's own
// queries/mutations.
let mut dependency_graph = build_dependency_graph(definitions);
// Note: Keys found in `resolver_dependencies` should theoretically replace
// those found in `implicit_dependencies`, however that would require either
// getting a mutable copy of `implicit_dependencies` or copying it. For
// simplicity we just add both sets. This means we may mark a few extra
// definitions as reachable (false positives), but it's an edge case and
// the cost is minimal.
let resolver_dependencies =
find_resolver_dependencies(&changed_names, &dependency_graph, schema);
add_dependencies_to_graph(&mut dependency_graph, implicit_dependencies);
add_dependencies_to_graph(&mut dependency_graph, &resolver_dependencies);
let mut visited = Default::default();
let mut filtered_definitions = Default::default();
for key in changed_names.into_iter() {
if dependency_graph.contains_key(&key) {
add_related_nodes(
&mut visited,
&mut filtered_definitions,
&dependency_graph,
&base_definition_names,
key,
);
}
}
filtered_definitions
.drain()
.map(|(_, definition)| definition)
.collect()
}
fn find_resolver_dependencies(
reachable_names: &StringKeySet,
dependency_graph: &StringKeyMap<Node>,
schema: &SDLSchema,
) -> DependencyMap {
let mut dependencies = Default::default();
let mut finder = ResolverFieldFinder::new(&mut dependencies, schema);
for name in reachable_names {
if let Some(node) = dependency_graph.get(name) {
let def = match node.ir.as_ref() {
Some(definition) => definition,
None => panic!("Could not find definition for {}.", name),
};
match def {
ExecutableDefinition::Fragment(fragment) => finder.visit_fragment(fragment),
ExecutableDefinition::Operation(operation) => finder.visit_operation(operation),
}
}
}
dependencies
}
fn add_dependencies_to_graph(
dependency_graph: &mut StringKeyMap<Node>,
dependencies: &DependencyMap,
) {
for (parent, children) in dependencies.iter() {
if let Some(node) = dependency_graph.get_mut(parent) {
node.children.extend(children.iter());
};
for child in children.iter() {
if let Some(node) = dependency_graph.get_mut(child) {
node.parents.push(*parent);
};
}
}
}
// Build a dependency graph in which nodes are "doubly linked"
fn build_dependency_graph(definitions: Vec<ExecutableDefinition>) -> StringKeyMap<Node> {
let mut dependency_graph: StringKeyMap<Node> =
HashMap::with_capacity_and_hasher(definitions.len(), Default::default());
for definition in definitions.into_iter() {
let name = match &definition {
ExecutableDefinition::Operation(operation) => operation.name.item,
ExecutableDefinition::Fragment(fragment) => fragment.name.item,
};
// Visit the selections of the IR to build its `children`
let mut children = vec![];
let selections = match &definition {
ExecutableDefinition::Operation(operation) => &operation.selections,
ExecutableDefinition::Fragment(fragment) => &fragment.selections,
};
visit_selections(&mut dependency_graph, selections, name, &mut children);
// Insert or update the representation of the IR in the dependency tree
match dependency_graph.entry(name) {
// Add a new node for current IR to the dependency tree
Entry::Vacant(entry) => {
entry.insert(Node {
ir: Some(definition),
parents: vec![],
children,
});
}
// The node was already created when visiting selections of the IR, but its `ir` and `children` haven't been set
Entry::Occupied(mut entry) => {
let node = entry.get_mut();
if let Some(def) = &node.ir {
panic!(
"Duplicate definition: had {:?} and found another {:?}.",
def, definition
);
}
node.ir = Some(definition);
node.children = children;
}
}
}
dependency_graph
}
// Visit the selections of current IR, set the `children` for the node representing the IR,
// and the `parents` for nodes representing the children IR
fn visit_selections(
dependency_graph: &mut StringKeyMap<Node>,
selections: &[Selection],
parent_name: StringKey,
children: &mut Vec<StringKey>,
) {
for selection in selections {
match selection {
Selection::FragmentSpread(node) => {
let key = node.fragment.item;
match dependency_graph.get_mut(&key) {
None => {
dependency_graph.insert(
key,
Node {
ir: None,
parents: vec![parent_name],
children: vec![],
},
);
}
Some(node) => {
node.parents.push(parent_name);
}
}
children.push(key);
}
Selection::LinkedField(node) => {
visit_selections(dependency_graph, &node.selections, parent_name, children);
}
Selection::InlineFragment(node) => {
visit_selections(dependency_graph, &node.selections, parent_name, children);
}
Selection::ScalarField(_) => {}
Selection::Condition(node) => {
visit_selections(dependency_graph, &node.selections, parent_name, children);
}
}
}
}
// From the `key` of a changed definition, recursively traverse up the dependency tree, and add all related nodes (ancestors
// of changed definitions which are not from base definitions, and all of their descendants) into the `result`
fn add_related_nodes(
visited: &mut StringKeySet,
result: &mut StringKeyMap<ExecutableDefinition>,
dependency_graph: &StringKeyMap<Node>,
base_definition_names: &StringKeySet,
key: StringKey,
) {
if !visited.insert(key) {
return;
}
let parents = match dependency_graph.get(&key) {
None => {
panic!("Fragment {:?} not found in IR.", key);
}
Some(node) => &node.parents,
};
if parents.is_empty() {
if !base_definition_names.contains(&key) {
add_descendants(result, dependency_graph, key);
}
} else {
for parent in parents {
add_related_nodes(
visited,
result,
dependency_graph,
base_definition_names,
*parent,
);
}
}
}
// Recursively add all descendants of current node into the `result`
fn add_descendants(
result: &mut StringKeyMap<ExecutableDefinition>,
dependency_graph: &StringKeyMap<Node>,
key: StringKey,
) {
if result.contains_key(&key) {
return;
}
match dependency_graph.get(&key) {
Some(Node {
ir: Some(def),
children,
..
}) => {
result.insert(key, def.clone());
for child in children {
add_descendants(result, dependency_graph, *child);
}
}
_ => {
panic!("Fragment {:?} not found in IR.", key);
}
}
}
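The up-then-down traversal in `add_related_nodes` and `add_descendants` can be sketched with plain `std` collections. This is a minimal illustration only: `ToyNode`, `reachable_from_changed`, and the string keys are hypothetical stand-ins for `Node`, the interned `StringKey`, and the functions above, not part of the real crate.

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical stand-in for `Node`: `parents` spread this definition,
// `children` are the definitions it spreads.
#[derive(Default)]
pub struct ToyNode {
    pub parents: Vec<String>,
    pub children: Vec<String>,
}

// Walk up from `changed` to every root, then collect all descendants of
// roots that are not base definitions -- the same shape as
// `add_related_nodes` followed by `add_descendants`.
pub fn reachable_from_changed(
    graph: &HashMap<String, ToyNode>,
    base: &HashSet<String>,
    changed: &str,
) -> HashSet<String> {
    fn up(
        graph: &HashMap<String, ToyNode>,
        base: &HashSet<String>,
        key: &str,
        visited: &mut HashSet<String>,
        out: &mut HashSet<String>,
    ) {
        if !visited.insert(key.to_string()) {
            return;
        }
        let node = &graph[key];
        if node.parents.is_empty() {
            // Reached a root: everything below it is affected, unless it
            // belongs to the base project.
            if !base.contains(key) {
                down(graph, key, out);
            }
        } else {
            for parent in &node.parents {
                up(graph, base, parent, visited, out);
            }
        }
    }
    fn down(graph: &HashMap<String, ToyNode>, key: &str, out: &mut HashSet<String>) {
        if !out.insert(key.to_string()) {
            return;
        }
        for child in &graph[key].children {
            down(graph, child, out);
        }
    }
    let mut visited = HashSet::new();
    let mut out = HashSet::new();
    up(graph, base, changed, &mut visited, &mut out);
    out
}
```

Changing a leaf fragment thus pulls in its whole operation, plus every sibling fragment that operation spreads.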
|
get_reachable_ir
|
identifier_name
|
ir.rs
|
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
use graphql_ir::*;
use intern::string_key::{StringKey, StringKeyMap, StringKeySet};
use relay_transforms::{DependencyMap, ResolverFieldFinder};
use schema::SDLSchema;
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::fmt;
struct Node {
ir: Option<ExecutableDefinition>,
parents: Vec<StringKey>,
children: Vec<StringKey>,
}
impl fmt::Debug for Node {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_fmt(format_args!(
"parents: {:?} / children: {:?}",
self.parents, self.children
))
}
}
/// Find the set of executable definitions that are potentially impacted by the
/// set of changed documents declared in `changed_names`. This is achieved by
/// building a dependency graph where edges are either explicit fragment spreads,
/// or "implicit dependencies" such as those created by Relay Resolvers.
///
/// New implicit dependencies are detected by walking the changed documents,
/// whereas preexisting implicit dependencies must be passed in as
/// `implicit_dependencies`.
pub fn get_reachable_ir(
definitions: Vec<ExecutableDefinition>,
base_definition_names: StringKeySet,
changed_names: StringKeySet,
implicit_dependencies: &DependencyMap,
schema: &SDLSchema,
) -> Vec<ExecutableDefinition> {
if changed_names.is_empty() {
return vec![];
}
// For each executable definition, define a `Node` indicating its parents and children
// Note: There are situations where a name in `changed_names` may not appear
// in `definitions`, and thus would be missing from `dependency_graph`. This can arise
// if you change a file which contains a fragment which is present in the
// base project, but is not reachable from any of the project's own
// queries/mutations.
let mut dependency_graph = build_dependency_graph(definitions);
// Note: Keys found in `resolver_dependencies` should theoretically replace
// those found in `implicit_dependencies`, however that would require either
// getting a mutable copy of `implicit_dependencies` or copying it. For
// simplicity we just add both sets. This means we may mark a few extra
// definitions as reachable (false positives), but it's an edge case and
// the cost is minimal.
let resolver_dependencies =
find_resolver_dependencies(&changed_names, &dependency_graph, schema);
add_dependencies_to_graph(&mut dependency_graph, implicit_dependencies);
add_dependencies_to_graph(&mut dependency_graph, &resolver_dependencies);
let mut visited = Default::default();
let mut filtered_definitions = Default::default();
for key in changed_names.into_iter() {
if dependency_graph.contains_key(&key) {
add_related_nodes(
&mut visited,
&mut filtered_definitions,
&dependency_graph,
&base_definition_names,
key,
);
}
}
filtered_definitions
.drain()
.map(|(_, definition)| definition)
.collect()
}
fn find_resolver_dependencies(
reachable_names: &StringKeySet,
dependency_graph: &StringKeyMap<Node>,
schema: &SDLSchema,
) -> DependencyMap {
let mut dependencies = Default::default();
let mut finder = ResolverFieldFinder::new(&mut dependencies, schema);
for name in reachable_names {
if let Some(node) = dependency_graph.get(name) {
let def = match node.ir.as_ref() {
Some(definition) => definition,
None => panic!("Could not find definition for {}.", name),
};
match def {
ExecutableDefinition::Fragment(fragment) => finder.visit_fragment(fragment),
ExecutableDefinition::Operation(operation) => finder.visit_operation(operation),
}
}
}
dependencies
}
fn add_dependencies_to_graph(
dependency_graph: &mut StringKeyMap<Node>,
dependencies: &DependencyMap,
) {
for (parent, children) in dependencies.iter() {
if let Some(node) = dependency_graph.get_mut(parent) {
node.children.extend(children.iter());
};
for child in children.iter() {
if let Some(node) = dependency_graph.get_mut(child) {
node.parents.push(*parent);
};
}
}
}
// Build a dependency graph in which nodes are "doubly linked"
fn build_dependency_graph(definitions: Vec<ExecutableDefinition>) -> StringKeyMap<Node> {
let mut dependency_graph: StringKeyMap<Node> =
HashMap::with_capacity_and_hasher(definitions.len(), Default::default());
for definition in definitions.into_iter() {
let name = match &definition {
ExecutableDefinition::Operation(operation) => operation.name.item,
ExecutableDefinition::Fragment(fragment) => fragment.name.item,
};
// Visit the selections of the IR to build its `children`
let mut children = vec![];
let selections = match &definition {
ExecutableDefinition::Operation(operation) => &operation.selections,
ExecutableDefinition::Fragment(fragment) => &fragment.selections,
};
visit_selections(&mut dependency_graph, selections, name, &mut children);
// Insert or update the representation of the IR in the dependency tree
match dependency_graph.entry(name) {
// Add a new node for current IR to the dependency tree
Entry::Vacant(entry) => {
entry.insert(Node {
ir: Some(definition),
parents: vec![],
children,
});
}
// The node was already created when visiting selections of the IR, but its `ir` and `children` haven't been set
Entry::Occupied(mut entry) => {
let node = entry.get_mut();
if let Some(def) = &node.ir {
panic!(
"Duplicate definition: had {:?} and found another {:?}.",
def, definition
);
}
node.ir = Some(definition);
node.children = children;
}
}
}
dependency_graph
}
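The vacant/occupied handling in `build_dependency_graph` is the standard `Entry` insert-or-update pattern: a placeholder node may already exist because a fragment spread was visited before its definition. A minimal, self-contained sketch of that pattern (names like `upsert_definition` and the `(Option<i32>, Vec<String>)` node shape are illustrative, not from this crate):

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// `(Option<definition>, children)`: a placeholder holds `None` until the
// definition itself is visited.
pub fn upsert_definition(
    graph: &mut HashMap<String, (Option<i32>, Vec<String>)>,
    name: &str,
    def: i32,
    children: Vec<String>,
) {
    match graph.entry(name.to_string()) {
        // First time we see this name: create a complete node.
        Entry::Vacant(entry) => {
            entry.insert((Some(def), children));
        }
        // A child visit created a placeholder earlier: fill it in,
        // rejecting duplicate definitions.
        Entry::Occupied(mut entry) => {
            let node = entry.get_mut();
            assert!(node.0.is_none(), "duplicate definition for {}", name);
            node.0 = Some(def);
            node.1 = children;
        }
    }
}
```

The `entry` API does a single hash lookup for both branches, which is why the real code prefers it over a `contains_key` check followed by `insert`.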
// Visit the selections of current IR, set the `children` for the node representing the IR,
// and the `parents` for nodes representing the children IR
fn visit_selections(
dependency_graph: &mut StringKeyMap<Node>,
selections: &[Selection],
parent_name: StringKey,
children: &mut Vec<StringKey>,
) {
for selection in selections {
match selection {
Selection::FragmentSpread(node) => {
let key = node.fragment.item;
match dependency_graph.get_mut(&key) {
None => {
dependency_graph.insert(
key,
Node {
ir: None,
parents: vec![parent_name],
children: vec![],
},
);
}
Some(node) => {
node.parents.push(parent_name);
}
}
children.push(key);
}
Selection::LinkedField(node) => {
visit_selections(dependency_graph, &node.selections, parent_name, children);
}
Selection::InlineFragment(node) => {
visit_selections(dependency_graph, &node.selections, parent_name, children);
}
Selection::ScalarField(_) => {}
Selection::Condition(node) => {
visit_selections(dependency_graph, &node.selections, parent_name, children);
}
}
}
}
// From the `key` of a changed definition, recursively traverse up the dependency tree, and add all related nodes (ancestors
// of changed definitions which are not from base definitions, and all of their descendants) into the `result`
fn add_related_nodes(
visited: &mut StringKeySet,
result: &mut StringKeyMap<ExecutableDefinition>,
dependency_graph: &StringKeyMap<Node>,
base_definition_names: &StringKeySet,
key: StringKey,
) {
if !visited.insert(key)
|
let parents = match dependency_graph.get(&key) {
None => {
panic!("Fragment {:?} not found in IR.", key);
}
Some(node) => &node.parents,
};
if parents.is_empty() {
if !base_definition_names.contains(&key) {
add_descendants(result, dependency_graph, key);
}
} else {
for parent in parents {
add_related_nodes(
visited,
result,
dependency_graph,
base_definition_names,
*parent,
);
}
}
}
// Recursively add all descendants of current node into the `result`
fn add_descendants(
result: &mut StringKeyMap<ExecutableDefinition>,
dependency_graph: &StringKeyMap<Node>,
key: StringKey,
) {
if result.contains_key(&key) {
return;
}
match dependency_graph.get(&key) {
Some(Node {
ir: Some(def),
children,
..
}) => {
result.insert(key, def.clone());
for child in children {
add_descendants(result, dependency_graph, *child);
}
}
_ => {
panic!("Fragment {:?} not found in IR.", key);
}
}
}
|
{
return;
}
|
conditional_block
|
ir.rs
|
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
use graphql_ir::*;
|
use relay_transforms::{DependencyMap, ResolverFieldFinder};
use schema::SDLSchema;
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::fmt;
struct Node {
ir: Option<ExecutableDefinition>,
parents: Vec<StringKey>,
children: Vec<StringKey>,
}
impl fmt::Debug for Node {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_fmt(format_args!(
"parents: {:?} / children: {:?}",
self.parents, self.children
))
}
}
/// Find the set of executable definitions that are potentially impacted by the
/// set of changed documents declared in `changed_names`. This is achieved by
/// building a dependency graph where edges are either explicit fragment spreads,
/// or "implicit dependencies" such as those created by Relay Resolvers.
///
/// New implicit dependencies are detected by walking the changed documents,
/// whereas preexisting implicit dependencies must be passed in as
/// `implicit_dependencies`.
pub fn get_reachable_ir(
definitions: Vec<ExecutableDefinition>,
base_definition_names: StringKeySet,
changed_names: StringKeySet,
implicit_dependencies: &DependencyMap,
schema: &SDLSchema,
) -> Vec<ExecutableDefinition> {
if changed_names.is_empty() {
return vec![];
}
// For each executable definition, define a `Node` indicating its parents and children
// Note: There are situations where a name in `changed_names` may not appear
// in `definitions`, and thus would be missing from `dependency_graph`. This can arise
// if you change a file which contains a fragment which is present in the
// base project, but is not reachable from any of the project's own
// queries/mutations.
let mut dependency_graph = build_dependency_graph(definitions);
// Note: Keys found in `resolver_dependencies` should theoretically replace
// those found in `implicit_dependencies`, however that would require either
// getting a mutable copy of `implicit_dependencies` or copying it. For
// simplicity we just add both sets. This means we may mark a few extra
// definitions as reachable (false positives), but it's an edge case and
// the cost is minimal.
let resolver_dependencies =
find_resolver_dependencies(&changed_names, &dependency_graph, schema);
add_dependencies_to_graph(&mut dependency_graph, implicit_dependencies);
add_dependencies_to_graph(&mut dependency_graph, &resolver_dependencies);
let mut visited = Default::default();
let mut filtered_definitions = Default::default();
for key in changed_names.into_iter() {
if dependency_graph.contains_key(&key) {
add_related_nodes(
&mut visited,
&mut filtered_definitions,
&dependency_graph,
&base_definition_names,
key,
);
}
}
filtered_definitions
.drain()
.map(|(_, definition)| definition)
.collect()
}
fn find_resolver_dependencies(
reachable_names: &StringKeySet,
dependency_graph: &StringKeyMap<Node>,
schema: &SDLSchema,
) -> DependencyMap {
let mut dependencies = Default::default();
let mut finder = ResolverFieldFinder::new(&mut dependencies, schema);
for name in reachable_names {
if let Some(node) = dependency_graph.get(name) {
let def = match node.ir.as_ref() {
Some(definition) => definition,
None => panic!("Could not find definition for {}.", name),
};
match def {
ExecutableDefinition::Fragment(fragment) => finder.visit_fragment(fragment),
ExecutableDefinition::Operation(operation) => finder.visit_operation(operation),
}
}
}
dependencies
}
fn add_dependencies_to_graph(
dependency_graph: &mut StringKeyMap<Node>,
dependencies: &DependencyMap,
) {
for (parent, children) in dependencies.iter() {
if let Some(node) = dependency_graph.get_mut(parent) {
node.children.extend(children.iter());
};
for child in children.iter() {
if let Some(node) = dependency_graph.get_mut(child) {
node.parents.push(*parent);
};
}
}
}
// Build a dependency graph in which nodes are "doubly linked"
fn build_dependency_graph(definitions: Vec<ExecutableDefinition>) -> StringKeyMap<Node> {
let mut dependency_graph: StringKeyMap<Node> =
HashMap::with_capacity_and_hasher(definitions.len(), Default::default());
for definition in definitions.into_iter() {
let name = match &definition {
ExecutableDefinition::Operation(operation) => operation.name.item,
ExecutableDefinition::Fragment(fragment) => fragment.name.item,
};
// Visit the selections of the IR to build its `children`
let mut children = vec![];
let selections = match &definition {
ExecutableDefinition::Operation(operation) => &operation.selections,
ExecutableDefinition::Fragment(fragment) => &fragment.selections,
};
visit_selections(&mut dependency_graph, selections, name, &mut children);
// Insert or update the representation of the IR in the dependency tree
match dependency_graph.entry(name) {
// Add a new node for current IR to the dependency tree
Entry::Vacant(entry) => {
entry.insert(Node {
ir: Some(definition),
parents: vec![],
children,
});
}
// The node was already created when visiting selections of the IR, but its `ir` and `children` haven't been set
Entry::Occupied(mut entry) => {
let node = entry.get_mut();
if let Some(def) = &node.ir {
panic!(
"Duplicate definition: had {:?} and found another {:?}.",
def, definition
);
}
node.ir = Some(definition);
node.children = children;
}
}
}
dependency_graph
}
// Visit the selections of current IR, set the `children` for the node representing the IR,
// and the `parents` for nodes representing the children IR
fn visit_selections(
dependency_graph: &mut StringKeyMap<Node>,
selections: &[Selection],
parent_name: StringKey,
children: &mut Vec<StringKey>,
) {
for selection in selections {
match selection {
Selection::FragmentSpread(node) => {
let key = node.fragment.item;
match dependency_graph.get_mut(&key) {
None => {
dependency_graph.insert(
key,
Node {
ir: None,
parents: vec![parent_name],
children: vec![],
},
);
}
Some(node) => {
node.parents.push(parent_name);
}
}
children.push(key);
}
Selection::LinkedField(node) => {
visit_selections(dependency_graph, &node.selections, parent_name, children);
}
Selection::InlineFragment(node) => {
visit_selections(dependency_graph, &node.selections, parent_name, children);
}
Selection::ScalarField(_) => {}
Selection::Condition(node) => {
visit_selections(dependency_graph, &node.selections, parent_name, children);
}
}
}
}
// From the `key` of a changed definition, recursively traverse up the dependency tree, and add all related nodes (ancestors
// of changed definitions which are not from base definitions, and all of their descendants) into the `result`
fn add_related_nodes(
visited: &mut StringKeySet,
result: &mut StringKeyMap<ExecutableDefinition>,
dependency_graph: &StringKeyMap<Node>,
base_definition_names: &StringKeySet,
key: StringKey,
) {
if !visited.insert(key) {
return;
}
let parents = match dependency_graph.get(&key) {
None => {
panic!("Fragment {:?} not found in IR.", key);
}
Some(node) => &node.parents,
};
if parents.is_empty() {
if !base_definition_names.contains(&key) {
add_descendants(result, dependency_graph, key);
}
} else {
for parent in parents {
add_related_nodes(
visited,
result,
dependency_graph,
base_definition_names,
*parent,
);
}
}
}
// Recursively add all descendants of current node into the `result`
fn add_descendants(
result: &mut StringKeyMap<ExecutableDefinition>,
dependency_graph: &StringKeyMap<Node>,
key: StringKey,
) {
if result.contains_key(&key) {
return;
}
match dependency_graph.get(&key) {
Some(Node {
ir: Some(def),
children,
..
}) => {
result.insert(key, def.clone());
for child in children {
add_descendants(result, dependency_graph, *child);
}
}
_ => {
panic!("Fragment {:?} not found in IR.", key);
}
}
}
|
use intern::string_key::{StringKey, StringKeyMap, StringKeySet};
|
random_line_split
|
closure.rs
|
build::*;
use trans::callee::{self, ArgVals, Callee, TraitItem, MethodData};
use trans::cleanup::{CleanupMethods, CustomScope, ScopeId};
|
use trans::common::*;
use trans::datum::{self, Datum, rvalue_scratch_datum, Rvalue, ByValue};
use trans::debuginfo::{self, DebugLoc};
use trans::declare;
use trans::expr;
use trans::monomorphize::{self, MonoId};
use trans::type_of::*;
use middle::ty::{self, ClosureTyper};
use middle::subst::Substs;
use session::config::FullDebugInfo;
use util::ppaux::Repr;
use syntax::abi::RustCall;
use syntax::ast;
use syntax::ast_util;
fn load_closure_environment<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
arg_scope_id: ScopeId,
freevars: &[ty::Freevar])
-> Block<'blk, 'tcx>
{
let _icx = push_ctxt("closure::load_closure_environment");
// Special case for small by-value selfs.
let closure_id = ast_util::local_def(bcx.fcx.id);
let self_type = self_type_for_closure(bcx.ccx(), closure_id,
node_id_type(bcx, closure_id.node));
let kind = kind_for_closure(bcx.ccx(), closure_id);
let llenv = if kind == ty::FnOnceClosureKind &&
!arg_is_indirect(bcx.ccx(), self_type) {
let datum = rvalue_scratch_datum(bcx,
self_type,
"closure_env");
store_ty(bcx, bcx.fcx.llenv.unwrap(), datum.val, self_type);
datum.val
} else {
bcx.fcx.llenv.unwrap()
};
// Store the pointer to closure data in an alloca for debug info because that's what the
// llvm.dbg.declare intrinsic expects
let env_pointer_alloca = if bcx.sess().opts.debuginfo == FullDebugInfo {
let alloc = alloca(bcx, val_ty(llenv), "__debuginfo_env_ptr");
Store(bcx, llenv, alloc);
Some(alloc)
} else {
None
};
for (i, freevar) in freevars.iter().enumerate() {
let upvar_id = ty::UpvarId { var_id: freevar.def.local_node_id(),
closure_expr_id: closure_id.node };
let upvar_capture = bcx.tcx().upvar_capture(upvar_id).unwrap();
let mut upvar_ptr = GEPi(bcx, llenv, &[0, i]);
let captured_by_ref = match upvar_capture {
ty::UpvarCapture::ByValue => false,
ty::UpvarCapture::ByRef(..) => {
upvar_ptr = Load(bcx, upvar_ptr);
true
}
};
let def_id = freevar.def.def_id();
bcx.fcx.llupvars.borrow_mut().insert(def_id.node, upvar_ptr);
if kind == ty::FnOnceClosureKind && !captured_by_ref {
bcx.fcx.schedule_drop_mem(arg_scope_id,
upvar_ptr,
node_id_type(bcx, def_id.node))
}
if let Some(env_pointer_alloca) = env_pointer_alloca {
debuginfo::create_captured_var_metadata(
bcx,
def_id.node,
env_pointer_alloca,
i,
captured_by_ref,
freevar.span);
}
}
bcx
}
pub enum ClosureEnv<'a> {
NotClosure,
Closure(&'a [ty::Freevar]),
}
impl<'a> ClosureEnv<'a> {
pub fn load<'blk,'tcx>(self, bcx: Block<'blk, 'tcx>, arg_scope: ScopeId)
-> Block<'blk, 'tcx>
{
match self {
ClosureEnv::NotClosure => bcx,
ClosureEnv::Closure(freevars) => {
if freevars.is_empty() {
bcx
} else {
load_closure_environment(bcx, arg_scope, freevars)
}
}
}
}
}
/// Returns the LLVM function declaration for a closure, creating it if
/// necessary. If the ID does not correspond to a closure ID, returns None.
pub fn get_or_create_declaration_if_closure<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
closure_id: ast::DefId,
substs: &Substs<'tcx>)
-> Option<Datum<'tcx, Rvalue>> {
if !ccx.tcx().closure_kinds.borrow().contains_key(&closure_id) {
// Not a closure.
return None
}
let function_type = ty::node_id_to_type(ccx.tcx(), closure_id.node);
let function_type = monomorphize::apply_param_substs(ccx.tcx(), substs, &function_type);
// Normalize type so differences in regions and typedefs don't cause
// duplicate declarations
let function_type = erase_regions(ccx.tcx(), &function_type);
let params = match function_type.sty {
ty::ty_closure(_, substs) => &substs.types,
_ => unreachable!()
};
let mono_id = MonoId {
def: closure_id,
params: params
};
match ccx.closure_vals().borrow().get(&mono_id) {
Some(&llfn) => {
debug!("get_or_create_declaration_if_closure(): found closure");
return Some(Datum::new(llfn, function_type, Rvalue::new(ByValue)))
}
None => {}
}
let symbol = ccx.tcx().map.with_path(closure_id.node, |path| {
mangle_internal_name_by_path_and_seq(path, "closure")
});
// Currently there’s only a single user of get_or_create_declaration_if_closure and it
// unconditionally defines the function, therefore we use define_* here.
let llfn = declare::define_internal_rust_fn(ccx, &symbol[..], function_type).unwrap_or_else(||{
ccx.sess().bug(&format!("symbol `{}` already defined", symbol));
});
// set an inline hint for all closures
attributes::inline(llfn, attributes::InlineAttr::Hint);
debug!("get_or_create_declaration_if_closure(): inserting new \
closure {:?} (type {})",
mono_id,
ccx.tn().type_to_string(val_ty(llfn)));
ccx.closure_vals().borrow_mut().insert(mono_id, llfn);
Some(Datum::new(llfn, function_type, Rvalue::new(ByValue)))
}
pub enum Dest<'a, 'tcx: 'a> {
SaveIn(Block<'a, 'tcx>, ValueRef),
Ignore(&'a CrateContext<'a, 'tcx>)
}
pub fn trans_closure_expr<'a, 'tcx>(dest: Dest<'a, 'tcx>,
decl: &ast::FnDecl,
body: &ast::Block,
id: ast::NodeId,
param_substs: &'tcx Substs<'tcx>)
-> Option<Block<'a, 'tcx>>
{
let ccx = match dest {
Dest::SaveIn(bcx, _) => bcx.ccx(),
Dest::Ignore(ccx) => ccx
};
let tcx = ccx.tcx();
let _icx = push_ctxt("closure::trans_closure");
debug!("trans_closure()");
let closure_id = ast_util::local_def(id);
let llfn = get_or_create_declaration_if_closure(
ccx,
closure_id,
param_substs).unwrap();
// Get the type of this closure. Use the current `param_substs` as
// the closure substitutions. This makes sense because the closure
// takes the same set of type arguments as the enclosing fn, and
// this function (`trans_closure`) is invoked at the point
// of the closure expression.
let typer = NormalizingClosureTyper::new(tcx);
let function_type = typer.closure_type(closure_id, param_substs);
let freevars: Vec<ty::Freevar> =
ty::with_freevars(tcx, id, |fv| fv.iter().cloned().collect());
let sig = ty::erase_late_bound_regions(tcx, &function_type.sig);
trans_closure(ccx,
decl,
body,
llfn.val,
param_substs,
id,
&[],
sig.output,
function_type.abi,
ClosureEnv::Closure(&freevars[..]));
// Don't hoist this to the top of the function. It's perfectly legitimate
// to have a zero-size closure (in which case dest will be `Ignore`) and
// we must still generate the closure body.
let (mut bcx, dest_addr) = match dest {
Dest::SaveIn(bcx, p) => (bcx, p),
Dest::Ignore(_) => {
debug!("trans_closure() ignoring result");
return None;
}
};
let repr = adt::represent_type(ccx, node_id_type(bcx, id));
// Create the closure.
for (i, freevar) in freevars.iter().enumerate() {
let datum = expr::trans_local_var(bcx, freevar.def);
let upvar_slot_dest = adt::trans_field_ptr(bcx, &*repr, dest_addr, 0, i);
let upvar_id = ty::UpvarId { var_id: freevar.def.local_node_id(),
closure_expr_id: id };
match tcx.upvar_capture(upvar_id).unwrap() {
ty::UpvarCapture::ByValue => {
bcx = datum.store_to(bcx, upvar_slot_dest);
}
ty::UpvarCapture::ByRef(..) => {
Store(bcx, datum.to_llref(), upvar_slot_dest);
}
}
}
adt::trans_set_discr(bcx, &*repr, dest_addr, 0);
Some(bcx)
}
pub fn trans_closure_method<'a, 'tcx>(ccx: &'a CrateContext<'a, 'tcx>,
closure_def_id: ast::DefId,
substs: Substs<'tcx>,
node: ExprOrMethodCall,
param_substs: &'tcx Substs<'tcx>,
trait_closure_kind: ty::ClosureKind)
-> ValueRef
{
// The substitutions should have no type parameters remaining
// after passing through fulfill_obligation
let llfn = callee::trans_fn_ref_with_substs(ccx,
closure_def_id,
node,
param_substs,
substs.clone()).val;
// If the closure is a Fn closure, but a FnOnce is needed (etc),
// then adapt the self type
let closure_kind = ccx.tcx().closure_kind(closure_def_id);
trans_closure_adapter_shim(ccx,
closure_def_id,
substs,
closure_kind,
trait_closure_kind,
llfn)
}
fn trans_closure_adapter_shim<'a, 'tcx>(
ccx: &'a CrateContext<'a, 'tcx>,
closure_def_id: ast::DefId,
substs: Substs<'tcx>,
llfn_closure_kind: ty::ClosureKind,
trait_closure_kind: ty::ClosureKind,
llfn: ValueRef)
-> ValueRef
{
let _icx = push_ctxt("trans_closure_adapter_shim");
let tcx = ccx.tcx();
debug!("trans_closure_adapter_shim(llfn_closure_kind={:?}, \
trait_closure_kind={:?}, \
llfn={})",
llfn_closure_kind,
trait_closure_kind,
ccx.tn().val_to_string(llfn));
match (llfn_closure_kind, trait_closure_kind) {
(ty::FnClosureKind, ty::FnClosureKind) |
(ty::FnMutClosureKind, ty::FnMutClosureKind) |
(ty::FnOnceClosureKind, ty::FnOnceClosureKind) => {
// No adapter needed.
llfn
}
(ty::FnClosureKind, ty::FnMutClosureKind) => {
// The closure fn `llfn` is a `fn(&self,...)`. We want a
// `fn(&mut self,...)`. In fact, at trans time, these are
// basically the same thing, so we can just return llfn.
llfn
}
(ty::FnClosureKind, ty::FnOnceClosureKind) |
(ty::FnMutClosureKind, ty::FnOnceClosureKind) => {
// The closure fn `llfn` is a `fn(&self,...)` or `fn(&mut
// self,...)`. We want a `fn(self,...)`. We can produce
// this by doing something like:
//
// fn call_once(self,...) { call_mut(&self,...) }
// fn call_once(mut self,...) { call_mut(&mut self,...) }
//
// These are both the same at trans time.
trans_fn_once_adapter_shim(ccx, closure_def_id, substs, llfn)
}
_ => {
tcx.sess.bug(&format!("trans_closure_adapter_shim: cannot convert {:?} to {:?}",
llfn_closure_kind,
trait_closure_kind));
}
}
}
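The comment above describes the `fn call_once(self, ...) { call_mut(&self, ...) }` trick in source terms. A minimal, compiler-free sketch of the same adaptation, with a plain integer environment standing in for the closure's captured state (`call_mut_impl` and `call_once_impl` are illustrative names, not real trans symbols):

```rust
// By-ref body: what `Fn`/`FnMut` dispatch would call.
fn call_mut_impl(env: &mut i32, arg: i32) -> i32 {
    *env += arg;
    *env
}

// By-value adapter: take the environment by value, borrow it locally,
// and delegate -- the shape `trans_fn_once_adapter_shim` emits.
fn call_once_impl(mut env: i32, arg: i32) -> i32 {
    call_mut_impl(&mut env, arg)
}
```

Because the environment is consumed by the adapter, the original closure can only be invoked once through this path, matching `FnOnce` semantics.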
fn trans_fn_once_adapter_shim<'a, 'tcx>(
ccx: &'a CrateContext<'a, 'tcx>,
closure_def_id: ast::DefId,
substs: Substs<'tcx>,
llreffn: ValueRef)
-> ValueRef
{
debug!("trans_fn_once_adapter_shim(closure_def_id={}, substs={}, llreffn={})",
closure_def_id.repr(ccx.tcx()),
substs.repr(ccx.tcx()),
ccx.tn().val_to_string(llreffn));
let tcx = ccx.tcx();
let typer = NormalizingClosureTyper::new(tcx);
// Find a version of the closure type. Substitute static for the
// region since it doesn't really matter.
let substs = tcx.mk_substs(substs);
let closure_ty = ty::mk_closure(tcx, closure_def_id, substs);
let ref_closure_ty = ty::mk_imm_rptr(tcx, tcx.mk_region(ty::ReStatic), closure_ty);
// Make a version with the type of by-ref closure.
let ty::ClosureTy { unsafety, abi, mut sig } = typer.closure_type(closure_def_id, substs);
sig.0.inputs.insert(0, ref_closure_ty); // sig has no self type as of yet
let llref_bare_fn_ty = tcx.mk_bare_fn(ty::BareFnTy { unsafety: unsafety,
abi: abi,
sig: sig.clone() });
let llref_fn_ty = ty::mk_bare_fn(tcx, None, llref_bare_fn_ty);
debug!("trans_fn_once_adapter_shim: llref_fn_ty={}",
llref_fn_ty.repr(tcx));
// Make a version of the closure type with the same arguments, but
// with argument #0 being by value.
assert_eq!(abi, RustCall);
sig.0.inputs[0] = closure_ty;
let llonce_bare_fn_ty = tcx.mk_bare_fn(ty::BareFnTy { unsafety: unsafety,
abi: abi,
sig: sig });
let llonce_fn_ty = ty::mk_bare_fn(tcx, None, llonce_bare_fn_ty);
// Create the by-value helper.
let function_name = link::mangle_internal_name_by_type_and_seq(ccx, llonce_fn_ty, "once_shim");
let lloncefn = declare::define_internal_rust_fn(ccx, &function_name[..], llonce_fn_ty)
.unwrap_or_else(||{
ccx.sess().bug(&format!("symbol `{}` already defined", function_name));
});
let sig = ty::erase_late_bound_regions(tcx, &llonce_bare_fn_ty.sig);
let (block_arena, fcx): (TypedArena<_>, FunctionContext);
block_arena = TypedArena::new();
fcx = new_fn_ctxt(ccx,
lloncefn,
ast::DUMMY_NODE_ID,
false,
sig.output,
substs,
None,
&block_arena);
let mut bcx = init_function(&fcx, false, sig.output);
// the first argument (`self`) will be the (by value) closure env.
let self_scope = fcx.push_custom_cleanup_scope();
let self_scope_id = CustomScope(self_scope);
let rvalue_mode = datum::appropriate_rvalue_mode(ccx, closure_ty);
let llself = get_param(lloncefn, fcx.arg_pos(0) as u32);
let env_datum = Datum::new(llself, closure_ty, Rvalue::new(rvalue_mode));
let env_datum = unpack_datum!(bcx,
env_datum.to_lvalue_datum_in_scope(bcx, "self",
self_scope_id));
debug!("trans_fn_once_adapter_shim: env_datum={}",
bcx.val_to_string(env_datum.val));
// the remaining arguments will be packed up in a tuple.
let input_tys = match sig.inputs[1].sty {
ty::ty_tup(ref tys) => &**tys,
_ => bcx.sess().bug(&format!("trans_fn_once_adapter_shim: not rust-call! \
closure_def_id={}",
closure_def_id.repr(tcx)))
};
let llargs: Vec<_> =
input_tys.iter()
.enumerate()
.map(|(i, _)| get_param(lloncefn, fcx.arg_pos(i+1)
|
random_line_split
|
|
closure.rs
|
::*;
use trans::callee::{self, ArgVals, Callee, TraitItem, MethodData};
use trans::cleanup::{CleanupMethods, CustomScope, ScopeId};
use trans::common::*;
use trans::datum::{self, Datum, rvalue_scratch_datum, Rvalue, ByValue};
use trans::debuginfo::{self, DebugLoc};
use trans::declare;
use trans::expr;
use trans::monomorphize::{self, MonoId};
use trans::type_of::*;
use middle::ty::{self, ClosureTyper};
use middle::subst::Substs;
use session::config::FullDebugInfo;
use util::ppaux::Repr;
use syntax::abi::RustCall;
use syntax::ast;
use syntax::ast_util;
fn load_closure_environment<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
arg_scope_id: ScopeId,
freevars: &[ty::Freevar])
-> Block<'blk, 'tcx>
{
let _icx = push_ctxt("closure::load_closure_environment");
// Special case for small by-value selfs.
let closure_id = ast_util::local_def(bcx.fcx.id);
let self_type = self_type_for_closure(bcx.ccx(), closure_id,
node_id_type(bcx, closure_id.node));
let kind = kind_for_closure(bcx.ccx(), closure_id);
let llenv = if kind == ty::FnOnceClosureKind &&
!arg_is_indirect(bcx.ccx(), self_type) {
let datum = rvalue_scratch_datum(bcx,
self_type,
"closure_env");
store_ty(bcx, bcx.fcx.llenv.unwrap(), datum.val, self_type);
datum.val
} else {
bcx.fcx.llenv.unwrap()
};
// Store the pointer to closure data in an alloca for debug info because that's what the
// llvm.dbg.declare intrinsic expects
let env_pointer_alloca = if bcx.sess().opts.debuginfo == FullDebugInfo {
let alloc = alloca(bcx, val_ty(llenv), "__debuginfo_env_ptr");
Store(bcx, llenv, alloc);
Some(alloc)
} else {
None
};
for (i, freevar) in freevars.iter().enumerate() {
let upvar_id = ty::UpvarId { var_id: freevar.def.local_node_id(),
closure_expr_id: closure_id.node };
let upvar_capture = bcx.tcx().upvar_capture(upvar_id).unwrap();
let mut upvar_ptr = GEPi(bcx, llenv, &[0, i]);
let captured_by_ref = match upvar_capture {
ty::UpvarCapture::ByValue => false,
ty::UpvarCapture::ByRef(..) => {
upvar_ptr = Load(bcx, upvar_ptr);
true
}
};
let def_id = freevar.def.def_id();
bcx.fcx.llupvars.borrow_mut().insert(def_id.node, upvar_ptr);
        if kind == ty::FnOnceClosureKind && !captured_by_ref {
bcx.fcx.schedule_drop_mem(arg_scope_id,
upvar_ptr,
node_id_type(bcx, def_id.node))
}
if let Some(env_pointer_alloca) = env_pointer_alloca {
debuginfo::create_captured_var_metadata(
bcx,
def_id.node,
env_pointer_alloca,
i,
captured_by_ref,
freevar.span);
}
}
bcx
}
pub enum ClosureEnv<'a> {
NotClosure,
Closure(&'a [ty::Freevar]),
}
impl<'a> ClosureEnv<'a> {
pub fn
|
<'blk,'tcx>(self, bcx: Block<'blk, 'tcx>, arg_scope: ScopeId)
-> Block<'blk, 'tcx>
{
match self {
ClosureEnv::NotClosure => bcx,
ClosureEnv::Closure(freevars) => {
if freevars.is_empty() {
bcx
} else {
load_closure_environment(bcx, arg_scope, freevars)
}
}
}
}
}
/// Returns the LLVM function declaration for a closure, creating it if
/// necessary. If the ID does not correspond to a closure ID, returns None.
pub fn get_or_create_declaration_if_closure<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>,
closure_id: ast::DefId,
substs: &Substs<'tcx>)
-> Option<Datum<'tcx, Rvalue>> {
    if !ccx.tcx().closure_kinds.borrow().contains_key(&closure_id) {
// Not a closure.
return None
}
let function_type = ty::node_id_to_type(ccx.tcx(), closure_id.node);
let function_type = monomorphize::apply_param_substs(ccx.tcx(), substs, &function_type);
// Normalize type so differences in regions and typedefs don't cause
// duplicate declarations
let function_type = erase_regions(ccx.tcx(), &function_type);
let params = match function_type.sty {
ty::ty_closure(_, substs) => &substs.types,
_ => unreachable!()
};
let mono_id = MonoId {
def: closure_id,
params: params
};
match ccx.closure_vals().borrow().get(&mono_id) {
Some(&llfn) => {
debug!("get_or_create_declaration_if_closure(): found closure");
return Some(Datum::new(llfn, function_type, Rvalue::new(ByValue)))
}
None => {}
}
let symbol = ccx.tcx().map.with_path(closure_id.node, |path| {
mangle_internal_name_by_path_and_seq(path, "closure")
});
// Currently there’s only a single user of get_or_create_declaration_if_closure and it
// unconditionally defines the function, therefore we use define_* here.
let llfn = declare::define_internal_rust_fn(ccx, &symbol[..], function_type).unwrap_or_else(||{
ccx.sess().bug(&format!("symbol `{}` already defined", symbol));
});
// set an inline hint for all closures
attributes::inline(llfn, attributes::InlineAttr::Hint);
debug!("get_or_create_declaration_if_closure(): inserting new \
closure {:?} (type {})",
mono_id,
ccx.tn().type_to_string(val_ty(llfn)));
ccx.closure_vals().borrow_mut().insert(mono_id, llfn);
Some(Datum::new(llfn, function_type, Rvalue::new(ByValue)))
}
pub enum Dest<'a, 'tcx: 'a> {
SaveIn(Block<'a, 'tcx>, ValueRef),
Ignore(&'a CrateContext<'a, 'tcx>)
}
pub fn trans_closure_expr<'a, 'tcx>(dest: Dest<'a, 'tcx>,
decl: &ast::FnDecl,
body: &ast::Block,
id: ast::NodeId,
param_substs: &'tcx Substs<'tcx>)
-> Option<Block<'a, 'tcx>>
{
let ccx = match dest {
Dest::SaveIn(bcx, _) => bcx.ccx(),
Dest::Ignore(ccx) => ccx
};
let tcx = ccx.tcx();
let _icx = push_ctxt("closure::trans_closure");
debug!("trans_closure()");
let closure_id = ast_util::local_def(id);
let llfn = get_or_create_declaration_if_closure(
ccx,
closure_id,
param_substs).unwrap();
// Get the type of this closure. Use the current `param_substs` as
// the closure substitutions. This makes sense because the closure
// takes the same set of type arguments as the enclosing fn, and
// this function (`trans_closure`) is invoked at the point
// of the closure expression.
let typer = NormalizingClosureTyper::new(tcx);
let function_type = typer.closure_type(closure_id, param_substs);
let freevars: Vec<ty::Freevar> =
ty::with_freevars(tcx, id, |fv| fv.iter().cloned().collect());
let sig = ty::erase_late_bound_regions(tcx, &function_type.sig);
trans_closure(ccx,
decl,
body,
llfn.val,
param_substs,
id,
&[],
sig.output,
function_type.abi,
ClosureEnv::Closure(&freevars[..]));
// Don't hoist this to the top of the function. It's perfectly legitimate
// to have a zero-size closure (in which case dest will be `Ignore`) and
// we must still generate the closure body.
let (mut bcx, dest_addr) = match dest {
Dest::SaveIn(bcx, p) => (bcx, p),
Dest::Ignore(_) => {
debug!("trans_closure() ignoring result");
return None;
}
};
let repr = adt::represent_type(ccx, node_id_type(bcx, id));
// Create the closure.
for (i, freevar) in freevars.iter().enumerate() {
let datum = expr::trans_local_var(bcx, freevar.def);
let upvar_slot_dest = adt::trans_field_ptr(bcx, &*repr, dest_addr, 0, i);
let upvar_id = ty::UpvarId { var_id: freevar.def.local_node_id(),
closure_expr_id: id };
match tcx.upvar_capture(upvar_id).unwrap() {
ty::UpvarCapture::ByValue => {
bcx = datum.store_to(bcx, upvar_slot_dest);
}
ty::UpvarCapture::ByRef(..) => {
Store(bcx, datum.to_llref(), upvar_slot_dest);
}
}
}
adt::trans_set_discr(bcx, &*repr, dest_addr, 0);
Some(bcx)
}
pub fn trans_closure_method<'a, 'tcx>(ccx: &'a CrateContext<'a, 'tcx>,
closure_def_id: ast::DefId,
substs: Substs<'tcx>,
node: ExprOrMethodCall,
param_substs: &'tcx Substs<'tcx>,
trait_closure_kind: ty::ClosureKind)
-> ValueRef
{
// The substitutions should have no type parameters remaining
// after passing through fulfill_obligation
let llfn = callee::trans_fn_ref_with_substs(ccx,
closure_def_id,
node,
param_substs,
substs.clone()).val;
// If the closure is a Fn closure, but a FnOnce is needed (etc),
// then adapt the self type
let closure_kind = ccx.tcx().closure_kind(closure_def_id);
trans_closure_adapter_shim(ccx,
closure_def_id,
substs,
closure_kind,
trait_closure_kind,
llfn)
}
fn trans_closure_adapter_shim<'a, 'tcx>(
ccx: &'a CrateContext<'a, 'tcx>,
closure_def_id: ast::DefId,
substs: Substs<'tcx>,
llfn_closure_kind: ty::ClosureKind,
trait_closure_kind: ty::ClosureKind,
llfn: ValueRef)
-> ValueRef
{
let _icx = push_ctxt("trans_closure_adapter_shim");
let tcx = ccx.tcx();
debug!("trans_closure_adapter_shim(llfn_closure_kind={:?}, \
trait_closure_kind={:?}, \
llfn={})",
llfn_closure_kind,
trait_closure_kind,
ccx.tn().val_to_string(llfn));
match (llfn_closure_kind, trait_closure_kind) {
(ty::FnClosureKind, ty::FnClosureKind) |
(ty::FnMutClosureKind, ty::FnMutClosureKind) |
(ty::FnOnceClosureKind, ty::FnOnceClosureKind) => {
// No adapter needed.
llfn
}
(ty::FnClosureKind, ty::FnMutClosureKind) => {
// The closure fn `llfn` is a `fn(&self,...)`. We want a
// `fn(&mut self,...)`. In fact, at trans time, these are
// basically the same thing, so we can just return llfn.
llfn
}
(ty::FnClosureKind, ty::FnOnceClosureKind) |
(ty::FnMutClosureKind, ty::FnOnceClosureKind) => {
// The closure fn `llfn` is a `fn(&self,...)` or `fn(&mut
// self,...)`. We want a `fn(self,...)`. We can produce
// this by doing something like:
//
// fn call_once(self,...) { call_mut(&self,...) }
// fn call_once(mut self,...) { call_mut(&mut self,...) }
//
// These are both the same at trans time.
trans_fn_once_adapter_shim(ccx, closure_def_id, substs, llfn)
}
_ => {
tcx.sess.bug(&format!("trans_closure_adapter_shim: cannot convert {:?} to {:?}",
llfn_closure_kind,
trait_closure_kind));
}
}
}
fn trans_fn_once_adapter_shim<'a, 'tcx>(
ccx: &'a CrateContext<'a, 'tcx>,
closure_def_id: ast::DefId,
substs: Substs<'tcx>,
llreffn: ValueRef)
-> ValueRef
{
debug!("trans_fn_once_adapter_shim(closure_def_id={}, substs={}, llreffn={})",
closure_def_id.repr(ccx.tcx()),
substs.repr(ccx.tcx()),
ccx.tn().val_to_string(llreffn));
let tcx = ccx.tcx();
let typer = NormalizingClosureTyper::new(tcx);
// Find a version of the closure type. Substitute static for the
// region since it doesn't really matter.
let substs = tcx.mk_substs(substs);
let closure_ty = ty::mk_closure(tcx, closure_def_id, substs);
let ref_closure_ty = ty::mk_imm_rptr(tcx, tcx.mk_region(ty::ReStatic), closure_ty);
// Make a version with the type of by-ref closure.
let ty::ClosureTy { unsafety, abi, mut sig } = typer.closure_type(closure_def_id, substs);
sig.0.inputs.insert(0, ref_closure_ty); // sig has no self type as of yet
let llref_bare_fn_ty = tcx.mk_bare_fn(ty::BareFnTy { unsafety: unsafety,
abi: abi,
sig: sig.clone() });
let llref_fn_ty = ty::mk_bare_fn(tcx, None, llref_bare_fn_ty);
debug!("trans_fn_once_adapter_shim: llref_fn_ty={}",
llref_fn_ty.repr(tcx));
// Make a version of the closure type with the same arguments, but
// with argument #0 being by value.
assert_eq!(abi, RustCall);
sig.0.inputs[0] = closure_ty;
let llonce_bare_fn_ty = tcx.mk_bare_fn(ty::BareFnTy { unsafety: unsafety,
abi: abi,
sig: sig });
let llonce_fn_ty = ty::mk_bare_fn(tcx, None, llonce_bare_fn_ty);
// Create the by-value helper.
let function_name = link::mangle_internal_name_by_type_and_seq(ccx, llonce_fn_ty, "once_shim");
let lloncefn = declare::define_internal_rust_fn(ccx, &function_name[..], llonce_fn_ty)
.unwrap_or_else(||{
ccx.sess().bug(&format!("symbol `{}` already defined", function_name));
});
let sig = ty::erase_late_bound_regions(tcx, &llonce_bare_fn_ty.sig);
let (block_arena, fcx): (TypedArena<_>, FunctionContext);
block_arena = TypedArena::new();
fcx = new_fn_ctxt(ccx,
lloncefn,
ast::DUMMY_NODE_ID,
false,
sig.output,
substs,
None,
&block_arena);
let mut bcx = init_function(&fcx, false, sig.output);
// the first argument (`self`) will be the (by value) closure env.
let self_scope = fcx.push_custom_cleanup_scope();
let self_scope_id = CustomScope(self_scope);
let rvalue_mode = datum::appropriate_rvalue_mode(ccx, closure_ty);
let llself = get_param(lloncefn, fcx.arg_pos(0) as u32);
let env_datum = Datum::new(llself, closure_ty, Rvalue::new(rvalue_mode));
let env_datum = unpack_datum!(bcx,
env_datum.to_lvalue_datum_in_scope(bcx, "self",
self_scope_id));
debug!("trans_fn_once_adapter_shim: env_datum={}",
bcx.val_to_string(env_datum.val));
// the remaining arguments will be packed up in a tuple.
let input_tys = match sig.inputs[1].sty {
ty::ty_tup(ref tys) => &**tys,
_ => bcx.sess().bug(&format!("trans_fn_once_adapter_shim: not rust-call! \
closure_def_id={}",
closure_def_id.repr(tcx)))
};
let llargs: Vec<_> =
input_tys.iter()
.enumerate()
.map(|(i, _)| get_param(lloncefn, fcx.arg_pos(i+1
|
load
|
identifier_name
|
pixel_buffer.rs
|
/*!
Pixel buffers are buffers that contain two-dimensional texture data.
Contrary to textures, pixel buffers are stored in a client-defined format. They are used
to transfer data to or from the video memory, before or after being turned into a texture.
*/
use std::borrow::Cow;
use std::cell::Cell;
use std::ops::{Deref, DerefMut};
use backend::Facade;
use GlObject;
use BufferViewExt;
use buffer::{ReadError, BufferView, BufferType};
use gl;
use texture::PixelValue;
use texture::Texture2dDataSink;
/// Buffer that stores the content of a texture.
///
/// The generic type represents the type of pixels that the buffer contains.
pub struct PixelBuffer<T> where T: PixelValue {
buffer: BufferView<[T]>,
dimensions: Cell<Option<(u32, u32)>>,
}
impl<T> PixelBuffer<T> where T: PixelValue {
/// Builds a new buffer with an uninitialized content.
pub fn new_empty<F>(facade: &F, capacity: usize) -> PixelBuffer<T> where F: Facade {
PixelBuffer {
buffer: BufferView::empty_array(facade, BufferType::PixelPackBuffer, capacity,
false).unwrap(),
dimensions: Cell::new(None),
}
}
/// Reads the content of the pixel buffer.
pub fn read_as_texture_2d<S>(&self) -> Result<S, ReadError> where S: Texture2dDataSink<T> {
let dimensions = self.dimensions.get().expect("The pixel buffer is empty");
let data = try!(self.read());
Ok(S::from_raw(Cow::Owned(data), dimensions.0, dimensions.1))
}
}
impl<T> Deref for PixelBuffer<T> where T: PixelValue {
type Target = BufferView<[T]>;
fn deref(&self) -> &BufferView<[T]> {
&self.buffer
}
}
impl<T> DerefMut for PixelBuffer<T> where T: PixelValue {
fn deref_mut(&mut self) -> &mut BufferView<[T]> {
&mut self.buffer
}
}
// TODO: rework this
impl<T> GlObject for PixelBuffer<T> where T: PixelValue {
type Id = gl::types::GLuint;
fn get_id(&self) -> gl::types::GLuint
|
}
// TODO: remove this hack
#[doc(hidden)]
pub fn store_infos<T>(b: &PixelBuffer<T>, dimensions: (u32, u32)) where T: PixelValue {
b.dimensions.set(Some(dimensions));
}
|
{
self.buffer.get_buffer_id()
}
|
identifier_body
|
pixel_buffer.rs
|
/*!
Pixel buffers are buffers that contain two-dimensional texture data.
Contrary to textures, pixel buffers are stored in a client-defined format. They are used
to transfer data to or from the video memory, before or after being turned into a texture.
*/
use std::borrow::Cow;
use std::cell::Cell;
use std::ops::{Deref, DerefMut};
use backend::Facade;
use GlObject;
use BufferViewExt;
use buffer::{ReadError, BufferView, BufferType};
use gl;
use texture::PixelValue;
use texture::Texture2dDataSink;
/// Buffer that stores the content of a texture.
///
/// The generic type represents the type of pixels that the buffer contains.
pub struct PixelBuffer<T> where T: PixelValue {
buffer: BufferView<[T]>,
dimensions: Cell<Option<(u32, u32)>>,
}
impl<T> PixelBuffer<T> where T: PixelValue {
/// Builds a new buffer with an uninitialized content.
pub fn new_empty<F>(facade: &F, capacity: usize) -> PixelBuffer<T> where F: Facade {
PixelBuffer {
buffer: BufferView::empty_array(facade, BufferType::PixelPackBuffer, capacity,
false).unwrap(),
dimensions: Cell::new(None),
}
}
/// Reads the content of the pixel buffer.
pub fn read_as_texture_2d<S>(&self) -> Result<S, ReadError> where S: Texture2dDataSink<T> {
let dimensions = self.dimensions.get().expect("The pixel buffer is empty");
let data = try!(self.read());
Ok(S::from_raw(Cow::Owned(data), dimensions.0, dimensions.1))
}
}
impl<T> Deref for PixelBuffer<T> where T: PixelValue {
type Target = BufferView<[T]>;
fn deref(&self) -> &BufferView<[T]> {
&self.buffer
}
}
impl<T> DerefMut for PixelBuffer<T> where T: PixelValue {
fn
|
(&mut self) -> &mut BufferView<[T]> {
&mut self.buffer
}
}
// TODO: rework this
impl<T> GlObject for PixelBuffer<T> where T: PixelValue {
type Id = gl::types::GLuint;
fn get_id(&self) -> gl::types::GLuint {
self.buffer.get_buffer_id()
}
}
// TODO: remove this hack
#[doc(hidden)]
pub fn store_infos<T>(b: &PixelBuffer<T>, dimensions: (u32, u32)) where T: PixelValue {
b.dimensions.set(Some(dimensions));
}
|
deref_mut
|
identifier_name
|
pixel_buffer.rs
|
/*!
Pixel buffers are buffers that contain two-dimensional texture data.
Contrary to textures, pixel buffers are stored in a client-defined format. They are used
to transfer data to or from the video memory, before or after being turned into a texture.
*/
use std::borrow::Cow;
use std::cell::Cell;
use std::ops::{Deref, DerefMut};
use backend::Facade;
use GlObject;
use BufferViewExt;
use buffer::{ReadError, BufferView, BufferType};
use gl;
use texture::PixelValue;
use texture::Texture2dDataSink;
|
pub struct PixelBuffer<T> where T: PixelValue {
buffer: BufferView<[T]>,
dimensions: Cell<Option<(u32, u32)>>,
}
impl<T> PixelBuffer<T> where T: PixelValue {
/// Builds a new buffer with an uninitialized content.
pub fn new_empty<F>(facade: &F, capacity: usize) -> PixelBuffer<T> where F: Facade {
PixelBuffer {
buffer: BufferView::empty_array(facade, BufferType::PixelPackBuffer, capacity,
false).unwrap(),
dimensions: Cell::new(None),
}
}
/// Reads the content of the pixel buffer.
pub fn read_as_texture_2d<S>(&self) -> Result<S, ReadError> where S: Texture2dDataSink<T> {
let dimensions = self.dimensions.get().expect("The pixel buffer is empty");
let data = try!(self.read());
Ok(S::from_raw(Cow::Owned(data), dimensions.0, dimensions.1))
}
}
impl<T> Deref for PixelBuffer<T> where T: PixelValue {
type Target = BufferView<[T]>;
fn deref(&self) -> &BufferView<[T]> {
&self.buffer
}
}
impl<T> DerefMut for PixelBuffer<T> where T: PixelValue {
fn deref_mut(&mut self) -> &mut BufferView<[T]> {
&mut self.buffer
}
}
// TODO: rework this
impl<T> GlObject for PixelBuffer<T> where T: PixelValue {
type Id = gl::types::GLuint;
fn get_id(&self) -> gl::types::GLuint {
self.buffer.get_buffer_id()
}
}
// TODO: remove this hack
#[doc(hidden)]
pub fn store_infos<T>(b: &PixelBuffer<T>, dimensions: (u32, u32)) where T: PixelValue {
b.dimensions.set(Some(dimensions));
}
|
/// Buffer that stores the content of a texture.
///
/// The generic type represents the type of pixels that the buffer contains.
|
random_line_split
|
vtable_writer.rs
|
/*
|
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::ptr::write_bytes;
use crate::endian_scalar::{emplace_scalar, read_scalar_at};
use crate::primitives::*;
/// VTableWriter compartmentalizes actions needed to create a vtable.
#[derive(Debug)]
pub struct VTableWriter<'a> {
buf: &'a mut [u8],
}
impl<'a> VTableWriter<'a> {
#[inline(always)]
pub fn init(buf: &'a mut [u8]) -> Self {
VTableWriter { buf }
}
/// Writes the vtable length (in bytes) into the vtable.
///
/// Note that callers already need to have computed this to initialize
/// a VTableWriter.
///
/// In debug mode, asserts that the length of the underlying data is equal
/// to the provided value.
#[inline(always)]
pub fn write_vtable_byte_length(&mut self, n: VOffsetT) {
emplace_scalar::<VOffsetT>(&mut self.buf[..SIZE_VOFFSET], n);
debug_assert_eq!(n as usize, self.buf.len());
}
/// Writes an object length (in bytes) into the vtable.
#[inline(always)]
pub fn write_object_inline_size(&mut self, n: VOffsetT) {
emplace_scalar::<VOffsetT>(&mut self.buf[SIZE_VOFFSET..2 * SIZE_VOFFSET], n);
}
/// Gets an object field offset from the vtable. Only used for debugging.
///
/// Note that this expects field offsets (which are like pointers), not
/// field ids (which are like array indices).
#[inline(always)]
pub fn get_field_offset(&self, vtable_offset: VOffsetT) -> VOffsetT {
let idx = vtable_offset as usize;
read_scalar_at::<VOffsetT>(&self.buf, idx)
}
/// Writes an object field offset into the vtable.
///
/// Note that this expects field offsets (which are like pointers), not
/// field ids (which are like array indices).
#[inline(always)]
pub fn write_field_offset(&mut self, vtable_offset: VOffsetT, object_data_offset: VOffsetT) {
let idx = vtable_offset as usize;
emplace_scalar::<VOffsetT>(&mut self.buf[idx..idx + SIZE_VOFFSET], object_data_offset);
}
/// Clears all data in this VTableWriter. Used to cleanly undo a
/// vtable write.
#[inline(always)]
pub fn clear(&mut self) {
// This is the closest thing to memset in Rust right now.
let len = self.buf.len();
let p = self.buf.as_mut_ptr() as *mut u8;
unsafe {
write_bytes(p, 0, len);
}
}
}
|
* Copyright 2018 Google Inc. All rights reserved.
|
random_line_split
|
vtable_writer.rs
|
/*
* Copyright 2018 Google Inc. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::ptr::write_bytes;
use crate::endian_scalar::{emplace_scalar, read_scalar_at};
use crate::primitives::*;
/// VTableWriter compartmentalizes actions needed to create a vtable.
#[derive(Debug)]
pub struct VTableWriter<'a> {
buf: &'a mut [u8],
}
impl<'a> VTableWriter<'a> {
#[inline(always)]
pub fn init(buf: &'a mut [u8]) -> Self {
VTableWriter { buf }
}
/// Writes the vtable length (in bytes) into the vtable.
///
/// Note that callers already need to have computed this to initialize
/// a VTableWriter.
///
/// In debug mode, asserts that the length of the underlying data is equal
/// to the provided value.
#[inline(always)]
pub fn write_vtable_byte_length(&mut self, n: VOffsetT) {
emplace_scalar::<VOffsetT>(&mut self.buf[..SIZE_VOFFSET], n);
debug_assert_eq!(n as usize, self.buf.len());
}
/// Writes an object length (in bytes) into the vtable.
#[inline(always)]
pub fn write_object_inline_size(&mut self, n: VOffsetT)
|
/// Gets an object field offset from the vtable. Only used for debugging.
///
/// Note that this expects field offsets (which are like pointers), not
/// field ids (which are like array indices).
#[inline(always)]
pub fn get_field_offset(&self, vtable_offset: VOffsetT) -> VOffsetT {
let idx = vtable_offset as usize;
read_scalar_at::<VOffsetT>(&self.buf, idx)
}
/// Writes an object field offset into the vtable.
///
/// Note that this expects field offsets (which are like pointers), not
/// field ids (which are like array indices).
#[inline(always)]
pub fn write_field_offset(&mut self, vtable_offset: VOffsetT, object_data_offset: VOffsetT) {
let idx = vtable_offset as usize;
emplace_scalar::<VOffsetT>(&mut self.buf[idx..idx + SIZE_VOFFSET], object_data_offset);
}
/// Clears all data in this VTableWriter. Used to cleanly undo a
/// vtable write.
#[inline(always)]
pub fn clear(&mut self) {
// This is the closest thing to memset in Rust right now.
let len = self.buf.len();
let p = self.buf.as_mut_ptr() as *mut u8;
unsafe {
write_bytes(p, 0, len);
}
}
}
|
{
emplace_scalar::<VOffsetT>(&mut self.buf[SIZE_VOFFSET..2 * SIZE_VOFFSET], n);
}
|
identifier_body
|
vtable_writer.rs
|
/*
* Copyright 2018 Google Inc. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::ptr::write_bytes;
use crate::endian_scalar::{emplace_scalar, read_scalar_at};
use crate::primitives::*;
/// VTableWriter compartmentalizes actions needed to create a vtable.
#[derive(Debug)]
pub struct VTableWriter<'a> {
buf: &'a mut [u8],
}
impl<'a> VTableWriter<'a> {
#[inline(always)]
pub fn init(buf: &'a mut [u8]) -> Self {
VTableWriter { buf }
}
/// Writes the vtable length (in bytes) into the vtable.
///
/// Note that callers already need to have computed this to initialize
/// a VTableWriter.
///
/// In debug mode, asserts that the length of the underlying data is equal
/// to the provided value.
#[inline(always)]
pub fn write_vtable_byte_length(&mut self, n: VOffsetT) {
emplace_scalar::<VOffsetT>(&mut self.buf[..SIZE_VOFFSET], n);
debug_assert_eq!(n as usize, self.buf.len());
}
/// Writes an object length (in bytes) into the vtable.
#[inline(always)]
pub fn
|
(&mut self, n: VOffsetT) {
emplace_scalar::<VOffsetT>(&mut self.buf[SIZE_VOFFSET..2 * SIZE_VOFFSET], n);
}
/// Gets an object field offset from the vtable. Only used for debugging.
///
/// Note that this expects field offsets (which are like pointers), not
/// field ids (which are like array indices).
#[inline(always)]
pub fn get_field_offset(&self, vtable_offset: VOffsetT) -> VOffsetT {
let idx = vtable_offset as usize;
read_scalar_at::<VOffsetT>(&self.buf, idx)
}
/// Writes an object field offset into the vtable.
///
/// Note that this expects field offsets (which are like pointers), not
/// field ids (which are like array indices).
#[inline(always)]
pub fn write_field_offset(&mut self, vtable_offset: VOffsetT, object_data_offset: VOffsetT) {
let idx = vtable_offset as usize;
emplace_scalar::<VOffsetT>(&mut self.buf[idx..idx + SIZE_VOFFSET], object_data_offset);
}
/// Clears all data in this VTableWriter. Used to cleanly undo a
/// vtable write.
#[inline(always)]
pub fn clear(&mut self) {
// This is the closest thing to memset in Rust right now.
let len = self.buf.len();
let p = self.buf.as_mut_ptr() as *mut u8;
unsafe {
write_bytes(p, 0, len);
}
}
}
|
write_object_inline_size
|
identifier_name
|
LPS.rs
|
pub fn longest_palindrome(s: String) -> String {
let mut s = s.as_bytes().to_vec();
let mut d = s.clone();
d.reverse();
let mut largest_len = 0;
let mut result = String::new();
loop {
if s.len() == 0 || s.len() < largest_len {
break;
}
let (a, b) = inner_loop(&s, &d);
if a > largest_len {
largest_len = a;
result = b
}
s = s.drain(1..).collect();
d.pop();
}
result
}
/// find palindrome, return length, start index
fn inner_loop(s: &Vec<u8>, d: &Vec<u8>) -> (usize, String) {
//println!("here: {:?}, {:?}", s, d);
let mut length = s.len();
let mut offset = 0;
let mut flag = true;
loop {
flag = true;
if length == 0 {
break;
}
for ind in 0..(length / 2 + 1) {
            if s[ind] != d[ind + offset] {
flag = false;
break;
}
}
if flag {
return (length, String::from_utf8(s[0..length].to_vec()).unwrap());
}
length -= 1;
offset += 1;
}
(0, String::new())
}
/////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////
pub fn longest_palindrome2(s: String) -> String {
let ss = s.as_bytes();
let length = s.len();
if length <= 1 {
return s;
}
let (mut record_start, mut largest_size) = (0, 0);
for start in 0..length - 1 {
if length - start < largest_size {
break;
}
for end in (start + 1..length).rev() {
if end - start + 1 < largest_size
|
let (b, size) = inner_loop2(ss, start, end);
if b {
if size.unwrap() > largest_size {
largest_size = size.unwrap();
record_start = start;
}
}
}
}
//println!("{}, {}", record_start, largest_size);
String::from_utf8(ss[record_start..record_start + largest_size + 1].to_vec()).unwrap()
}
pub fn inner_loop2(s: &[u8], start: usize, end: usize) -> (bool, Option<usize>) {
//println!("here {},{}", start, end);
if end - start == 0 {
return (true, Some(end - start));
}
for i in 0..(end - start) {
        if s[start + i] != s[end - i] {
return (false, None);
}
}
(true, Some(end - start))
}
fn main() {
assert_eq!("bab", longest_palindrome("babad".to_string()));
assert_eq!("bb", longest_palindrome("cbbd".to_string()));
assert_eq!("", longest_palindrome("".to_string()));
assert_eq!("bab", longest_palindrome2("babad".to_string()));
assert_eq!("bb", longest_palindrome2("cbbd".to_string()));
assert_eq!("", longest_palindrome2("".to_string()));
}
|
{
break;
}
|
conditional_block
|
LPS.rs
|
pub fn longest_palindrome(s: String) -> String
|
}
result
}
/// find palindrome, return length, start index
fn inner_loop(s: &Vec<u8>, d: &Vec<u8>) -> (usize, String) {
//println!("here: {:?}, {:?}", s, d);
let mut length = s.len();
let mut offset = 0;
let mut flag = true;
loop {
flag = true;
if length == 0 {
break;
}
for ind in 0..(length / 2 + 1) {
            if s[ind] != d[ind + offset] {
flag = false;
break;
}
}
if flag {
return (length, String::from_utf8(s[0..length].to_vec()).unwrap());
}
length -= 1;
offset += 1;
}
(0, String::new())
}
/////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////
pub fn longest_palindrome2(s: String) -> String {
let ss = s.as_bytes();
let length = s.len();
if length <= 1 {
return s;
}
let (mut record_start, mut largest_size) = (0, 0);
for start in 0..length - 1 {
if length - start < largest_size {
break;
}
for end in (start + 1..length).rev() {
if end - start + 1 < largest_size {
break;
}
let (b, size) = inner_loop2(ss, start, end);
if b {
if size.unwrap() > largest_size {
largest_size = size.unwrap();
record_start = start;
}
}
}
}
//println!("{}, {}", record_start, largest_size);
String::from_utf8(ss[record_start..record_start + largest_size + 1].to_vec()).unwrap()
}
pub fn inner_loop2(s: &[u8], start: usize, end: usize) -> (bool, Option<usize>) {
//println!("here {},{}", start, end);
if end - start == 0 {
return (true, Some(end - start));
}
for i in 0..(end - start) {
        if s[start + i] != s[end - i] {
return (false, None);
}
}
(true, Some(end - start))
}
fn main() {
assert_eq!("bab", longest_palindrome("babad".to_string()));
assert_eq!("bb", longest_palindrome("cbbd".to_string()));
assert_eq!("", longest_palindrome("".to_string()));
assert_eq!("bab", longest_palindrome2("babad".to_string()));
assert_eq!("bb", longest_palindrome2("cbbd".to_string()));
assert_eq!("", longest_palindrome2("".to_string()));
}
|
{
let mut s = s.as_bytes().to_vec();
let mut d = s.clone();
d.reverse();
let mut largest_len = 0;
let mut result = String::new();
loop {
if s.len() == 0 || s.len() < largest_len {
break;
}
let (a, b) = inner_loop(&s, &d);
if a > largest_len {
largest_len = a;
result = b
}
s = s.drain(1..).collect();
d.pop();
|
identifier_body
|
LPS.rs
|
pub fn
|
(s: String) -> String {
let mut s = s.as_bytes().to_vec();
let mut d = s.clone();
d.reverse();
let mut largest_len = 0;
let mut result = String::new();
loop {
if s.len() == 0 || s.len() < largest_len {
break;
}
let (a, b) = inner_loop(&s, &d);
if a > largest_len {
largest_len = a;
result = b
}
s = s.drain(1..).collect();
d.pop();
}
result
}
/// find palindrome, return length, start index
fn inner_loop(s: &Vec<u8>, d: &Vec<u8>) -> (usize, String) {
//println!("here: {:?}, {:?}", s, d);
let mut length = s.len();
let mut offset = 0;
let mut flag = true;
loop {
flag = true;
if length == 0 {
break;
}
for ind in 0..(length / 2 + 1) {
            if s[ind] != d[ind + offset] {
flag = false;
break;
}
}
if flag {
return (length, String::from_utf8(s[0..length].to_vec()).unwrap());
}
length -= 1;
offset += 1;
}
(0, String::new())
}
/////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////
pub fn longest_palindrome2(s: String) -> String {
let ss = s.as_bytes();
let length = s.len();
if length <= 1 {
return s;
}
let (mut record_start, mut largest_size) = (0, 0);
for start in 0..length - 1 {
if length - start < largest_size {
break;
}
for end in (start + 1..length).rev() {
if end - start + 1 < largest_size {
break;
}
let (b, size) = inner_loop2(ss, start, end);
if b {
if size.unwrap() > largest_size {
largest_size = size.unwrap();
record_start = start;
}
}
}
}
//println!("{}, {}", record_start, largest_size);
String::from_utf8(ss[record_start..record_start + largest_size + 1].to_vec()).unwrap()
}
pub fn inner_loop2(s: &[u8], start: usize, end: usize) -> (bool, Option<usize>) {
//println!("here {},{}", start, end);
if end - start == 0 {
return (true, Some(end - start));
}
for i in 0..(end - start) {
        if s[start + i] != s[end - i] {
return (false, None);
}
}
(true, Some(end - start))
}
fn main() {
assert_eq!("bab", longest_palindrome("babad".to_string()));
assert_eq!("bb", longest_palindrome("cbbd".to_string()));
assert_eq!("", longest_palindrome("".to_string()));
assert_eq!("bab", longest_palindrome2("babad".to_string()));
assert_eq!("bb", longest_palindrome2("cbbd".to_string()));
assert_eq!("", longest_palindrome2("".to_string()));
}
|
longest_palindrome
|
identifier_name
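The two approaches in this file (reverse-and-compare, and pairwise scanning over all start/end pairs) are both brute force. A third common technique, expand-around-center, runs in O(n²) and is worth a sketch for comparison. This is a hedged illustration, not part of the original file: `longest_palindrome_expand` is a name introduced here, and byte indexing assumes ASCII input like the test strings above.

```rust
/// Expand-around-center: there are 2n - 1 centers (on a byte or between
/// two bytes); grow outward from each while the bytes match, keeping the
/// longest window seen.
pub fn longest_palindrome_expand(s: &str) -> String {
    let b = s.as_bytes();
    let n = b.len();
    if n < 2 {
        return s.to_string();
    }
    let (mut best_start, mut best_len) = (0usize, 1usize);
    for center in 0..2 * n - 1 {
        // Odd-length centers sit on a byte; even-length ones sit between two.
        let (mut lo, mut hi) = (center / 2, center / 2 + center % 2);
        while hi < n && b[lo] == b[hi] {
            if hi - lo + 1 > best_len {
                best_start = lo;
                best_len = hi - lo + 1;
            }
            if lo == 0 {
                break; // avoid usize underflow when expanding left
            }
            lo -= 1;
            hi += 1;
        }
    }
    s[best_start..best_start + best_len].to_string()
}

fn main() {
    assert_eq!(longest_palindrome_expand("babad"), "bab");
    assert_eq!(longest_palindrome_expand("cbbd"), "bb");
    assert_eq!(longest_palindrome_expand(""), "");
}
```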
|
LPS.rs
|
pub fn longest_palindrome(s: String) -> String {
let mut s = s.as_bytes().to_vec();
let mut d = s.clone();
d.reverse();
let mut largest_len = 0;
let mut result = String::new();
loop {
if s.len() == 0 || s.len() < largest_len {
break;
}
let (a, b) = inner_loop(&s, &d);
if a > largest_len {
largest_len = a;
result = b;
}
s = s.drain(1..).collect();
d.pop();
}
result
}
/// Finds the longest palindrome match between `s` and its reversal `d`; returns its length and the palindrome itself.
fn inner_loop(s: &Vec<u8>, d: &Vec<u8>) -> (usize, String) {
//println!("here: {:?}, {:?}", s, d);
let mut length = s.len();
let mut offset = 0;
let mut flag = true;
loop {
flag = true;
if length == 0 {
break;
}
for ind in 0..(length / 2 + 1) {
if s[ind] != d[ind + offset] {
flag = false;
break;
}
}
if flag {
return (length, String::from_utf8(s[0..length].to_vec()).unwrap());
}
length -= 1;
offset += 1;
}
(0, String::new())
}
/////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////
pub fn longest_palindrome2(s: String) -> String {
let ss = s.as_bytes();
let length = s.len();
if length <= 1 {
return s;
}
let (mut record_start, mut largest_size) = (0, 0);
for start in 0..length - 1 {
if length - start < largest_size {
break;
}
for end in (start + 1..length).rev() {
if end - start + 1 < largest_size {
break;
}
let (b, size) = inner_loop2(ss, start, end);
if b {
if size.unwrap() > largest_size {
largest_size = size.unwrap();
record_start = start;
}
}
}
}
//println!("{}, {}", record_start, largest_size);
String::from_utf8(ss[record_start..record_start + largest_size + 1].to_vec()).unwrap()
}
pub fn inner_loop2(s: &[u8], start: usize, end: usize) -> (bool, Option<usize>) {
//println!("here {},{}", start, end);
if end - start == 0 {
return (true, Some(end - start));
}
for i in 0..(end - start) {
if s[start + i] != s[end - i] {
return (false, None);
}
}
(true, Some(end - start))
}
|
assert_eq!("bab", longest_palindrome("babad".to_string()));
assert_eq!("bb", longest_palindrome("cbbd".to_string()));
assert_eq!("", longest_palindrome("".to_string()));
assert_eq!("bab", longest_palindrome2("babad".to_string()));
assert_eq!("bb", longest_palindrome2("cbbd".to_string()));
assert_eq!("", longest_palindrome2("".to_string()));
}
|
fn main() {
|
random_line_split
|
mod.rs
|
pub mod display;
pub mod device;
pub mod state;
pub mod termios;
mod err;
use std::os::unix::io::AsRawFd;
use std::io::{self, Write};
use std::mem;
use std::fmt;
use ::libc;
use ::child::exec;
use ::pty::prelude as pty;
use self::device::Device;
use self::termios::Termios;
pub use self::state::ShellState;
pub use self::err::ShellError;
use self::display::Display;
pub use self::display::winsz::Winszed;
pub use super::parent::Parent;
/// The struct `Shell` is the pseudo-terminal interface.
#[derive(Debug)]
pub struct Shell {
pid: libc::pid_t,
#[allow(dead_code)]
config: Termios,
speudo: pty::Master,
device: Device,
state: ShellState,
screen: Display,
}
impl Shell {
/// The constructor method `new` returns a shell interface built from
/// the command option, configured in line-by-line mode.
pub fn new (
repeat: Option<i64>,
interval: Option<i64>,
command: Option<&str>,
windows: Option<Winszed>,
) -> Result<Self, ShellError> {
unsafe {
let winsz: Winszed =
windows.and_then(|winsz| {
let _ = Winszed::from_winsized(libc::STDIN_FILENO, &winsz);
Some(winsz)
})
.or_else(|| Winszed::new(libc::STDIN_FILENO).ok())
.unwrap_or_default();
match pty::Fork::from_ptmx() {
Err(cause) => Err(ShellError::ForkFail(cause)),
Ok(fork) => match fork {
pty::Fork::Child(_) => {
libc::ioctl(libc::STDIN_FILENO, libc::TIOCSWINSZ, &winsz);
exec(command.unwrap_or("/bin/bash"))
},
pty::Fork::Parent(pid, master) => {
mem::forget(fork);
Ok(Shell {
pid: pid,
config: Termios::default(),
speudo: master,
device: Device::from_speudo(master, libc::getpid()),
state: ShellState::new(repeat, interval),
screen: Display::from_winszed(winsz),
})
},
}
}
}
}
}
impl Parent for Shell {
/// The accessor method `get_pid` returns the pid from the master.
fn get_pid(&self) -> libc::pid_t {
self.pid
}
/// The accessor method `get_speudo` returns the master interface.
fn get_speudo(&self) -> &pty::Master {
&self.speudo
}
/// The accessor method `get_screen` returns a reference to the `Display` interface.
fn get_screen(&self) -> &Display {
&self.screen
}
/// The accessor method `get_window_size` returns the window size.
fn get_window_size(&self) -> &Winszed {
self.screen.get_window_size()
}
/// The mutator method `set_window_size` resizes the window
/// to a default size.
fn set_window_size(&mut self) {
if let Ok(size) = Winszed::new(libc::STDOUT_FILENO) {
self.set_window_size_with(&size);
}
}
/// The mutator method `set_window_size_with` resizes the window
/// to the given size.
fn set_window_size_with(&mut self, size: &Winszed) {
self.screen.set_window_size(size);
unsafe {
libc::ioctl(self.speudo.as_raw_fd(), libc::TIOCSWINSZ, size);
libc::kill(self.pid, libc::SIGWINCH);
}
}
/// The mutator method `write` writes a buffer to the display
/// without printing it.
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.screen.write(buf)
}
/// The mutator method `next` updates the event and returns
/// the new state.
fn next(&mut self, event: state::DeviceState) -> ShellState {
match () {
#[cfg(feature = "auto-resize")]
() =>
|
,
#[cfg(not(feature = "auto-resize"))]
() => {
self.state.update_from(&mut self.screen, event);
self.state
},
}
}
}
impl Iterator for Shell {
type Item = ShellState;
fn next(&mut self) -> Option<ShellState> {
match self.device.next() {
None => None,
#[cfg(feature = "auto-resize")]
Some(event) => {
self.state.update_from(&mut self.screen, event);
if let Some(size) = self.state.is_resized() {
self.set_window_size_with(&size);
}
Some(self.state)
},
#[cfg(not(feature = "auto-resize"))]
Some(event) => {
self.state.update_from(&mut self.screen, event);
Some(self.state)
},
}
}
}
impl Write for Shell {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.speudo.write(buf)
}
fn flush(&mut self) -> io::Result<()> {
self.speudo.flush()
}
}
impl fmt::Display for Shell {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}", self.screen)
}
}
impl Drop for Shell {
fn drop(&mut self) {
unsafe {
assert_ne!(libc::close(self.speudo.as_raw_fd()), -1);
libc::kill(self.pid, libc::SIGKILL);
}
}
}
impl Default for Shell {
fn default() -> Shell {
unsafe {
let master: pty::Master = mem::zeroed();
Shell {
pid: 0,
config: mem::zeroed(),
speudo: master,
device: Device::from_speudo(master, 0),
state: ShellState::default(),
screen: Display::default(),
}
}
}
}
|
{
self.state.update_from(&mut self.screen, event);
if let Some(size) = self.state.is_resized() {
self.set_window_size_with(&size);
}
self.state
}
|
conditional_block
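The `match ()` construct in `Parent::next` above is a compile-time dispatch idiom: every arm matches the unit value, but the `#[cfg]` attributes strip all arms except one from the build, so exactly one body is compiled. A minimal self-contained sketch of the pattern (it uses `debug_assertions` in place of the crate's `auto-resize` Cargo feature so it compiles with plain `rustc`; `describe_resize` is a name invented for the demo):

```rust
// `match ()` selects exactly one arm at compile time: every arm
// pattern-matches the unit value, and the `#[cfg]` attributes remove
// all but one arm from the compiled program.
fn describe_resize() -> &'static str {
    match () {
        #[cfg(debug_assertions)]
        () => "auto-resize enabled (debug build)",
        #[cfg(not(debug_assertions))]
        () => "auto-resize disabled (release build)",
    }
}

fn main() {
    // Exactly one arm survives, so the match stays exhaustive.
    println!("{}", describe_resize());
}
```

The same effect could be had by `cfg`-gating two whole copies of the function; gating the arms keeps the shared signature and surrounding code in one place.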
|
mod.rs
|
pub mod display;
pub mod device;
pub mod state;
pub mod termios;
mod err;
use std::os::unix::io::AsRawFd;
use std::io::{self, Write};
use std::mem;
use std::fmt;
use ::libc;
use ::child::exec;
use ::pty::prelude as pty;
use self::device::Device;
use self::termios::Termios;
pub use self::state::ShellState;
pub use self::err::ShellError;
use self::display::Display;
pub use self::display::winsz::Winszed;
pub use super::parent::Parent;
/// The struct `Shell` is the pseudo-terminal interface.
#[derive(Debug)]
pub struct Shell {
pid: libc::pid_t,
#[allow(dead_code)]
config: Termios,
speudo: pty::Master,
device: Device,
state: ShellState,
screen: Display,
}
impl Shell {
/// The constructor method `new` returns a shell interface built from
/// the command option, configured in line-by-line mode.
pub fn new (
repeat: Option<i64>,
interval: Option<i64>,
command: Option<&str>,
windows: Option<Winszed>,
) -> Result<Self, ShellError> {
unsafe {
let winsz: Winszed =
windows.and_then(|winsz| {
let _ = Winszed::from_winsized(libc::STDIN_FILENO, &winsz);
Some(winsz)
})
.or_else(|| Winszed::new(libc::STDIN_FILENO).ok())
.unwrap_or_default();
match pty::Fork::from_ptmx() {
Err(cause) => Err(ShellError::ForkFail(cause)),
Ok(fork) => match fork {
pty::Fork::Child(_) => {
libc::ioctl(libc::STDIN_FILENO, libc::TIOCSWINSZ, &winsz);
exec(command.unwrap_or("/bin/bash"))
},
pty::Fork::Parent(pid, master) => {
mem::forget(fork);
Ok(Shell {
pid: pid,
config: Termios::default(),
speudo: master,
device: Device::from_speudo(master, libc::getpid()),
state: ShellState::new(repeat, interval),
screen: Display::from_winszed(winsz),
})
},
}
}
}
}
}
impl Parent for Shell {
/// The accessor method `get_pid` returns the pid from the master.
fn get_pid(&self) -> libc::pid_t {
self.pid
}
/// The accessor method `get_speudo` returns the master interface.
fn get_speudo(&self) -> &pty::Master {
&self.speudo
}
/// The accessor method `get_screen` returns a reference to the `Display` interface.
fn get_screen(&self) -> &Display {
&self.screen
}
/// The accessor method `get_window_size` returns the window size.
fn get_window_size(&self) -> &Winszed {
self.screen.get_window_size()
}
/// The mutator method `set_window_size` resizes the window
/// to a default size.
fn set_window_size(&mut self) {
if let Ok(size) = Winszed::new(libc::STDOUT_FILENO) {
self.set_window_size_with(&size);
}
}
/// The mutator method `set_window_size_with` resizes the window
/// to the given size.
fn set_window_size_with(&mut self, size: &Winszed) {
self.screen.set_window_size(size);
unsafe {
libc::ioctl(self.speudo.as_raw_fd(), libc::TIOCSWINSZ, size);
libc::kill(self.pid, libc::SIGWINCH);
}
}
/// The mutator method `write` writes a buffer to the display
/// without printing it.
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.screen.write(buf)
}
/// The mutator method `next` updates the event and returns
/// the new state.
fn next(&mut self, event: state::DeviceState) -> ShellState {
match () {
#[cfg(feature = "auto-resize")]
() => {
self.state.update_from(&mut self.screen, event);
if let Some(size) = self.state.is_resized() {
self.set_window_size_with(&size);
}
self.state
},
#[cfg(not(feature = "auto-resize"))]
() => {
self.state.update_from(&mut self.screen, event);
self.state
},
}
}
}
impl Iterator for Shell {
type Item = ShellState;
fn next(&mut self) -> Option<ShellState> {
match self.device.next() {
None => None,
#[cfg(feature = "auto-resize")]
Some(event) => {
self.state.update_from(&mut self.screen, event);
if let Some(size) = self.state.is_resized() {
self.set_window_size_with(&size);
}
Some(self.state)
},
#[cfg(not(feature = "auto-resize"))]
Some(event) => {
self.state.update_from(&mut self.screen, event);
Some(self.state)
},
}
}
}
impl Write for Shell {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.speudo.write(buf)
}
fn flush(&mut self) -> io::Result<()> {
self.speudo.flush()
}
}
impl fmt::Display for Shell {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}", self.screen)
}
}
impl Drop for Shell {
fn drop(&mut self) {
unsafe {
assert_ne!(libc::close(self.speudo.as_raw_fd()), -1);
libc::kill(self.pid, libc::SIGKILL);
}
}
}
impl Default for Shell {
fn default() -> Shell {
unsafe {
let master: pty::Master = mem::zeroed();
Shell {
pid: 0,
config: mem::zeroed(),
speudo: master,
device: Device::from_speudo(master, 0),
state: ShellState::default(),
screen: Display::default(),
}
|
}
|
}
}
|
random_line_split
|
mod.rs
|
pub mod display;
pub mod device;
pub mod state;
pub mod termios;
mod err;
use std::os::unix::io::AsRawFd;
use std::io::{self, Write};
use std::mem;
use std::fmt;
use ::libc;
use ::child::exec;
use ::pty::prelude as pty;
use self::device::Device;
use self::termios::Termios;
pub use self::state::ShellState;
pub use self::err::ShellError;
use self::display::Display;
pub use self::display::winsz::Winszed;
pub use super::parent::Parent;
/// The struct `Shell` is the pseudo-terminal interface.
#[derive(Debug)]
pub struct Shell {
pid: libc::pid_t,
#[allow(dead_code)]
config: Termios,
speudo: pty::Master,
device: Device,
state: ShellState,
screen: Display,
}
impl Shell {
/// The constructor method `new` returns a shell interface built from
/// the command option, configured in line-by-line mode.
pub fn new (
repeat: Option<i64>,
interval: Option<i64>,
command: Option<&str>,
windows: Option<Winszed>,
) -> Result<Self, ShellError> {
unsafe {
let winsz: Winszed =
windows.and_then(|winsz| {
let _ = Winszed::from_winsized(libc::STDIN_FILENO, &winsz);
Some(winsz)
})
.or_else(|| Winszed::new(libc::STDIN_FILENO).ok())
.unwrap_or_default();
match pty::Fork::from_ptmx() {
Err(cause) => Err(ShellError::ForkFail(cause)),
Ok(fork) => match fork {
pty::Fork::Child(_) => {
libc::ioctl(libc::STDIN_FILENO, libc::TIOCSWINSZ, &winsz);
exec(command.unwrap_or("/bin/bash"))
},
pty::Fork::Parent(pid, master) => {
mem::forget(fork);
Ok(Shell {
pid: pid,
config: Termios::default(),
speudo: master,
device: Device::from_speudo(master, libc::getpid()),
state: ShellState::new(repeat, interval),
screen: Display::from_winszed(winsz),
})
},
}
}
}
}
}
impl Parent for Shell {
/// The accessor method `get_pid` returns the pid from the master.
fn get_pid(&self) -> libc::pid_t {
self.pid
}
/// The accessor method `get_speudo` returns the master interface.
fn get_speudo(&self) -> &pty::Master {
&self.speudo
}
/// The accessor method `get_screen` returns a reference to the `Display` interface.
fn get_screen(&self) -> &Display {
&self.screen
}
/// The accessor method `get_window_size` returns the window size.
fn get_window_size(&self) -> &Winszed {
self.screen.get_window_size()
}
/// The mutator method `set_window_size` resizes the window
/// to a default size.
fn
|
(&mut self) {
if let Ok(size) = Winszed::new(libc::STDOUT_FILENO) {
self.set_window_size_with(&size);
}
}
/// The mutator method `set_window_size_with` resizes the window
/// to the given size.
fn set_window_size_with(&mut self, size: &Winszed) {
self.screen.set_window_size(size);
unsafe {
libc::ioctl(self.speudo.as_raw_fd(), libc::TIOCSWINSZ, size);
libc::kill(self.pid, libc::SIGWINCH);
}
}
/// The mutator method `write` writes a buffer to the display
/// without printing it.
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.screen.write(buf)
}
/// The mutator method `next` updates the event and returns
/// the new state.
fn next(&mut self, event: state::DeviceState) -> ShellState {
match () {
#[cfg(feature = "auto-resize")]
() => {
self.state.update_from(&mut self.screen, event);
if let Some(size) = self.state.is_resized() {
self.set_window_size_with(&size);
}
self.state
},
#[cfg(not(feature = "auto-resize"))]
() => {
self.state.update_from(&mut self.screen, event);
self.state
},
}
}
}
impl Iterator for Shell {
type Item = ShellState;
fn next(&mut self) -> Option<ShellState> {
match self.device.next() {
None => None,
#[cfg(feature = "auto-resize")]
Some(event) => {
self.state.update_from(&mut self.screen, event);
if let Some(size) = self.state.is_resized() {
self.set_window_size_with(&size);
}
Some(self.state)
},
#[cfg(not(feature = "auto-resize"))]
Some(event) => {
self.state.update_from(&mut self.screen, event);
Some(self.state)
},
}
}
}
impl Write for Shell {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.speudo.write(buf)
}
fn flush(&mut self) -> io::Result<()> {
self.speudo.flush()
}
}
impl fmt::Display for Shell {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}", self.screen)
}
}
impl Drop for Shell {
fn drop(&mut self) {
unsafe {
assert_ne!(libc::close(self.speudo.as_raw_fd()), -1);
libc::kill(self.pid, libc::SIGKILL);
}
}
}
impl Default for Shell {
fn default() -> Shell {
unsafe {
let master: pty::Master = mem::zeroed();
Shell {
pid: 0,
config: mem::zeroed(),
speudo: master,
device: Device::from_speudo(master, 0),
state: ShellState::default(),
screen: Display::default(),
}
}
}
}
|
set_window_size
|
identifier_name
|
mod.rs
|
pub mod display;
pub mod device;
pub mod state;
pub mod termios;
mod err;
use std::os::unix::io::AsRawFd;
use std::io::{self, Write};
use std::mem;
use std::fmt;
use ::libc;
use ::child::exec;
use ::pty::prelude as pty;
use self::device::Device;
use self::termios::Termios;
pub use self::state::ShellState;
pub use self::err::ShellError;
use self::display::Display;
pub use self::display::winsz::Winszed;
pub use super::parent::Parent;
/// The struct `Shell` is the pseudo-terminal interface.
#[derive(Debug)]
pub struct Shell {
pid: libc::pid_t,
#[allow(dead_code)]
config: Termios,
speudo: pty::Master,
device: Device,
state: ShellState,
screen: Display,
}
impl Shell {
/// The constructor method `new` returns a shell interface built from
/// the command option, configured in line-by-line mode.
pub fn new (
repeat: Option<i64>,
interval: Option<i64>,
command: Option<&str>,
windows: Option<Winszed>,
) -> Result<Self, ShellError> {
unsafe {
let winsz: Winszed =
windows.and_then(|winsz| {
let _ = Winszed::from_winsized(libc::STDIN_FILENO, &winsz);
Some(winsz)
})
.or_else(|| Winszed::new(libc::STDIN_FILENO).ok())
.unwrap_or_default();
match pty::Fork::from_ptmx() {
Err(cause) => Err(ShellError::ForkFail(cause)),
Ok(fork) => match fork {
pty::Fork::Child(_) => {
libc::ioctl(libc::STDIN_FILENO, libc::TIOCSWINSZ, &winsz);
exec(command.unwrap_or("/bin/bash"))
},
pty::Fork::Parent(pid, master) => {
mem::forget(fork);
Ok(Shell {
pid: pid,
config: Termios::default(),
speudo: master,
device: Device::from_speudo(master, libc::getpid()),
state: ShellState::new(repeat, interval),
screen: Display::from_winszed(winsz),
})
},
}
}
}
}
}
impl Parent for Shell {
/// The accessor method `get_pid` returns the pid from the master.
fn get_pid(&self) -> libc::pid_t {
self.pid
}
/// The accessor method `get_speudo` returns the master interface.
fn get_speudo(&self) -> &pty::Master {
&self.speudo
}
/// The accessor method `get_screen` returns a reference to the `Display` interface.
fn get_screen(&self) -> &Display {
&self.screen
}
/// The accessor method `get_window_size` returns the window size.
fn get_window_size(&self) -> &Winszed {
self.screen.get_window_size()
}
/// The mutator method `set_window_size` resizes the window
/// to a default size.
fn set_window_size(&mut self) {
if let Ok(size) = Winszed::new(libc::STDOUT_FILENO) {
self.set_window_size_with(&size);
}
}
/// The mutator method `set_window_size_with` resizes the window
/// to the given size.
fn set_window_size_with(&mut self, size: &Winszed)
|
/// The mutator method `write` writes a buffer to the display
/// without printing it.
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.screen.write(buf)
}
/// The mutator method `next` updates the event and returns
/// the new state.
fn next(&mut self, event: state::DeviceState) -> ShellState {
match () {
#[cfg(feature = "auto-resize")]
() => {
self.state.update_from(&mut self.screen, event);
if let Some(size) = self.state.is_resized() {
self.set_window_size_with(&size);
}
self.state
},
#[cfg(not(feature = "auto-resize"))]
() => {
self.state.update_from(&mut self.screen, event);
self.state
},
}
}
}
impl Iterator for Shell {
type Item = ShellState;
fn next(&mut self) -> Option<ShellState> {
match self.device.next() {
None => None,
#[cfg(feature = "auto-resize")]
Some(event) => {
self.state.update_from(&mut self.screen, event);
if let Some(size) = self.state.is_resized() {
self.set_window_size_with(&size);
}
Some(self.state)
},
#[cfg(not(feature = "auto-resize"))]
Some(event) => {
self.state.update_from(&mut self.screen, event);
Some(self.state)
},
}
}
}
impl Write for Shell {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.speudo.write(buf)
}
fn flush(&mut self) -> io::Result<()> {
self.speudo.flush()
}
}
impl fmt::Display for Shell {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}", self.screen)
}
}
impl Drop for Shell {
fn drop(&mut self) {
unsafe {
assert_ne!(libc::close(self.speudo.as_raw_fd()), -1);
libc::kill(self.pid, libc::SIGKILL);
}
}
}
impl Default for Shell {
fn default() -> Shell {
unsafe {
let master: pty::Master = mem::zeroed();
Shell {
pid: 0,
config: mem::zeroed(),
speudo: master,
device: Device::from_speudo(master, 0),
state: ShellState::default(),
screen: Display::default(),
}
}
}
}
|
{
self.screen.set_window_size(size);
unsafe {
libc::ioctl(self.speudo.as_raw_fd(), libc::TIOCSWINSZ, size);
libc::kill(self.pid, libc::SIGWINCH);
}
}
|
identifier_body
|
storage.rs
|
//! Abstract definition of a matrix data storage.
use std::fmt::Debug;
use std::mem;
use crate::base::allocator::{Allocator, SameShapeC, SameShapeR};
use crate::base::default_allocator::DefaultAllocator;
use crate::base::dimension::{Dim, U1};
use crate::base::Scalar;
/*
* Aliases for allocation results.
*/
/// The data storage for the sum of two matrices with dimensions `(R1, C1)` and `(R2, C2)`.
pub type SameShapeStorage<N, R1, C1, R2, C2> =
<DefaultAllocator as Allocator<N, SameShapeR<R1, R2>, SameShapeC<C1, C2>>>::Buffer;
// FIXME: better name than Owned?
/// The owned data storage that can be allocated from `S`.
pub type Owned<N, R, C = U1> = <DefaultAllocator as Allocator<N, R, C>>::Buffer;
/// The row-stride of the owned data storage for a buffer of dimension `(R, C)`.
pub type RStride<N, R, C = U1> =
<<DefaultAllocator as Allocator<N, R, C>>::Buffer as Storage<N, R, C>>::RStride;
/// The column-stride of the owned data storage for a buffer of dimension `(R, C)`.
pub type CStride<N, R, C = U1> =
<<DefaultAllocator as Allocator<N, R, C>>::Buffer as Storage<N, R, C>>::CStride;
/// The trait shared by all matrix data storage.
///
/// FIXME: doc
///
/// Note that `Self` must always have a number of elements compatible with the matrix length (given
/// by `R` and `C` if they are known at compile-time). For example, implementors of this trait
/// should **not** allow the user to modify the size of the underlying buffer with safe methods
/// (for example the `VecStorage::data_mut` method is unsafe because the user could change the
/// vector's size so that it no longer contains enough elements: this will lead to UB).
pub unsafe trait Storage<N: Scalar, R: Dim, C: Dim = U1>: Debug + Sized {
/// The static stride of this storage's rows.
type RStride: Dim;
/// The static stride of this storage's columns.
type CStride: Dim;
/// The matrix data pointer.
fn ptr(&self) -> *const N;
/// The dimension of the matrix at run-time. An array length of zero indicates the additive
/// identity element of any dimension. Must be equal to `Self::dimension()` if it is not `None`.
fn shape(&self) -> (R, C);
/// The spacing between consecutive row elements and consecutive column elements.
///
/// For example this returns `(1, 5)` for a column-major matrix with 5 rows.
fn strides(&self) -> (Self::RStride, Self::CStride);
/// Compute the index corresponding to the irow-th row and icol-th column of this matrix. The
/// index must be such that the following holds:
///
/// ```.ignore
/// let lindex = self.linear_index(irow, icol);
/// assert!(*self.get_unchecked(irow, icol) == *self.get_unchecked_linear(lindex));
/// ```
#[inline]
fn linear_index(&self, irow: usize, icol: usize) -> usize {
let (rstride, cstride) = self.strides();
irow * rstride.value() + icol * cstride.value()
}
/// Gets the address of the i-th matrix component without performing bound-checking.
#[inline]
unsafe fn get_address_unchecked_linear(&self, i: usize) -> *const N {
self.ptr().wrapping_offset(i as isize)
}
/// Gets the address of the i-th matrix component without performing bound-checking.
#[inline]
unsafe fn get_address_unchecked(&self, irow: usize, icol: usize) -> *const N {
self.get_address_unchecked_linear(self.linear_index(irow, icol))
}
/// Retrieves a reference to the i-th element without bound-checking.
#[inline]
unsafe fn get_unchecked_linear(&self, i: usize) -> &N {
&*self.get_address_unchecked_linear(i)
}
/// Retrieves a reference to the i-th element without bound-checking.
#[inline]
unsafe fn get_unchecked(&self, irow: usize, icol: usize) -> &N {
self.get_unchecked_linear(self.linear_index(irow, icol))
}
/// Indicates whether this data buffer stores its elements contiguously.
fn is_contiguous(&self) -> bool;
/// Retrieves the data buffer as a contiguous slice.
///
/// The matrix components may not be stored in a contiguous way, depending on the strides.
fn as_slice(&self) -> &[N];
/// Builds a matrix data storage that does not contain any reference.
fn into_owned(self) -> Owned<N, R, C>
where
DefaultAllocator: Allocator<N, R, C>;
/// Clones this data storage to one that does not contain any reference.
fn clone_owned(&self) -> Owned<N, R, C>
where
DefaultAllocator: Allocator<N, R, C>;
}
/// Trait implemented by matrix data storage that can provide a mutable access to its elements.
///
/// Note that a mutable access does not mean that the matrix owns its data. For example, a mutable
/// matrix slice can provide mutable access to its elements even if it does not own its data (it
/// contains only an internal reference to them).
pub unsafe trait StorageMut<N: Scalar, R: Dim, C: Dim = U1>: Storage<N, R, C> {
/// The matrix mutable data pointer.
fn ptr_mut(&mut self) -> *mut N;
/// Gets the mutable address of the i-th matrix component without performing bound-checking.
#[inline]
unsafe fn get_address_unchecked_linear_mut(&mut self, i: usize) -> *mut N {
self.ptr_mut().wrapping_offset(i as isize)
}
/// Gets the mutable address of the i-th matrix component without performing bound-checking.
#[inline]
unsafe fn get_address_unchecked_mut(&mut self, irow: usize, icol: usize) -> *mut N {
let lid = self.linear_index(irow, icol);
self.get_address_unchecked_linear_mut(lid)
}
/// Retrieves a mutable reference to the i-th element without bound-checking.
unsafe fn get_unchecked_linear_mut(&mut self, i: usize) -> &mut N {
&mut *self.get_address_unchecked_linear_mut(i)
}
/// Retrieves a mutable reference to the element at `(irow, icol)` without bound-checking.
#[inline]
unsafe fn get_unchecked_mut(&mut self, irow: usize, icol: usize) -> &mut N {
&mut *self.get_address_unchecked_mut(irow, icol)
}
/// Swaps two elements using their linear index without bound-checking.
#[inline]
unsafe fn swap_unchecked_linear(&mut self, i1: usize, i2: usize) {
let a = self.get_address_unchecked_linear_mut(i1);
let b = self.get_address_unchecked_linear_mut(i2);
mem::swap(&mut *a, &mut *b);
}
/// Swaps two elements without bound-checking.
#[inline]
unsafe fn swap_unchecked(&mut self, row_col1: (usize, usize), row_col2: (usize, usize))
|
/// Retrieves the mutable data buffer as a contiguous slice.
///
/// Matrix components may not be contiguous, depending on the strides.
fn as_mut_slice(&mut self) -> &mut [N];
}
/// A matrix storage that is stored contiguously in memory.
///
/// The storage requirement means that for any value of `i` in `[0, nrows * ncols)`, the call
/// `.get_unchecked_linear(i)` returns one of the matrix components. This trait is unsafe because
/// failing to comply with this may cause undefined behavior.
pub unsafe trait ContiguousStorage<N: Scalar, R: Dim, C: Dim = U1>:
Storage<N, R, C>
{
}
/// A mutable matrix storage that is stored contiguously in memory.
///
/// The storage requirement means that for any value of `i` in `[0, nrows * ncols)`, the call
/// `.get_unchecked_linear(i)` returns one of the matrix components. This trait is unsafe because
/// failing to comply with this may cause undefined behavior.
pub unsafe trait ContiguousStorageMut<N: Scalar, R: Dim, C: Dim = U1>:
ContiguousStorage<N, R, C> + StorageMut<N, R, C>
{
}
|
{
let lid1 = self.linear_index(row_col1.0, row_col1.1);
let lid2 = self.linear_index(row_col2.0, row_col2.1);
self.swap_unchecked_linear(lid1, lid2)
}
|
identifier_body
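The stride contract documented on `strides` and `linear_index` can be made concrete with a plain array. In a column-major `nrows x ncols` buffer (nalgebra's layout) the row stride is 1 and the column stride is `nrows`, which is exactly the `(1, 5)`-for-5-rows case in the docs. This is a sketch for illustration only: the free function below mirrors the trait's default `linear_index`, and the 2x3 layout is an assumption chosen for the demo.

```rust
// Element (irow, icol) lives at irow * rstride + icol * cstride,
// mirroring the default `Storage::linear_index` implementation.
fn linear_index(irow: usize, icol: usize, rstride: usize, cstride: usize) -> usize {
    irow * rstride + icol * cstride
}

fn main() {
    // A 2 x 3 matrix stored column by column: [a00, a10, a01, a11, a02, a12].
    let data = [11, 21, 12, 22, 13, 23];
    let (rstride, cstride) = (1, 2); // column-major: cstride = nrows = 2
    assert_eq!(data[linear_index(1, 2, rstride, cstride)], 23); // a12
    assert_eq!(data[linear_index(0, 1, rstride, cstride)], 12); // a01
}
```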
|
storage.rs
|
//! Abstract definition of a matrix data storage.
use std::fmt::Debug;
use std::mem;
use crate::base::allocator::{Allocator, SameShapeC, SameShapeR};
use crate::base::default_allocator::DefaultAllocator;
use crate::base::dimension::{Dim, U1};
use crate::base::Scalar;
/*
* Aliases for allocation results.
*/
/// The data storage for the sum of two matrices with dimensions `(R1, C1)` and `(R2, C2)`.
pub type SameShapeStorage<N, R1, C1, R2, C2> =
<DefaultAllocator as Allocator<N, SameShapeR<R1, R2>, SameShapeC<C1, C2>>>::Buffer;
// FIXME: better name than Owned?
/// The owned data storage that can be allocated from `S`.
pub type Owned<N, R, C = U1> = <DefaultAllocator as Allocator<N, R, C>>::Buffer;
/// The row-stride of the owned data storage for a buffer of dimension `(R, C)`.
pub type RStride<N, R, C = U1> =
<<DefaultAllocator as Allocator<N, R, C>>::Buffer as Storage<N, R, C>>::RStride;
/// The column-stride of the owned data storage for a buffer of dimension `(R, C)`.
pub type CStride<N, R, C = U1> =
<<DefaultAllocator as Allocator<N, R, C>>::Buffer as Storage<N, R, C>>::CStride;
/// The trait shared by all matrix data storage.
///
/// FIXME: doc
///
/// Note that `Self` must always have a number of elements compatible with the matrix length (given
/// by `R` and `C` if they are known at compile-time). For example, implementors of this trait
/// should **not** allow the user to modify the size of the underlying buffer with safe methods
/// (for example the `VecStorage::data_mut` method is unsafe because the user could change the
/// vector's size so that it no longer contains enough elements: this will lead to UB).
pub unsafe trait Storage<N: Scalar, R: Dim, C: Dim = U1>: Debug + Sized {
/// The static stride of this storage's rows.
type RStride: Dim;
/// The static stride of this storage's columns.
type CStride: Dim;
/// The matrix data pointer.
fn ptr(&self) -> *const N;
/// The dimension of the matrix at run-time. An array length of zero indicates the additive
/// identity element of any dimension. Must be equal to `Self::dimension()` if it is not `None`.
fn shape(&self) -> (R, C);
/// The spacing between consecutive row elements and consecutive column elements.
///
/// For example this returns `(1, 5)` for a column-major matrix with 5 rows.
fn strides(&self) -> (Self::RStride, Self::CStride);
/// Compute the index corresponding to the irow-th row and icol-th column of this matrix. The
/// index must be such that the following holds:
///
/// ```.ignore
/// let lindex = self.linear_index(irow, icol);
/// assert!(*self.get_unchecked(irow, icol) == *self.get_unchecked_linear(lindex));
/// ```
#[inline]
fn linear_index(&self, irow: usize, icol: usize) -> usize {
let (rstride, cstride) = self.strides();
irow * rstride.value() + icol * cstride.value()
}
/// Gets the address of the i-th matrix component without performing bound-checking.
#[inline]
unsafe fn get_address_unchecked_linear(&self, i: usize) -> *const N {
self.ptr().wrapping_offset(i as isize)
}
/// Gets the address of the i-th matrix component without performing bound-checking.
#[inline]
unsafe fn get_address_unchecked(&self, irow: usize, icol: usize) -> *const N {
self.get_address_unchecked_linear(self.linear_index(irow, icol))
}
/// Retrieves a reference to the i-th element without bound-checking.
#[inline]
unsafe fn get_unchecked_linear(&self, i: usize) -> &N {
&*self.get_address_unchecked_linear(i)
}
/// Retrieves a reference to the i-th element without bound-checking.
#[inline]
unsafe fn get_unchecked(&self, irow: usize, icol: usize) -> &N {
self.get_unchecked_linear(self.linear_index(irow, icol))
}
/// Indicates whether this data buffer stores its elements contiguously.
fn is_contiguous(&self) -> bool;
/// Retrieves the data buffer as a contiguous slice.
///
/// The matrix components may not be stored in a contiguous way, depending on the strides.
fn as_slice(&self) -> &[N];
/// Builds a matrix data storage that does not contain any reference.
fn into_owned(self) -> Owned<N, R, C>
where
DefaultAllocator: Allocator<N, R, C>;
/// Clones this data storage to one that does not contain any reference.
fn clone_owned(&self) -> Owned<N, R, C>
where
DefaultAllocator: Allocator<N, R, C>;
}
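The default `linear_index` above is plain stride arithmetic. A minimal standalone sketch of that arithmetic, written as a free function with `usize` strides standing in for the trait's `Dim::value()` results (the free function is illustrative only, not nalgebra API):

```rust
// Stride-based linear indexing, as performed by `Storage::linear_index`.
fn linear_index(irow: usize, icol: usize, rstride: usize, cstride: usize) -> usize {
    irow * rstride + icol * cstride
}

fn main() {
    // Column-major matrix with 5 rows: strides are (1, 5),
    // so element (2, 3) lives at 2 * 1 + 3 * 5 = 17.
    assert_eq!(linear_index(2, 3, 1, 5), 17);
    // Moving one column to the right jumps a whole column of 5 elements.
    assert_eq!(linear_index(0, 1, 1, 5), 5);
    println!("{}", linear_index(2, 3, 1, 5));
}
```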
/// Trait implemented by matrix data storage that can provide a mutable access to its elements.
///
/// Note that a mutable access does not mean that the matrix owns its data. For example, a mutable
/// matrix slice can provide mutable access to its elements even if it does not own its data (it
/// contains only an internal reference to them).
pub unsafe trait StorageMut<N: Scalar, R: Dim, C: Dim = U1>: Storage<N, R, C> {
/// The matrix mutable data pointer.
fn ptr_mut(&mut self) -> *mut N;
/// Gets the mutable address of the i-th matrix component without performing bound-checking.
#[inline]
unsafe fn get_address_unchecked_linear_mut(&mut self, i: usize) -> *mut N {
self.ptr_mut().wrapping_offset(i as isize)
}
/// Gets the mutable address of the i-th matrix component without performing bound-checking.
#[inline]
unsafe fn get_address_unchecked_mut(&mut self, irow: usize, icol: usize) -> *mut N {
let lid = self.linear_index(irow, icol);
self.get_address_unchecked_linear_mut(lid)
}
/// Retrieves a mutable reference to the i-th element without bound-checking.
unsafe fn get_unchecked_linear_mut(&mut self, i: usize) -> &mut N {
&mut *self.get_address_unchecked_linear_mut(i)
}
/// Retrieves a mutable reference to the element at `(irow, icol)` without bound-checking.
#[inline]
unsafe fn get_unchecked_mut(&mut self, irow: usize, icol: usize) -> &mut N {
&mut *self.get_address_unchecked_mut(irow, icol)
}
/// Swaps two elements using their linear index without bound-checking.
#[inline]
unsafe fn
|
(&mut self, i1: usize, i2: usize) {
let a = self.get_address_unchecked_linear_mut(i1);
let b = self.get_address_unchecked_linear_mut(i2);
mem::swap(&mut *a, &mut *b);
}
/// Swaps two elements without bound-checking.
#[inline]
unsafe fn swap_unchecked(&mut self, row_col1: (usize, usize), row_col2: (usize, usize)) {
let lid1 = self.linear_index(row_col1.0, row_col1.1);
let lid2 = self.linear_index(row_col2.0, row_col2.1);
self.swap_unchecked_linear(lid1, lid2)
}
/// Retrieves the mutable data buffer as a contiguous slice.
///
/// Matrix components may not be contiguous, depending on its strides.
fn as_mut_slice(&mut self) -> &mut [N];
}
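The two swap methods above reduce to a `mem::swap` over raw element addresses. A dependency-free sketch of the same pointer dance on a plain buffer (`swap_linear` is a hypothetical helper, not part of the trait):

```rust
use std::mem;

// Swap two elements by linear index through raw pointers, as
// `StorageMut::swap_unchecked_linear` does on the storage's data pointer.
unsafe fn swap_linear(ptr: *mut f64, i1: usize, i2: usize) {
    let a = ptr.wrapping_offset(i1 as isize);
    let b = ptr.wrapping_offset(i2 as isize);
    mem::swap(&mut *a, &mut *b);
}

fn main() {
    let mut buf = [1.0, 2.0, 3.0, 4.0];
    // Caller is responsible for the indices being in bounds.
    unsafe { swap_linear(buf.as_mut_ptr(), 0, 3) };
    assert_eq!(buf, [4.0, 2.0, 3.0, 1.0]);
    println!("{:?}", buf);
}
```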
/// A matrix storage that is stored contiguously in memory.
///
/// The storage requirement means that for any value of `i` in `[0, nrows * ncols)`, the value
/// returned by `.get_unchecked_linear` is one of the matrix components. This trait is unsafe
/// because failing to comply with this requirement may cause undefined behavior.
pub unsafe trait ContiguousStorage<N: Scalar, R: Dim, C: Dim = U1>:
Storage<N, R, C>
{
}
/// A mutable matrix storage that is stored contiguously in memory.
///
/// The storage requirement means that for any value of `i` in `[0, nrows * ncols)`, the value
/// returned by `.get_unchecked_linear` is one of the matrix components. This trait is unsafe
/// because failing to comply with this requirement may cause undefined behavior.
pub unsafe trait ContiguousStorageMut<N: Scalar, R: Dim, C: Dim = U1>:
ContiguousStorage<N, R, C> + StorageMut<N, R, C>
{
}
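For these marker traits, contiguity of a dense column-major buffer amounts to strides of `(1, nrows)`. A small sketch of that condition (the `is_contiguous` free function is illustrative only; the real check lives on `Storage::is_contiguous`):

```rust
// A column-major storage is contiguous when rows are adjacent (rstride == 1)
// and each column starts right after the previous one (cstride == nrows).
fn is_contiguous(nrows: usize, rstride: usize, cstride: usize) -> bool {
    rstride == 1 && cstride == nrows
}

fn main() {
    // A dense column-major 3xN buffer is contiguous...
    assert!(is_contiguous(3, 1, 3));
    // ...but a slice taken with a row stride of 2 leaves gaps.
    assert!(!is_contiguous(3, 2, 6));
    println!("{}", is_contiguous(3, 1, 3));
}
```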
|
swap_unchecked_linear
|
identifier_name
|
storage.rs
|
//! Abstract definition of a matrix data storage.
use std::fmt::Debug;
use std::mem;
use crate::base::allocator::{Allocator, SameShapeC, SameShapeR};
use crate::base::default_allocator::DefaultAllocator;
use crate::base::dimension::{Dim, U1};
use crate::base::Scalar;
/*
* Aliases for allocation results.
*/
/// The data storage for the sum of two matrices with dimensions `(R1, C1)` and `(R2, C2)`.
pub type SameShapeStorage<N, R1, C1, R2, C2> =
<DefaultAllocator as Allocator<N, SameShapeR<R1, R2>, SameShapeC<C1, C2>>>::Buffer;
// FIXME: better name than Owned?
/// The owned data storage that can be allocated from `S`.
pub type Owned<N, R, C = U1> = <DefaultAllocator as Allocator<N, R, C>>::Buffer;
/// The row-stride of the owned data storage for a buffer of dimension `(R, C)`.
pub type RStride<N, R, C = U1> =
<<DefaultAllocator as Allocator<N, R, C>>::Buffer as Storage<N, R, C>>::RStride;
/// The column-stride of the owned data storage for a buffer of dimension `(R, C)`.
pub type CStride<N, R, C = U1> =
<<DefaultAllocator as Allocator<N, R, C>>::Buffer as Storage<N, R, C>>::CStride;
/// The trait shared by all matrix data storage.
///
/// FIXME: doc
///
/// Note that `Self` must always have a number of elements compatible with the matrix length (given
/// by `R` and `C` if they are known at compile-time). For example, implementors of this trait
/// should **not** allow the user to modify the size of the underlying buffer with safe methods
/// (for example, the `VecStorage::data_mut` method is unsafe because the user could change the
/// vector's size so that it no longer contains enough elements: this would lead to undefined behavior).
pub unsafe trait Storage<N: Scalar, R: Dim, C: Dim = U1>: Debug + Sized {
/// The static stride of this storage's rows.
type RStride: Dim;
/// The static stride of this storage's columns.
type CStride: Dim;
/// The matrix data pointer.
fn ptr(&self) -> *const N;
/// The dimension of the matrix at run-time. A length of zero indicates the additive identity
/// element of any dimension. Must be equal to `Self::dimension()` if it is not `None`.
fn shape(&self) -> (R, C);
/// The spacing between consecutive row elements and consecutive column elements.
///
/// For example, this returns `(1, 5)` for a column-major matrix with 5 rows.
fn strides(&self) -> (Self::RStride, Self::CStride);
/// Computes the index corresponding to the `irow`-th row and `icol`-th column of this matrix.
/// The index must be such that the following holds:
///
/// ```ignore
/// let lindex = self.linear_index(irow, icol);
/// assert!(*self.get_unchecked(irow, icol) == *self.get_unchecked_linear(lindex));
/// ```
#[inline]
fn linear_index(&self, irow: usize, icol: usize) -> usize {
let (rstride, cstride) = self.strides();
irow * rstride.value() + icol * cstride.value()
}
/// Gets the address of the i-th matrix component without performing bound-checking.
#[inline]
unsafe fn get_address_unchecked_linear(&self, i: usize) -> *const N {
self.ptr().wrapping_offset(i as isize)
}
/// Gets the address of the i-th matrix component without performing bound-checking.
#[inline]
unsafe fn get_address_unchecked(&self, irow: usize, icol: usize) -> *const N {
self.get_address_unchecked_linear(self.linear_index(irow, icol))
}
/// Retrieves a reference to the i-th element without bound-checking.
#[inline]
unsafe fn get_unchecked_linear(&self, i: usize) -> &N {
&*self.get_address_unchecked_linear(i)
}
/// Retrieves a reference to the element at `(irow, icol)` without bound-checking.
#[inline]
unsafe fn get_unchecked(&self, irow: usize, icol: usize) -> &N {
self.get_unchecked_linear(self.linear_index(irow, icol))
}
/// Indicates whether this data buffer stores its elements contiguously.
fn is_contiguous(&self) -> bool;
/// Retrieves the data buffer as a contiguous slice.
///
/// The matrix components may not be stored in a contiguous way, depending on the strides.
fn as_slice(&self) -> &[N];
/// Builds a matrix data storage that does not contain any reference.
fn into_owned(self) -> Owned<N, R, C>
where
DefaultAllocator: Allocator<N, R, C>;
/// Clones this data storage to one that does not contain any reference.
fn clone_owned(&self) -> Owned<N, R, C>
where
DefaultAllocator: Allocator<N, R, C>;
}
/// Trait implemented by matrix data storage that can provide a mutable access to its elements.
///
/// Note that a mutable access does not mean that the matrix owns its data. For example, a mutable
/// matrix slice can provide mutable access to its elements even if it does not own its data (it
/// contains only an internal reference to them).
pub unsafe trait StorageMut<N: Scalar, R: Dim, C: Dim = U1>: Storage<N, R, C> {
/// The matrix mutable data pointer.
fn ptr_mut(&mut self) -> *mut N;
/// Gets the mutable address of the i-th matrix component without performing bound-checking.
#[inline]
unsafe fn get_address_unchecked_linear_mut(&mut self, i: usize) -> *mut N {
self.ptr_mut().wrapping_offset(i as isize)
}
/// Gets the mutable address of the i-th matrix component without performing bound-checking.
#[inline]
unsafe fn get_address_unchecked_mut(&mut self, irow: usize, icol: usize) -> *mut N {
let lid = self.linear_index(irow, icol);
self.get_address_unchecked_linear_mut(lid)
}
/// Retrieves a mutable reference to the i-th element without bound-checking.
unsafe fn get_unchecked_linear_mut(&mut self, i: usize) -> &mut N {
&mut *self.get_address_unchecked_linear_mut(i)
}
/// Retrieves a mutable reference to the element at `(irow, icol)` without bound-checking.
#[inline]
unsafe fn get_unchecked_mut(&mut self, irow: usize, icol: usize) -> &mut N {
&mut *self.get_address_unchecked_mut(irow, icol)
}
/// Swaps two elements using their linear index without bound-checking.
#[inline]
unsafe fn swap_unchecked_linear(&mut self, i1: usize, i2: usize) {
let a = self.get_address_unchecked_linear_mut(i1);
let b = self.get_address_unchecked_linear_mut(i2);
mem::swap(&mut *a, &mut *b);
}
/// Swaps two elements without bound-checking.
#[inline]
unsafe fn swap_unchecked(&mut self, row_col1: (usize, usize), row_col2: (usize, usize)) {
let lid1 = self.linear_index(row_col1.0, row_col1.1);
let lid2 = self.linear_index(row_col2.0, row_col2.1);
self.swap_unchecked_linear(lid1, lid2)
}
/// Retrieves the mutable data buffer as a contiguous slice.
///
/// Matrix components may not be contiguous, depending on its strides.
fn as_mut_slice(&mut self) -> &mut [N];
}
/// A matrix storage that is stored contiguously in memory.
///
/// The storage requirement means that for any value of `i` in `[0, nrows * ncols)`, the value
|
/// failing to comply with this requirement may cause undefined behavior.
pub unsafe trait ContiguousStorage<N: Scalar, R: Dim, C: Dim = U1>:
Storage<N, R, C>
{
}
/// A mutable matrix storage that is stored contiguously in memory.
///
/// The storage requirement means that for any value of `i` in `[0, nrows * ncols)`, the value
/// returned by `.get_unchecked_linear` is one of the matrix components. This trait is unsafe
/// because failing to comply with this requirement may cause undefined behavior.
pub unsafe trait ContiguousStorageMut<N: Scalar, R: Dim, C: Dim = U1>:
ContiguousStorage<N, R, C> + StorageMut<N, R, C>
{
}
|
/// `.get_unchecked_linear` returns one of the matrix components. This trait is unsafe because
|
random_line_split
|
lib.rs
|
// Copyright (c) 2016-2017 Chef Software Inc. and/or applicable contributors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
|
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![cfg_attr(feature="clippy", feature(plugin))]
#![cfg_attr(feature="clippy", plugin(clippy))]
extern crate hab;
extern crate habitat_core as hcore;
extern crate habitat_common as common;
extern crate habitat_butterfly as butterfly;
#[macro_use]
extern crate clap;
extern crate log;
extern crate toml;
pub use hab::config;
pub use hab::error;
pub use hab::analytics;
pub mod cli;
pub mod command;
pub const PRODUCT: &'static str = "hab-butterfly";
pub const VERSION: &'static str = include_str!(concat!(env!("OUT_DIR"), "/VERSION"));
|
//
|
random_line_split
|
id_type.rs
|
// Copyright 2015 MaidSafe.net limited.
//
// This SAFE Network Software is licensed to you under (1) the MaidSafe.net Commercial License,
// version 1.0 or later, or (2) The General Public License (GPL), version 3, depending on which
// licence you accepted on initial access to the Software (the "Licences").
//
// By contributing code to the SAFE Network Software, or to this project generally, you agree to be
// bound by the terms of the MaidSafe Contributor Agreement, version 1.0. This, along with the
// Licenses can be found in the root directory of this project at LICENSE, COPYING and CONTRIBUTOR.
//
// Unless required by applicable law or agreed to in writing, the SAFE Network Software distributed
// under the GPL Licence is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied.
//
// Please review the Licences for the specific language governing permissions and limitations
// relating to use of the SAFE Network Software.
use cbor::CborTagEncode;
use rustc_serialize::{Decodable, Decoder, Encodable, Encoder};
use sodiumoxide::crypto;
use helper::*;
use super::revocation_id_type::*;
use std::fmt;
use routing::NameType;
/// IdType
///
/// # Examples
/// ```
/// use maidsafe_types::{IdType, RevocationIdType, MaidTypeTags};
/// // Creating new IdType
/// let maid: IdType = IdType::new(&RevocationIdType::new::<MaidTypeTags>());
///
/// ```
#[derive(Clone)]
pub struct IdType {
type_tag: u64,
public_keys: (crypto::sign::PublicKey, crypto::asymmetricbox::PublicKey),
secret_keys: (crypto::sign::SecretKey, crypto::asymmetricbox::SecretKey)
}
impl IdType {
/// Invoked to create an instance of IdType
pub fn new(revocation_id: &RevocationIdType) -> IdType {
let asym_keys = crypto::asymmetricbox::gen_keypair();
let signing_keys = crypto::sign::gen_keypair();
IdType {
type_tag: revocation_id.type_tags().1,
public_keys: (signing_keys.0, asym_keys.0),
secret_keys: (signing_keys.1, asym_keys.1)
}
}
/// Returns name
pub fn name(&self) -> NameType {
let combined_iter = (&self.public_keys.0).0.into_iter().chain((&self.public_keys.1).0.into_iter());
let mut combined: Vec<u8> = Vec::new();
for iter in combined_iter {
combined.push(*iter);
}
for i in self.type_tag.to_string().into_bytes().into_iter() {
combined.push(i);
}
NameType(crypto::hash::sha512::hash(&combined).0)
}
/// Returns the PublicKeys
pub fn public_keys(&self) -> &(crypto::sign::PublicKey, crypto::asymmetricbox::PublicKey){
&self.public_keys
}
/// Returns the SecretKeys
pub fn secret_keys(&self) -> &(crypto::sign::SecretKey, crypto::asymmetricbox::SecretKey) {
&self.secret_keys
}
/// Signs the data with the SecretKey and returns the Signed data
pub fn sign(&self, data : &[u8]) -> Vec<u8> {
return crypto::sign::sign(&data, &self.secret_keys.0)
}
/// Encrypts and authenticates data. It returns a ciphertext and the Nonce.
pub fn seal(&self, data : &[u8], to : &crypto::asymmetricbox::PublicKey) -> (Vec<u8>, crypto::asymmetricbox::Nonce) {
let nonce = crypto::asymmetricbox::gen_nonce();
let sealed = crypto::asymmetricbox::seal(data, &nonce, &to, &self.secret_keys.1);
return (sealed, nonce);
}
/// Verifies and decrypts the data
pub fn open(
&self,
data : &[u8],
nonce : &crypto::asymmetricbox::Nonce,
from : &crypto::asymmetricbox::PublicKey) -> Result<Vec<u8>, ::CryptoError> {
return crypto::asymmetricbox::open(&data, &nonce, &from, &self.secret_keys.1).ok_or(::CryptoError::Unknown);
}
}
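`IdType::name` hashes the concatenation of both public keys followed by the decimal bytes of the type tag. A sketch of just that input-building step, with the SHA-512 call left out so the snippet stays dependency-free (`name_input` is a hypothetical helper, not part of the type):

```rust
// Build the byte string that `IdType::name` feeds to sha512: both public-key
// byte arrays back to back, then the type tag rendered as a decimal string.
fn name_input(sign_pk: &[u8], asym_pk: &[u8], type_tag: u64) -> Vec<u8> {
    let mut combined: Vec<u8> = Vec::new();
    combined.extend_from_slice(sign_pk);
    combined.extend_from_slice(asym_pk);
    // Note: the tag contributes its ASCII digits, not its raw little-endian bytes.
    combined.extend(type_tag.to_string().into_bytes());
    combined
}

fn main() {
    // "107" appends the bytes b'1', b'0', b'7' == [49, 48, 55].
    let input = name_input(&[1, 2], &[3, 4], 107);
    assert_eq!(input, vec![1, 2, 3, 4, 49, 48, 55]);
    println!("{}", input.len());
}
```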
impl PartialEq for IdType {
fn eq(&self, other: &IdType) -> bool {
// Private keys are mathematically linked, so just check public keys
&self.type_tag == &other.type_tag &&
slice_equal(&self.public_keys.0.0, &other.public_keys.0.0) &&
slice_equal(&self.public_keys.1.0, &other.public_keys.1.0)
}
}
impl fmt::Debug for IdType {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "IdType {{ type_tag:{}, public_keys: ({:?}, {:?}) }}", self.type_tag, self.public_keys.0.0.to_vec(), self.public_keys.1.0.to_vec())
}
}
impl Encodable for IdType {
fn encode<E: Encoder>(&self, e: &mut E)->Result<(), E::Error> {
let (crypto::sign::PublicKey(pub_sign_vec), crypto::asymmetricbox::PublicKey(pub_asym_vec)) = self.public_keys;
let (crypto::sign::SecretKey(sec_sign_vec), crypto::asymmetricbox::SecretKey(sec_asym_vec)) = self.secret_keys;
let type_vec = self.type_tag.to_string().into_bytes();
CborTagEncode::new(5483_001, &(
type_vec,
pub_sign_vec.as_ref(),
pub_asym_vec.as_ref(),
sec_sign_vec.as_ref(),
sec_asym_vec.as_ref())).encode(e)
}
}
impl Decodable for IdType {
fn decode<D: Decoder>(d: &mut D)-> Result<IdType, D::Error> {
try!(d.read_u64());
let (tag_type_vec, pub_sign_vec, pub_asym_vec, sec_sign_vec, sec_asym_vec) : (Vec<u8>, Vec<u8>, Vec<u8>, Vec<u8>, Vec<u8>) = try!(Decodable::decode(d));
let pub_sign_arr = convert_to_array!(pub_sign_vec, crypto::sign::PUBLICKEYBYTES);
let pub_asym_arr = convert_to_array!(pub_asym_vec, crypto::asymmetricbox::PUBLICKEYBYTES);
let sec_sign_arr = convert_to_array!(sec_sign_vec, crypto::sign::SECRETKEYBYTES);
let sec_asym_arr = convert_to_array!(sec_asym_vec, crypto::asymmetricbox::SECRETKEYBYTES);
if pub_sign_arr.is_none() || pub_asym_arr.is_none() || sec_sign_arr.is_none() || sec_asym_arr.is_none() {
return Err(d.error("Bad IdType size"));
}
let type_tag: u64 = match String::from_utf8(tag_type_vec) {
Ok(string) => {
match string.parse::<u64>() {
Ok(type_tag) => type_tag,
Err(_) => return Err(d.error("Bad Tag Type"))
}
},
Err(_) => return Err(d.error("Bad Tag Type"))
};
Ok(IdType{ type_tag: type_tag,
public_keys:(crypto::sign::PublicKey(pub_sign_arr.unwrap()), crypto::asymmetricbox::PublicKey(pub_asym_arr.unwrap())),
secret_keys: (crypto::sign::SecretKey(sec_sign_arr.unwrap()), crypto::asymmetricbox::SecretKey(sec_asym_arr.unwrap())) })
}
}
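The tag round-trips through `decode` as the UTF-8 bytes of its decimal representation, so recovery involves two fallible steps: UTF-8 validation, then integer parsing. A sketch of that parse, factored into a hypothetical helper that mirrors the error string used above:

```rust
// Recover a u64 type tag from the decimal UTF-8 bytes it was encoded as,
// failing with the same "Bad Tag Type" message on either step.
fn parse_type_tag(tag_type_vec: Vec<u8>) -> Result<u64, &'static str> {
    String::from_utf8(tag_type_vec)
        .map_err(|_| "Bad Tag Type")
        .and_then(|s| s.parse::<u64>().map_err(|_| "Bad Tag Type"))
}

fn main() {
    assert_eq!(parse_type_tag(b"5483".to_vec()), Ok(5483));
    // 0xff alone is not valid UTF-8, so the first step already fails.
    assert_eq!(parse_type_tag(vec![0xff]), Err("Bad Tag Type"));
    println!("{:?}", parse_type_tag(b"42".to_vec()));
}
```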
#[cfg(test)]
mod test {
extern crate rand;
use super::*;
use self::rand::Rng;
use cbor;
use super::super::RevocationIdType;
use sodiumoxide::crypto;
use Random;
use MaidTypeTags;
impl Random for IdType {
fn
|
() -> IdType {
IdType::new(&RevocationIdType::new::<MaidTypeTags>())
}
}
#[test]
fn serialisation_maid() {
use helper::*;
let obj_before = IdType::generate_random();
let mut e = cbor::Encoder::from_memory();
e.encode(&[&obj_before]).unwrap();
let mut d = cbor::Decoder::from_bytes(e.as_bytes());
let obj_after: IdType = d.decode().next().unwrap().unwrap();
let &(crypto::sign::PublicKey(pub_sign_arr_before), crypto::asymmetricbox::PublicKey(pub_asym_arr_before)) = obj_before.public_keys();
let &(crypto::sign::PublicKey(pub_sign_arr_after), crypto::asymmetricbox::PublicKey(pub_asym_arr_after)) = obj_after.public_keys();
let &(crypto::sign::SecretKey(sec_sign_arr_before), crypto::asymmetricbox::SecretKey(sec_asym_arr_before)) = &obj_before.secret_keys;
let &(crypto::sign::SecretKey(sec_sign_arr_after), crypto::asymmetricbox::SecretKey(sec_asym_arr_after)) = &obj_after.secret_keys;
assert_eq!(pub_sign_arr_before, pub_sign_arr_after);
assert_eq!(pub_asym_arr_before, pub_asym_arr_after);
assert!(slice_equal(&sec_sign_arr_before, &sec_sign_arr_after));
assert_eq!(sec_asym_arr_before, sec_asym_arr_after);
}
#[test]
fn generation() {
let maid1 = IdType::generate_random();
let maid2 = IdType::generate_random();
let maid2_clone = maid2.clone();
assert_eq!(maid2, maid2_clone);
assert!(!(maid2 != maid2_clone));
assert!(maid1 != maid2);
let random_bytes = rand::thread_rng().gen_iter::<u8>().take(100).collect::<Vec<u8>>();
{
let sign1 = maid1.sign(&random_bytes);
let sign2 = maid2.sign(&random_bytes);
assert!(sign1 != sign2);
assert!(crypto::sign::verify(&sign1, &maid1.public_keys().0).is_some());
assert!(crypto::sign::verify(&sign2, &maid1.public_keys().0).is_none());
assert!(crypto::sign::verify(&sign2, &maid2.public_keys().0).is_some());
assert!(crypto::sign::verify(&sign2, &maid1.public_keys().0).is_none());
}
{
let maid3 = IdType::generate_random();
let encrypt1 = maid1.seal(&random_bytes, &maid3.public_keys().1);
let encrypt2 = maid2.seal(&random_bytes, &maid3.public_keys().1);
assert!(encrypt1.0 != encrypt2.0);
assert!(maid3.open(&encrypt1.0, &encrypt1.1, &maid1.public_keys().1).is_ok());
assert!(maid3.open(&encrypt1.0, &encrypt1.1, &maid2.public_keys().1).is_err());
assert!(maid3.open(&encrypt2.0, &encrypt2.1, &maid2.public_keys().1).is_ok());
assert!(maid3.open(&encrypt2.0, &encrypt2.1, &maid1.public_keys().1).is_err());
}
}
}
|
generate_random
|
identifier_name
|
id_type.rs
|
// Copyright 2015 MaidSafe.net limited.
//
// This SAFE Network Software is licensed to you under (1) the MaidSafe.net Commercial License,
// version 1.0 or later, or (2) The General Public License (GPL), version 3, depending on which
// licence you accepted on initial access to the Software (the "Licences").
//
// By contributing code to the SAFE Network Software, or to this project generally, you agree to be
// bound by the terms of the MaidSafe Contributor Agreement, version 1.0. This, along with the
// Licenses can be found in the root directory of this project at LICENSE, COPYING and CONTRIBUTOR.
//
// Unless required by applicable law or agreed to in writing, the SAFE Network Software distributed
// under the GPL Licence is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied.
//
// Please review the Licences for the specific language governing permissions and limitations
// relating to use of the SAFE Network Software.
use cbor::CborTagEncode;
use rustc_serialize::{Decodable, Decoder, Encodable, Encoder};
use sodiumoxide::crypto;
use helper::*;
use super::revocation_id_type::*;
use std::fmt;
use routing::NameType;
/// IdType
///
/// # Examples
/// ```
/// use maidsafe_types::{IdType, RevocationIdType, MaidTypeTags};
/// // Creating new IdType
/// let maid: IdType = IdType::new(&RevocationIdType::new::<MaidTypeTags>());
///
/// ```
#[derive(Clone)]
pub struct IdType {
type_tag: u64,
public_keys: (crypto::sign::PublicKey, crypto::asymmetricbox::PublicKey),
secret_keys: (crypto::sign::SecretKey, crypto::asymmetricbox::SecretKey)
}
impl IdType {
/// Invoked to create an instance of IdType
pub fn new(revocation_id: &RevocationIdType) -> IdType {
let asym_keys = crypto::asymmetricbox::gen_keypair();
let signing_keys = crypto::sign::gen_keypair();
IdType {
type_tag: revocation_id.type_tags().1,
public_keys: (signing_keys.0, asym_keys.0),
secret_keys: (signing_keys.1, asym_keys.1)
}
}
/// Returns name
pub fn name(&self) -> NameType {
let combined_iter = (&self.public_keys.0).0.into_iter().chain((&self.public_keys.1).0.into_iter());
let mut combined: Vec<u8> = Vec::new();
for iter in combined_iter {
combined.push(*iter);
}
for i in self.type_tag.to_string().into_bytes().into_iter() {
combined.push(i);
}
NameType(crypto::hash::sha512::hash(&combined).0)
}
/// Returns the PublicKeys
pub fn public_keys(&self) -> &(crypto::sign::PublicKey, crypto::asymmetricbox::PublicKey){
&self.public_keys
}
/// Returns the SecretKeys
pub fn secret_keys(&self) -> &(crypto::sign::SecretKey, crypto::asymmetricbox::SecretKey) {
&self.secret_keys
}
/// Signs the data with the SecretKey and returns the Signed data
pub fn sign(&self, data : &[u8]) -> Vec<u8> {
return crypto::sign::sign(&data, &self.secret_keys.0)
}
/// Encrypts and authenticates data. It returns a ciphertext and the Nonce.
pub fn seal(&self, data : &[u8], to : &crypto::asymmetricbox::PublicKey) -> (Vec<u8>, crypto::asymmetricbox::Nonce) {
let nonce = crypto::asymmetricbox::gen_nonce();
let sealed = crypto::asymmetricbox::seal(data, &nonce, &to, &self.secret_keys.1);
return (sealed, nonce);
}
/// Verifies and decrypts the data
pub fn open(
&self,
data : &[u8],
nonce : &crypto::asymmetricbox::Nonce,
from : &crypto::asymmetricbox::PublicKey) -> Result<Vec<u8>, ::CryptoError> {
return crypto::asymmetricbox::open(&data, &nonce, &from, &self.secret_keys.1).ok_or(::CryptoError::Unknown);
}
}
impl PartialEq for IdType {
fn eq(&self, other: &IdType) -> bool {
// Private keys are mathematically linked, so just check public keys
&self.type_tag == &other.type_tag &&
slice_equal(&self.public_keys.0.0, &other.public_keys.0.0) &&
slice_equal(&self.public_keys.1.0, &other.public_keys.1.0)
}
}
impl fmt::Debug for IdType {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "IdType {{ type_tag:{}, public_keys: ({:?}, {:?}) }}", self.type_tag, self.public_keys.0.0.to_vec(), self.public_keys.1.0.to_vec())
}
}
impl Encodable for IdType {
fn encode<E: Encoder>(&self, e: &mut E)->Result<(), E::Error> {
let (crypto::sign::PublicKey(pub_sign_vec), crypto::asymmetricbox::PublicKey(pub_asym_vec)) = self.public_keys;
let (crypto::sign::SecretKey(sec_sign_vec), crypto::asymmetricbox::SecretKey(sec_asym_vec)) = self.secret_keys;
let type_vec = self.type_tag.to_string().into_bytes();
CborTagEncode::new(5483_001, &(
type_vec,
pub_sign_vec.as_ref(),
pub_asym_vec.as_ref(),
sec_sign_vec.as_ref(),
sec_asym_vec.as_ref())).encode(e)
}
}
impl Decodable for IdType {
fn decode<D: Decoder>(d: &mut D)-> Result<IdType, D::Error> {
try!(d.read_u64());
let (tag_type_vec, pub_sign_vec, pub_asym_vec, sec_sign_vec, sec_asym_vec) : (Vec<u8>, Vec<u8>, Vec<u8>, Vec<u8>, Vec<u8>) = try!(Decodable::decode(d));
let pub_sign_arr = convert_to_array!(pub_sign_vec, crypto::sign::PUBLICKEYBYTES);
let pub_asym_arr = convert_to_array!(pub_asym_vec, crypto::asymmetricbox::PUBLICKEYBYTES);
let sec_sign_arr = convert_to_array!(sec_sign_vec, crypto::sign::SECRETKEYBYTES);
let sec_asym_arr = convert_to_array!(sec_asym_vec, crypto::asymmetricbox::SECRETKEYBYTES);
if pub_sign_arr.is_none() || pub_asym_arr.is_none() || sec_sign_arr.is_none() || sec_asym_arr.is_none()
|
let type_tag: u64 = match String::from_utf8(tag_type_vec) {
Ok(string) => {
match string.parse::<u64>() {
Ok(type_tag) => type_tag,
Err(_) => return Err(d.error("Bad Tag Type"))
}
},
Err(_) => return Err(d.error("Bad Tag Type"))
};
Ok(IdType{ type_tag: type_tag,
public_keys:(crypto::sign::PublicKey(pub_sign_arr.unwrap()), crypto::asymmetricbox::PublicKey(pub_asym_arr.unwrap())),
secret_keys: (crypto::sign::SecretKey(sec_sign_arr.unwrap()), crypto::asymmetricbox::SecretKey(sec_asym_arr.unwrap())) })
}
}
#[cfg(test)]
mod test {
extern crate rand;
use super::*;
use self::rand::Rng;
use cbor;
use super::super::RevocationIdType;
use sodiumoxide::crypto;
use Random;
use MaidTypeTags;
impl Random for IdType {
fn generate_random() -> IdType {
IdType::new(&RevocationIdType::new::<MaidTypeTags>())
}
}
#[test]
fn serialisation_maid() {
use helper::*;
let obj_before = IdType::generate_random();
let mut e = cbor::Encoder::from_memory();
e.encode(&[&obj_before]).unwrap();
let mut d = cbor::Decoder::from_bytes(e.as_bytes());
let obj_after: IdType = d.decode().next().unwrap().unwrap();
let &(crypto::sign::PublicKey(pub_sign_arr_before), crypto::asymmetricbox::PublicKey(pub_asym_arr_before)) = obj_before.public_keys();
let &(crypto::sign::PublicKey(pub_sign_arr_after), crypto::asymmetricbox::PublicKey(pub_asym_arr_after)) = obj_after.public_keys();
let &(crypto::sign::SecretKey(sec_sign_arr_before), crypto::asymmetricbox::SecretKey(sec_asym_arr_before)) = &obj_before.secret_keys;
let &(crypto::sign::SecretKey(sec_sign_arr_after), crypto::asymmetricbox::SecretKey(sec_asym_arr_after)) = &obj_after.secret_keys;
assert_eq!(pub_sign_arr_before, pub_sign_arr_after);
assert_eq!(pub_asym_arr_before, pub_asym_arr_after);
assert!(slice_equal(&sec_sign_arr_before, &sec_sign_arr_after));
assert_eq!(sec_asym_arr_before, sec_asym_arr_after);
}
#[test]
fn generation() {
let maid1 = IdType::generate_random();
let maid2 = IdType::generate_random();
let maid2_clone = maid2.clone();
assert_eq!(maid2, maid2_clone);
assert!(!(maid2 != maid2_clone));
assert!(maid1 != maid2);
let random_bytes = rand::thread_rng().gen_iter::<u8>().take(100).collect::<Vec<u8>>();
{
let sign1 = maid1.sign(&random_bytes);
let sign2 = maid2.sign(&random_bytes);
assert!(sign1 != sign2);
assert!(crypto::sign::verify(&sign1, &maid1.public_keys().0).is_some());
assert!(crypto::sign::verify(&sign2, &maid1.public_keys().0).is_none());
assert!(crypto::sign::verify(&sign2, &maid2.public_keys().0).is_some());
assert!(crypto::sign::verify(&sign2, &maid1.public_keys().0).is_none());
}
{
let maid3 = IdType::generate_random();
let encrypt1 = maid1.seal(&random_bytes, &maid3.public_keys().1);
let encrypt2 = maid2.seal(&random_bytes, &maid3.public_keys().1);
assert!(encrypt1.0 != encrypt2.0);
assert!(maid3.open(&encrypt1.0, &encrypt1.1, &maid1.public_keys().1).is_ok());
assert!(maid3.open(&encrypt1.0, &encrypt1.1, &maid2.public_keys().1).is_err());
assert!(maid3.open(&encrypt2.0, &encrypt2.1, &maid2.public_keys().1).is_ok());
assert!(maid3.open(&encrypt2.0, &encrypt2.1, &maid1.public_keys().1).is_err());
}
}
}
|
{
return Err(d.error("Bad IdType size"));
}
|
conditional_block
|
id_type.rs
|
// Copyright 2015 MaidSafe.net limited.
//
// This SAFE Network Software is licensed to you under (1) the MaidSafe.net Commercial License,
// version 1.0 or later, or (2) The General Public License (GPL), version 3, depending on which
// licence you accepted on initial access to the Software (the "Licences").
//
// By contributing code to the SAFE Network Software, or to this project generally, you agree to be
|
//
// Unless required by applicable law or agreed to in writing, the SAFE Network Software distributed
// under the GPL Licence is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied.
//
// Please review the Licences for the specific language governing permissions and limitations
// relating to use of the SAFE Network Software.
use cbor::CborTagEncode;
use rustc_serialize::{Decodable, Decoder, Encodable, Encoder};
use sodiumoxide::crypto;
use helper::*;
use super::revocation_id_type::*;
use std::fmt;
use routing::NameType;
/// IdType
///
/// # Examples
/// ```
/// use maidsafe_types::{IdType, RevocationIdType, MaidTypeTags};
/// // Creating new IdType
/// let maid: IdType = IdType::new(&RevocationIdType::new::<MaidTypeTags>());
///
/// ```
#[derive(Clone)]
pub struct IdType {
type_tag: u64,
public_keys: (crypto::sign::PublicKey, crypto::asymmetricbox::PublicKey),
secret_keys: (crypto::sign::SecretKey, crypto::asymmetricbox::SecretKey)
}
impl IdType {
/// Invoked to create an instance of IdType
pub fn new(revocation_id: &RevocationIdType) -> IdType {
let asym_keys = crypto::asymmetricbox::gen_keypair();
let signing_keys = crypto::sign::gen_keypair();
IdType {
type_tag: revocation_id.type_tags().1,
public_keys: (signing_keys.0, asym_keys.0),
secret_keys: (signing_keys.1, asym_keys.1)
}
}
/// Returns name
pub fn name(&self) -> NameType {
let combined_iter = (&self.public_keys.0).0.into_iter().chain((&self.public_keys.1).0.into_iter());
let mut combined: Vec<u8> = Vec::new();
for iter in combined_iter {
combined.push(*iter);
}
for i in self.type_tag.to_string().into_bytes().into_iter() {
combined.push(i);
}
NameType(crypto::hash::sha512::hash(&combined).0)
}
/// Returns the PublicKeys
pub fn public_keys(&self) -> &(crypto::sign::PublicKey, crypto::asymmetricbox::PublicKey){
&self.public_keys
}
/// Returns the SecretKeys
pub fn secret_keys(&self) -> &(crypto::sign::SecretKey, crypto::asymmetricbox::SecretKey) {
&self.secret_keys
}
/// Signs the data with the SecretKey and returns the Signed data
pub fn sign(&self, data : &[u8]) -> Vec<u8> {
return crypto::sign::sign(&data, &self.secret_keys.0)
}
/// Encrypts and authenticates data. It returns a ciphertext and the Nonce.
pub fn seal(&self, data : &[u8], to : &crypto::asymmetricbox::PublicKey) -> (Vec<u8>, crypto::asymmetricbox::Nonce) {
let nonce = crypto::asymmetricbox::gen_nonce();
let sealed = crypto::asymmetricbox::seal(data, &nonce, &to, &self.secret_keys.1);
return (sealed, nonce);
}
/// Verifies and decrypts the data
pub fn open(
&self,
data : &[u8],
nonce : &crypto::asymmetricbox::Nonce,
from : &crypto::asymmetricbox::PublicKey) -> Result<Vec<u8>, ::CryptoError> {
return crypto::asymmetricbox::open(&data, &nonce, &from, &self.secret_keys.1).ok_or(::CryptoError::Unknown);
}
}
impl PartialEq for IdType {
fn eq(&self, other: &IdType) -> bool {
// Private keys are mathematically linked, so just check public keys
&self.type_tag == &other.type_tag &&
slice_equal(&self.public_keys.0.0, &other.public_keys.0.0) &&
slice_equal(&self.public_keys.1.0, &other.public_keys.1.0)
}
}
impl fmt::Debug for IdType {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "IdType {{ type_tag:{}, public_keys: ({:?}, {:?}) }}", self.type_tag, self.public_keys.0.0.to_vec(), self.public_keys.1.0.to_vec())
}
}
impl Encodable for IdType {
fn encode<E: Encoder>(&self, e: &mut E) -> Result<(), E::Error> {
let (crypto::sign::PublicKey(pub_sign_vec), crypto::asymmetricbox::PublicKey(pub_asym_vec)) = self.public_keys;
let (crypto::sign::SecretKey(sec_sign_vec), crypto::asymmetricbox::SecretKey(sec_asym_vec)) = self.secret_keys;
let type_vec = self.type_tag.to_string().into_bytes();
CborTagEncode::new(5483_001, &(
type_vec,
pub_sign_vec.as_ref(),
pub_asym_vec.as_ref(),
sec_sign_vec.as_ref(),
sec_asym_vec.as_ref())).encode(e)
}
}
impl Decodable for IdType {
fn decode<D: Decoder>(d: &mut D) -> Result<IdType, D::Error> {
try!(d.read_u64());
let (tag_type_vec, pub_sign_vec, pub_asym_vec, sec_sign_vec, sec_asym_vec) : (Vec<u8>, Vec<u8>, Vec<u8>, Vec<u8>, Vec<u8>) = try!(Decodable::decode(d));
let pub_sign_arr = convert_to_array!(pub_sign_vec, crypto::sign::PUBLICKEYBYTES);
let pub_asym_arr = convert_to_array!(pub_asym_vec, crypto::asymmetricbox::PUBLICKEYBYTES);
let sec_sign_arr = convert_to_array!(sec_sign_vec, crypto::sign::SECRETKEYBYTES);
let sec_asym_arr = convert_to_array!(sec_asym_vec, crypto::asymmetricbox::SECRETKEYBYTES);
if pub_sign_arr.is_none() || pub_asym_arr.is_none() || sec_sign_arr.is_none() || sec_asym_arr.is_none() {
return Err(d.error("Bad IdType size"));
}
let type_tag: u64 = match String::from_utf8(tag_type_vec) {
Ok(string) => {
match string.parse::<u64>() {
Ok(type_tag) => type_tag,
Err(_) => return Err(d.error("Bad Tag Type"))
}
},
Err(_) => return Err(d.error("Bad Tag Type"))
};
Ok(IdType{ type_tag: type_tag,
public_keys:(crypto::sign::PublicKey(pub_sign_arr.unwrap()), crypto::asymmetricbox::PublicKey(pub_asym_arr.unwrap())),
secret_keys: (crypto::sign::SecretKey(sec_sign_arr.unwrap()), crypto::asymmetricbox::SecretKey(sec_asym_arr.unwrap())) })
}
}
#[cfg(test)]
mod test {
extern crate rand;
use super::*;
use self::rand::Rng;
use cbor;
use super::super::RevocationIdType;
use sodiumoxide::crypto;
use Random;
use MaidTypeTags;
impl Random for IdType {
fn generate_random() -> IdType {
IdType::new(&RevocationIdType::new::<MaidTypeTags>())
}
}
#[test]
fn serialisation_maid() {
use helper::*;
let obj_before = IdType::generate_random();
let mut e = cbor::Encoder::from_memory();
e.encode(&[&obj_before]).unwrap();
let mut d = cbor::Decoder::from_bytes(e.as_bytes());
let obj_after: IdType = d.decode().next().unwrap().unwrap();
let &(crypto::sign::PublicKey(pub_sign_arr_before), crypto::asymmetricbox::PublicKey(pub_asym_arr_before)) = obj_before.public_keys();
let &(crypto::sign::PublicKey(pub_sign_arr_after), crypto::asymmetricbox::PublicKey(pub_asym_arr_after)) = obj_after.public_keys();
let &(crypto::sign::SecretKey(sec_sign_arr_before), crypto::asymmetricbox::SecretKey(sec_asym_arr_before)) = &obj_before.secret_keys;
let &(crypto::sign::SecretKey(sec_sign_arr_after), crypto::asymmetricbox::SecretKey(sec_asym_arr_after)) = &obj_after.secret_keys;
assert_eq!(pub_sign_arr_before, pub_sign_arr_after);
assert_eq!(pub_asym_arr_before, pub_asym_arr_after);
assert!(slice_equal(&sec_sign_arr_before, &sec_sign_arr_after));
assert_eq!(sec_asym_arr_before, sec_asym_arr_after);
}
#[test]
fn generation() {
let maid1 = IdType::generate_random();
let maid2 = IdType::generate_random();
let maid2_clone = maid2.clone();
assert_eq!(maid2, maid2_clone);
assert!(!(maid2 != maid2_clone));
assert!(maid1 != maid2);
let random_bytes = rand::thread_rng().gen_iter::<u8>().take(100).collect::<Vec<u8>>();
{
let sign1 = maid1.sign(&random_bytes);
let sign2 = maid2.sign(&random_bytes);
assert!(sign1 != sign2);
assert!(crypto::sign::verify(&sign1, &maid1.public_keys().0).is_some());
assert!(crypto::sign::verify(&sign2, &maid1.public_keys().0).is_none());
assert!(crypto::sign::verify(&sign2, &maid2.public_keys().0).is_some());
assert!(crypto::sign::verify(&sign2, &maid1.public_keys().0).is_none());
}
{
let maid3 = IdType::generate_random();
let encrypt1 = maid1.seal(&random_bytes, &maid3.public_keys().1);
let encrypt2 = maid2.seal(&random_bytes, &maid3.public_keys().1);
assert!(encrypt1.0 != encrypt2.0);
assert!(maid3.open(&encrypt1.0, &encrypt1.1, &maid1.public_keys().1).is_ok());
assert!(maid3.open(&encrypt1.0, &encrypt1.1, &maid2.public_keys().1).is_err());
assert!(maid3.open(&encrypt2.0, &encrypt2.1, &maid2.public_keys().1).is_ok());
assert!(maid3.open(&encrypt2.0, &encrypt2.1, &maid1.public_keys().1).is_err());
}
}
}
|
// bound by the terms of the MaidSafe Contributor Agreement, version 1.0. This, along with the
// Licenses can be found in the root directory of this project at LICENSE, COPYING and CONTRIBUTOR.
|
random_line_split
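Each record in this dump follows the column schema given in the header: file_name, prefix, suffix, middle, and fim_type, where prefix + middle + suffix reassembles the original source text. A minimal sketch of that recombination, with an entirely hypothetical miniature record (the struct and example values are illustrative, not taken from the dump):

```rust
// A fill-in-the-middle record mirroring the dump's columns
// (file_name, prefix, middle, suffix, fim_type).
struct FimRecord {
    file_name: &'static str,
    prefix: &'static str,
    middle: &'static str,
    suffix: &'static str,
    fim_type: &'static str,
}

impl FimRecord {
    /// Recombines the three text fields into the original source.
    fn reassemble(&self) -> String {
        format!("{}{}{}", self.prefix, self.middle, self.suffix)
    }
}

fn main() {
    let record = FimRecord {
        file_name: "example.rs",
        prefix: "fn add(a: i32, b: i32) -> i32 {\n    ",
        middle: "a + b",
        suffix: "\n}\n",
        fim_type: "identifier_body",
    };
    let source = record.reassemble();
    assert!(source.contains("a + b"));
    println!("{}: {}", record.file_name, record.fim_type);
}
```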
|
id_type.rs
|
// Copyright 2015 MaidSafe.net limited.
//
// This SAFE Network Software is licensed to you under (1) the MaidSafe.net Commercial License,
// version 1.0 or later, or (2) The General Public License (GPL), version 3, depending on which
// licence you accepted on initial access to the Software (the "Licences").
//
// By contributing code to the SAFE Network Software, or to this project generally, you agree to be
// bound by the terms of the MaidSafe Contributor Agreement, version 1.0. This, along with the
// Licenses can be found in the root directory of this project at LICENSE, COPYING and CONTRIBUTOR.
//
// Unless required by applicable law or agreed to in writing, the SAFE Network Software distributed
// under the GPL Licence is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied.
//
// Please review the Licences for the specific language governing permissions and limitations
// relating to use of the SAFE Network Software.
use cbor::CborTagEncode;
use rustc_serialize::{Decodable, Decoder, Encodable, Encoder};
use sodiumoxide::crypto;
use helper::*;
use super::revocation_id_type::*;
use std::fmt;
use routing::NameType;
/// IdType
///
/// # Examples
/// ```
/// use maidsafe_types::{IdType, RevocationIdType, MaidTypeTags};
/// // Creating new IdType
/// let maid: IdType = IdType::new(&RevocationIdType::new::<MaidTypeTags>());
///
/// ```
#[derive(Clone)]
pub struct IdType {
type_tag: u64,
public_keys: (crypto::sign::PublicKey, crypto::asymmetricbox::PublicKey),
secret_keys: (crypto::sign::SecretKey, crypto::asymmetricbox::SecretKey)
}
impl IdType {
/// Invoked to create an instance of IdType
pub fn new(revocation_id: &RevocationIdType) -> IdType {
let asym_keys = crypto::asymmetricbox::gen_keypair();
let signing_keys = crypto::sign::gen_keypair();
IdType {
type_tag: revocation_id.type_tags().1,
public_keys: (signing_keys.0, asym_keys.0),
secret_keys: (signing_keys.1, asym_keys.1)
}
}
/// Returns the name: the SHA-512 hash of the public keys and the type tag
pub fn name(&self) -> NameType
|
/// Returns the PublicKeys
pub fn public_keys(&self) -> &(crypto::sign::PublicKey, crypto::asymmetricbox::PublicKey) {
&self.public_keys
}
/// Returns the SecretKeys
pub fn secret_keys(&self) -> &(crypto::sign::SecretKey, crypto::asymmetricbox::SecretKey) {
&self.secret_keys
}
/// Signs the data with the SecretKey and returns the Signed data
pub fn sign(&self, data : &[u8]) -> Vec<u8> {
return crypto::sign::sign(&data, &self.secret_keys.0)
}
/// Encrypts and authenticates data. It returns a ciphertext and the Nonce.
pub fn seal(&self, data : &[u8], to : &crypto::asymmetricbox::PublicKey) -> (Vec<u8>, crypto::asymmetricbox::Nonce) {
let nonce = crypto::asymmetricbox::gen_nonce();
let sealed = crypto::asymmetricbox::seal(data, &nonce, &to, &self.secret_keys.1);
return (sealed, nonce);
}
/// Verifies and decrypts the data
pub fn open(
&self,
data : &[u8],
nonce : &crypto::asymmetricbox::Nonce,
from : &crypto::asymmetricbox::PublicKey) -> Result<Vec<u8>, ::CryptoError> {
return crypto::asymmetricbox::open(&data, &nonce, &from, &self.secret_keys.1).ok_or(::CryptoError::Unknown);
}
}
impl PartialEq for IdType {
fn eq(&self, other: &IdType) -> bool {
// Private keys are mathematically linked, so just check public keys
&self.type_tag == &other.type_tag &&
slice_equal(&self.public_keys.0.0, &other.public_keys.0.0) &&
slice_equal(&self.public_keys.1.0, &other.public_keys.1.0)
}
}
impl fmt::Debug for IdType {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "IdType {{ type_tag:{}, public_keys: ({:?}, {:?}) }}", self.type_tag, self.public_keys.0.0.to_vec(), self.public_keys.1.0.to_vec())
}
}
impl Encodable for IdType {
fn encode<E: Encoder>(&self, e: &mut E) -> Result<(), E::Error> {
let (crypto::sign::PublicKey(pub_sign_vec), crypto::asymmetricbox::PublicKey(pub_asym_vec)) = self.public_keys;
let (crypto::sign::SecretKey(sec_sign_vec), crypto::asymmetricbox::SecretKey(sec_asym_vec)) = self.secret_keys;
let type_vec = self.type_tag.to_string().into_bytes();
CborTagEncode::new(5483_001, &(
type_vec,
pub_sign_vec.as_ref(),
pub_asym_vec.as_ref(),
sec_sign_vec.as_ref(),
sec_asym_vec.as_ref())).encode(e)
}
}
impl Decodable for IdType {
fn decode<D: Decoder>(d: &mut D) -> Result<IdType, D::Error> {
try!(d.read_u64());
let (tag_type_vec, pub_sign_vec, pub_asym_vec, sec_sign_vec, sec_asym_vec) : (Vec<u8>, Vec<u8>, Vec<u8>, Vec<u8>, Vec<u8>) = try!(Decodable::decode(d));
let pub_sign_arr = convert_to_array!(pub_sign_vec, crypto::sign::PUBLICKEYBYTES);
let pub_asym_arr = convert_to_array!(pub_asym_vec, crypto::asymmetricbox::PUBLICKEYBYTES);
let sec_sign_arr = convert_to_array!(sec_sign_vec, crypto::sign::SECRETKEYBYTES);
let sec_asym_arr = convert_to_array!(sec_asym_vec, crypto::asymmetricbox::SECRETKEYBYTES);
if pub_sign_arr.is_none() || pub_asym_arr.is_none() || sec_sign_arr.is_none() || sec_asym_arr.is_none() {
return Err(d.error("Bad IdType size"));
}
let type_tag: u64 = match String::from_utf8(tag_type_vec) {
Ok(string) => {
match string.parse::<u64>() {
Ok(type_tag) => type_tag,
Err(_) => return Err(d.error("Bad Tag Type"))
}
},
Err(_) => return Err(d.error("Bad Tag Type"))
};
Ok(IdType{ type_tag: type_tag,
public_keys:(crypto::sign::PublicKey(pub_sign_arr.unwrap()), crypto::asymmetricbox::PublicKey(pub_asym_arr.unwrap())),
secret_keys: (crypto::sign::SecretKey(sec_sign_arr.unwrap()), crypto::asymmetricbox::SecretKey(sec_asym_arr.unwrap())) })
}
}
#[cfg(test)]
mod test {
extern crate rand;
use super::*;
use self::rand::Rng;
use cbor;
use super::super::RevocationIdType;
use sodiumoxide::crypto;
use Random;
use MaidTypeTags;
impl Random for IdType {
fn generate_random() -> IdType {
IdType::new(&RevocationIdType::new::<MaidTypeTags>())
}
}
#[test]
fn serialisation_maid() {
use helper::*;
let obj_before = IdType::generate_random();
let mut e = cbor::Encoder::from_memory();
e.encode(&[&obj_before]).unwrap();
let mut d = cbor::Decoder::from_bytes(e.as_bytes());
let obj_after: IdType = d.decode().next().unwrap().unwrap();
let &(crypto::sign::PublicKey(pub_sign_arr_before), crypto::asymmetricbox::PublicKey(pub_asym_arr_before)) = obj_before.public_keys();
let &(crypto::sign::PublicKey(pub_sign_arr_after), crypto::asymmetricbox::PublicKey(pub_asym_arr_after)) = obj_after.public_keys();
let &(crypto::sign::SecretKey(sec_sign_arr_before), crypto::asymmetricbox::SecretKey(sec_asym_arr_before)) = &obj_before.secret_keys;
let &(crypto::sign::SecretKey(sec_sign_arr_after), crypto::asymmetricbox::SecretKey(sec_asym_arr_after)) = &obj_after.secret_keys;
assert_eq!(pub_sign_arr_before, pub_sign_arr_after);
assert_eq!(pub_asym_arr_before, pub_asym_arr_after);
assert!(slice_equal(&sec_sign_arr_before, &sec_sign_arr_after));
assert_eq!(sec_asym_arr_before, sec_asym_arr_after);
}
#[test]
fn generation() {
let maid1 = IdType::generate_random();
let maid2 = IdType::generate_random();
let maid2_clone = maid2.clone();
assert_eq!(maid2, maid2_clone);
assert!(!(maid2 != maid2_clone));
assert!(maid1 != maid2);
let random_bytes = rand::thread_rng().gen_iter::<u8>().take(100).collect::<Vec<u8>>();
{
let sign1 = maid1.sign(&random_bytes);
let sign2 = maid2.sign(&random_bytes);
assert!(sign1 != sign2);
assert!(crypto::sign::verify(&sign1, &maid1.public_keys().0).is_some());
assert!(crypto::sign::verify(&sign2, &maid1.public_keys().0).is_none());
assert!(crypto::sign::verify(&sign2, &maid2.public_keys().0).is_some());
assert!(crypto::sign::verify(&sign2, &maid1.public_keys().0).is_none());
}
{
let maid3 = IdType::generate_random();
let encrypt1 = maid1.seal(&random_bytes, &maid3.public_keys().1);
let encrypt2 = maid2.seal(&random_bytes, &maid3.public_keys().1);
assert!(encrypt1.0 != encrypt2.0);
assert!(maid3.open(&encrypt1.0, &encrypt1.1, &maid1.public_keys().1).is_ok());
assert!(maid3.open(&encrypt1.0, &encrypt1.1, &maid2.public_keys().1).is_err());
assert!(maid3.open(&encrypt2.0, &encrypt2.1, &maid2.public_keys().1).is_ok());
assert!(maid3.open(&encrypt2.0, &encrypt2.1, &maid1.public_keys().1).is_err());
}
}
}
|
{
let combined_iter = (&self.public_keys.0).0.into_iter().chain((&self.public_keys.1).0.into_iter());
let mut combined: Vec<u8> = Vec::new();
for iter in combined_iter {
combined.push(*iter);
}
for i in self.type_tag.to_string().into_bytes().into_iter() {
combined.push(i);
}
NameType(crypto::hash::sha512::hash(&combined).0)
}
|
identifier_body
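The elided `name()` body above builds its hash input by pushing bytes one at a time through two loops. The same buffer can be built with `extend_from_slice`; a std-only sketch in which the sodiumoxide key types are replaced with plain byte slices and the final SHA-512 step is omitted (both are assumptions for illustration — only the buffer-building is shown):

```rust
// Std-only sketch of the byte concatenation done in IdType::name():
// signing public key bytes, then asymmetric public key bytes, then the
// decimal digits of the type tag.
fn name_input(sign_pk: &[u8], asym_pk: &[u8], type_tag: u64) -> Vec<u8> {
    let mut combined = Vec::with_capacity(sign_pk.len() + asym_pk.len() + 20);
    combined.extend_from_slice(sign_pk);                          // first push loop
    combined.extend_from_slice(asym_pk);                          // chained iterator
    combined.extend_from_slice(type_tag.to_string().as_bytes());  // tag digits
    combined
}

fn main() {
    let out = name_input(&[1, 2], &[3, 4], 42);
    assert_eq!(out, vec![1, 2, 3, 4, b'4', b'2']);
}
```

In the real method the returned buffer would then be fed to `crypto::hash::sha512::hash` to produce the `NameType`.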
|
schema2.rs
|
use types::{ColumnType};
pub trait Schema {
fn len(&self) -> usize;
fn name(&self, index: usize) -> &str;
fn ctype(&self, index: usize) -> ColumnType;
fn nullable(&self, index: usize) -> bool;
|
pub names: Vec<String>,
pub types: Vec<ColumnType>,
pub nullable: Vec<bool>,
}
impl Schema2 {
pub fn new() -> Self {
Schema2 {
names: Vec::new(),
types: Vec::new(),
nullable: Vec::new(),
}
}
pub fn add(&mut self,
name: &str,
ctype: ColumnType,
nullable: bool) {
self.names.push(name.to_string());
self.types.push(ctype);
self.nullable.push(nullable);
}
pub fn set_nullable(&mut self, index: usize, nullability: bool) {
self.nullable[index] = nullability;
}
}
impl Schema for Schema2 {
fn len(&self) -> usize {
self.names.len()
}
fn name(&self, index: usize) -> &str {
self.names[index].as_str()
}
fn ctype(&self, index: usize) -> ColumnType {
self.types[index]
}
fn nullable(&self, index: usize) -> bool {
self.nullable[index]
}
}
|
}
#[derive(Clone)]
pub struct Schema2 {
|
random_line_split
|
schema2.rs
|
use types::{ColumnType};
pub trait Schema {
fn len(&self) -> usize;
fn name(&self, index: usize) -> &str;
fn ctype(&self, index: usize) -> ColumnType;
fn nullable(&self, index: usize) -> bool;
}
#[derive(Clone)]
pub struct
|
{
pub names: Vec<String>,
pub types: Vec<ColumnType>,
pub nullable: Vec<bool>,
}
impl Schema2 {
pub fn new() -> Self {
Schema2 {
names: Vec::new(),
types: Vec::new(),
nullable: Vec::new(),
}
}
pub fn add(&mut self,
name: &str,
ctype: ColumnType,
nullable: bool) {
self.names.push(name.to_string());
self.types.push(ctype);
self.nullable.push(nullable);
}
pub fn set_nullable(&mut self, index: usize, nullability: bool) {
self.nullable[index] = nullability;
}
}
impl Schema for Schema2 {
fn len(&self) -> usize {
self.names.len()
}
fn name(&self, index: usize) -> &str {
self.names[index].as_str()
}
fn ctype(&self, index: usize) -> ColumnType {
self.types[index]
}
fn nullable(&self, index: usize) -> bool {
self.nullable[index]
}
}
|
Schema2
|
identifier_name
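The schema2.rs records above define a `Schema` trait plus a concrete `Schema2` backed by three parallel vectors. A self-contained usage sketch of that pattern; `ColumnType` is stubbed as a small enum here because the real `types` module is not part of this dump:

```rust
// Stub standing in for types::ColumnType, which is not in the dump.
#[derive(Clone, Copy, Debug, PartialEq)]
enum ColumnType { Int, Text }

// Parallel-vector column store, as in Schema2 above.
#[derive(Clone, Default)]
struct Schema2 {
    pub names: Vec<String>,
    pub types: Vec<ColumnType>,
    pub nullable: Vec<bool>,
}

impl Schema2 {
    fn add(&mut self, name: &str, ctype: ColumnType, nullable: bool) {
        self.names.push(name.to_string());
        self.types.push(ctype);
        self.nullable.push(nullable);
    }
    fn len(&self) -> usize {
        self.names.len()
    }
}

fn main() {
    let mut schema = Schema2::default();
    schema.add("id", ColumnType::Int, false);
    schema.add("note", ColumnType::Text, true);
    assert_eq!(schema.len(), 2);
    assert_eq!(schema.types[1], ColumnType::Text);
}
```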
|
schema2.rs
|
use types::{ColumnType};
pub trait Schema {
fn len(&self) -> usize;
fn name(&self, index: usize) -> &str;
fn ctype(&self, index: usize) -> ColumnType;
fn nullable(&self, index: usize) -> bool;
}
#[derive(Clone)]
pub struct Schema2 {
pub names: Vec<String>,
pub types: Vec<ColumnType>,
pub nullable: Vec<bool>,
}
impl Schema2 {
pub fn new() -> Self {
Schema2 {
names: Vec::new(),
types: Vec::new(),
nullable: Vec::new(),
}
}
pub fn add(&mut self,
name: &str,
ctype: ColumnType,
nullable: bool)
|
pub fn set_nullable(&mut self, index: usize, nullability: bool) {
self.nullable[index] = nullability;
}
}
impl Schema for Schema2 {
fn len(&self) -> usize {
self.names.len()
}
fn name(&self, index: usize) -> &str {
self.names[index].as_str()
}
fn ctype(&self, index: usize) -> ColumnType {
self.types[index]
}
fn nullable(&self, index: usize) -> bool {
self.nullable[index]
}
}
|
{
self.names.push(name.to_string());
self.types.push(ctype);
self.nullable.push(nullable);
}
|
identifier_body
|
lib.rs
|
// Copyright 2021 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod abort;
mod breakpoint;
mod bti;
mod common;
mod fp;
mod hvc;
mod ld64b;
mod ldc;
mod mcr;
mod msr;
mod pauth;
mod serror;
mod sve;
#[cfg(test)]
mod tests;
mod wf;
use abort::{decode_iss_data_abort, decode_iss_instruction_abort};
use bit_field::BitField;
use breakpoint::{
decode_iss_breakpoint, decode_iss_breakpoint_vector_catch, decode_iss_software_step,
decode_iss_watchpoint,
};
use bti::decode_iss_bti;
use fp::decode_iss_fp;
use hvc::decode_iss_hvc;
use ld64b::decode_iss_ld64b;
use ldc::decode_iss_ldc;
use mcr::{decode_iss_mcr, decode_iss_mcrr};
use msr::decode_iss_msr;
use pauth::decode_iss_pauth;
use serror::decode_iss_serror;
use std::fmt::{self, Debug, Display, Formatter};
use std::num::ParseIntError;
use sve::decode_iss_sve;
use thiserror::Error;
use wf::decode_iss_wf;
/// Information about a particular field.
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct FieldInfo {
/// The short name of the field, e.g. "ISS".
pub name: &'static str,
/// The long name of the field, e.g. "Instruction Specific Syndrome".
pub long_name: Option<&'static str>,
/// The index of the lowest bit of the field.
pub start: usize,
/// The number of bits in the field.
pub width: usize,
/// The value of the field.
pub value: u64,
/// A description explaining the field value, if available.
pub description: Option<String>,
/// Any sub-fields.
pub subfields: Vec<FieldInfo>,
}
impl FieldInfo {
fn get(
register: u64,
name: &'static str,
long_name: Option<&'static str>,
start: usize,
end: usize,
) -> Self {
let value = register.get_bits(start..end);
Self {
name,
long_name,
start,
width: end - start,
value,
description: None,
subfields: vec![],
}
}
fn get_bit(
register: u64,
name: &'static str,
long_name: Option<&'static str>,
bit: usize,
) -> Self {
Self::get(register, name, long_name, bit, bit + 1)
}
fn with_description(self, description: String) -> Self {
Self {
description: Some(description),
..self
}
}
fn as_bit(&self) -> bool {
assert!(self.width == 1);
self.value == 1
}
/// Assuming this field has a width of exactly 1, describe it with the given function.
///
/// Panics if `self.width != 1`.
fn describe_bit<F>(self, describer: F) -> Self
where
F: FnOnce(bool) -> &'static str,
{
let bit = self.as_bit();
let description = describer(bit).to_string();
self.with_description(description)
}
fn describe<F>(self, describer: F) -> Result<Self, DecodeError>
where
F: FnOnce(u64) -> Result<&'static str, DecodeError>,
{
let description = describer(self.value)?.to_string();
Ok(self.with_description(description))
}
fn check_res0(self) -> Result<Self, DecodeError> {
if self.value != 0 {
Err(DecodeError::InvalidRes0 { res0: self.value })
} else {
Ok(self)
}
}
/// Returns the value as a hexadecimal string, or "true" or "false" if it is a single bit.
pub fn value_string(&self) -> String {
if self.width == 1 {
if self.value == 1 { "true" } else { "false" }.to_string()
} else {
format!("{:#01$x}", self.value, (self.width + 3) / 4 + 2,)
}
}
/// Returns the value as a binary string.
pub fn value_binary_string(&self) -> String {
format!("{:#01$b}", self.value, self.width + 2)
}
}
impl Display for FieldInfo {
fn fmt(&self, f: &mut Formatter) -> fmt::Result {
if self.width == 1 {
write!(
f,
"{}: {}",
self.name,
if self.value == 1 { "true" } else { "false" }
)
} else {
write!(
f,
"{}: {} {}",
self.name,
self.value_string(),
self.value_binary_string(),
)
}
}
}
/// An error decoding an ESR value.
#[derive(Debug, Error)]
pub enum DecodeError {
/// A RES0 field was not 0.
#[error("Invalid ESR, res0 is {res0:#x}")]
InvalidRes0 { res0: u64 },
/// The EC field had an invalid value.
#[error("Invalid EC {ec:#x}")]
InvalidEc { ec: u64 },
/// The DFSC or IFSC field had an invalid value.
#[error("Invalid DFSC or IFSC {fsc:#x}")]
InvalidFsc { fsc: u64 },
/// The SET field had an invalid value.
#[error("Invalid SET {set:#x}")]
InvalidSet { set: u64 },
/// The AET field had an invalid value.
#[error("Invalid AET {aet:#x}")]
InvalidAet { aet: u64 },
/// The AM field had an invalid value.
#[error("Invalid AM {am:#x}")]
InvalidAm { am: u64 },
/// The ISS field has an invalid value for a trapped LD64B or ST64B* exception.
#[error("Invalid ISS {iss:#x} for trapped LD64B or ST64B*")]
InvalidLd64bIss { iss: u64 },
}
fn decode_iss_res0(iss: u64) -> Result<Vec<FieldInfo>, DecodeError> {
let res0 = FieldInfo::get(iss, "RES0", Some("Reserved"), 0, 25)
.check_res0()?
.with_description("ISS is RES0".to_string());
Ok(vec![res0])
}
/// Decodes the given Exception Syndrome Register value, or returns an error if it is not valid.
pub fn decode(esr: u64) -> Result<Vec<FieldInfo>, DecodeError> {
let res0 = FieldInfo::get(esr, "RES0", Some("Reserved"), 37, 64).check_res0()?;
let iss2 = FieldInfo::get(esr, "ISS2", None, 32, 37);
let ec = FieldInfo::get(esr, "EC", Some("Exception Class"), 26, 32);
let il =
FieldInfo::get_bit(esr, "IL", Some("Instruction Length"), 25).describe_bit(describe_il);
let iss = FieldInfo::get(esr, "ISS", Some("Instruction Specific Syndrome"), 0, 25);
let (class, iss_subfields, iss_description) = match ec.value {
0b000000 => ("Unknown reason", decode_iss_res0(iss.value)?, None),
0b000001 => (
"Wrapped WF* instruction execution",
decode_iss_wf(iss.value)?,
None,
),
0b000011 => (
"Trapped MCR or MRC access with coproc=0b1111",
decode_iss_mcr(iss.value)?,
None,
),
0b000100 => (
"Trapped MCRR or MRRC access with coproc=0b1111",
decode_iss_mcrr(iss.value)?,
None,
),
0b000101 => (
"Trapped MCR or MRC access with coproc=0b1110",
decode_iss_mcr(iss.value)?,
None,
),
0b000110 => (
"Trapped LDC or STC access",
decode_iss_ldc(iss.value)?,
None,
),
0b000111 => (
"Trapped access to SVE, Advanced SIMD or floating point",
decode_iss_sve(iss.value)?,
None,
),
0b001010 => (
"Trapped execution of an LD64B, ST64B, ST64BV, or ST64BV0 instruction",
decode_iss_ld64b(iss.value)?,
None,
),
0b001100 => (
"Trapped MRRC access with (coproc==0b1110)",
decode_iss_mcrr(iss.value)?,
None,
),
0b001101 => ("Branch Target Exception", decode_iss_bti(iss.value)?, None),
0b001110 => ("Illegal Execution state", decode_iss_res0(iss.value)?, None),
0b010001 => (
"SVC instruction execution in AArch32 state",
decode_iss_hvc(iss.value)?,
None,
),
0b010101 => (
"SVC instruction execution in AArch64 state",
decode_iss_hvc(iss.value)?,
None,
),
0b011000 => {
let (subfields, description) = decode_iss_msr(iss.value)?;
(
"Trapped MSR, MRS or System instruction execution in AArch64 state",
subfields,
description,
)
}
0b011001 => (
"Access to SVE functionality trapped as a result of CPACR_EL1.ZEN, CPTR_EL2.ZEN, \
CPTR_EL2.TZ, or CPTR_EL3.EZ",
decode_iss_res0(iss.value)?,
None,
),
0b011100 => (
"Exception from a Pointer Authentication instruction authentication failure",
decode_iss_pauth(iss.value)?,
None,
),
0b100000 => (
"Instruction Abort from a lower Exception level",
decode_iss_instruction_abort(iss.value)?,
None,
),
0b100001 => (
"Instruction Abort taken without a change in Exception level",
decode_iss_instruction_abort(iss.value)?,
None,
),
0b100010 => (
"PC alignment fault exception",
decode_iss_res0(iss.value)?,
None,
),
0b100100 => (
"Data Abort from a lower Exception level",
decode_iss_data_abort(iss.value)?,
None,
),
0b100101 => (
"Data Abort taken without a change in Exception level",
decode_iss_data_abort(iss.value)?,
None,
),
0b100110 => (
"SP alignment fault exception",
decode_iss_res0(iss.value)?,
None,
),
0b101000 => (
"Trapped floating-point exception taken from AArch32 state",
decode_iss_fp(iss.value)?,
None,
),
0b101100 => (
"Trapped floating-point exception taken from AArch64 state",
decode_iss_fp(iss.value)?,
None,
),
0b101111 => ("SError interrupt", decode_iss_serror(iss.value)?, None),
0b110000 => (
"Breakpoint exception from a lower Exception level",
decode_iss_breakpoint_vector_catch(iss.value)?,
None,
),
0b110001 => (
"Breakpoint exception taken without a change in Exception level",
decode_iss_breakpoint_vector_catch(iss.value)?,
None,
),
0b110010 => (
"Software Step exception from a lower Exception level",
decode_iss_software_step(iss.value)?,
None,
),
0b110011 => (
"Software Step exception taken without a change in Exception level",
decode_iss_software_step(iss.value)?,
None,
),
0b110100 => (
"Watchpoint exception from a lower Exception level",
decode_iss_watchpoint(iss.value)?,
None,
),
0b110101 => (
"Watchpoint exception taken without a change in Exception level",
decode_iss_watchpoint(iss.value)?,
None,
),
0b111000 => (
"BKPT instruction execution in AArch32 state",
decode_iss_breakpoint(iss.value)?,
None,
),
0b111100 => (
"BRK instruction execution in AArch64 state",
decode_iss_breakpoint(iss.value)?,
None,
),
_ => return Err(DecodeError::InvalidEc { ec: ec.value }),
};
let iss = FieldInfo {
description: iss_description,
subfields: iss_subfields,
..iss
};
let ec = ec.with_description(class.to_string());
Ok(vec![res0, iss2, ec, il, iss])
}
fn
|
(il: bool) -> &'static str {
if il {
"32-bit instruction trapped"
} else {
"16-bit instruction trapped"
}
}
/// Parses a decimal or hexadecimal number from a string.
///
/// If the string starts with `"0x"` then it will be parsed as hexadecimal, otherwise it will be
/// assumed to be decimal.
pub fn parse_number(s: &str) -> Result<u64, ParseIntError> {
if let Some(hex) = s.strip_prefix("0x") {
u64::from_str_radix(hex, 16)
} else {
s.parse()
}
}
|
describe_il
|
identifier_name
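The `parse_number` helper at the end of the lib.rs record is self-contained std Rust: a `"0x"` prefix selects base-16 parsing, anything else falls through to decimal. Reproduced here with its one import so it can be run and probed directly:

```rust
use std::num::ParseIntError;

/// Parses a decimal or hexadecimal number from a string, as in lib.rs above:
/// "0x"-prefixed input is parsed as hex, everything else as decimal.
pub fn parse_number(s: &str) -> Result<u64, ParseIntError> {
    if let Some(hex) = s.strip_prefix("0x") {
        u64::from_str_radix(hex, 16)
    } else {
        s.parse()
    }
}

fn main() {
    assert_eq!(parse_number("255"), Ok(255));
    assert_eq!(parse_number("0xff"), Ok(255));
    assert!(parse_number("0xzz").is_err());
}
```

Note that a bare hex string without the `"0x"` prefix (e.g. `"ff"`) is rejected rather than guessed at, since it fails decimal parsing.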
|
lib.rs
|
// Copyright 2021 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod abort;
mod breakpoint;
mod bti;
mod common;
mod fp;
mod hvc;
mod ld64b;
mod ldc;
mod mcr;
mod msr;
mod pauth;
mod serror;
mod sve;
#[cfg(test)]
mod tests;
mod wf;
use abort::{decode_iss_data_abort, decode_iss_instruction_abort};
use bit_field::BitField;
use breakpoint::{
decode_iss_breakpoint, decode_iss_breakpoint_vector_catch, decode_iss_software_step,
decode_iss_watchpoint,
};
use bti::decode_iss_bti;
use fp::decode_iss_fp;
use hvc::decode_iss_hvc;
use ld64b::decode_iss_ld64b;
use ldc::decode_iss_ldc;
use mcr::{decode_iss_mcr, decode_iss_mcrr};
use msr::decode_iss_msr;
use pauth::decode_iss_pauth;
use serror::decode_iss_serror;
use std::fmt::{self, Debug, Display, Formatter};
use std::num::ParseIntError;
use sve::decode_iss_sve;
use thiserror::Error;
use wf::decode_iss_wf;
/// Information about a particular field.
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct FieldInfo {
/// The short name of the field, e.g. "ISS".
pub name: &'static str,
/// The long name of the field, e.g. "Instruction Specific Syndrome".
pub long_name: Option<&'static str>,
/// The index of the lowest bit of the field.
pub start: usize,
/// The number of bits in the field.
pub width: usize,
/// The value of the field.
pub value: u64,
/// A description explaining the field value, if available.
pub description: Option<String>,
/// Any sub-fields.
pub subfields: Vec<FieldInfo>,
}
impl FieldInfo {
fn get(
register: u64,
name: &'static str,
long_name: Option<&'static str>,
start: usize,
end: usize,
) -> Self {
let value = register.get_bits(start..end);
Self {
name,
long_name,
start,
width: end - start,
value,
description: None,
subfields: vec![],
}
}
fn get_bit(
register: u64,
name: &'static str,
long_name: Option<&'static str>,
bit: usize,
) -> Self {
Self::get(register, name, long_name, bit, bit + 1)
}
fn with_description(self, description: String) -> Self {
Self {
description: Some(description),
..self
}
}
fn as_bit(&self) -> bool {
assert!(self.width == 1);
self.value == 1
}
/// Assuming this field has a width of exactly 1, describe it with the given function.
///
/// Panics if `self.width != 1`.
fn describe_bit<F>(self, describer: F) -> Self
where
F: FnOnce(bool) -> &'static str,
{
let bit = self.as_bit();
let description = describer(bit).to_string();
self.with_description(description)
}
fn describe<F>(self, describer: F) -> Result<Self, DecodeError>
where
F: FnOnce(u64) -> Result<&'static str, DecodeError>,
|
fn check_res0(self) -> Result<Self, DecodeError> {
if self.value != 0 {
Err(DecodeError::InvalidRes0 { res0: self.value })
} else {
Ok(self)
}
}
/// Returns the value as a hexadecimal string, or "true" or "false" if it is a single bit.
pub fn value_string(&self) -> String {
if self.width == 1 {
if self.value == 1 { "true" } else { "false" }.to_string()
} else {
format!("{:#01$x}", self.value, (self.width + 3) / 4 + 2,)
}
}
/// Returns the value as a binary string.
pub fn value_binary_string(&self) -> String {
format!("{:#01$b}", self.value, self.width + 2)
}
}
impl Display for FieldInfo {
fn fmt(&self, f: &mut Formatter) -> fmt::Result {
if self.width == 1 {
write!(
f,
"{}: {}",
self.name,
if self.value == 1 { "true" } else { "false" }
)
} else {
write!(
f,
"{}: {} {}",
self.name,
self.value_string(),
self.value_binary_string(),
)
}
}
}
/// An error decoding an ESR value.
#[derive(Debug, Error)]
pub enum DecodeError {
/// A RES0 field was not 0.
#[error("Invalid ESR, res0 is {res0:#x}")]
InvalidRes0 { res0: u64 },
/// The EC field had an invalid value.
#[error("Invalid EC {ec:#x}")]
InvalidEc { ec: u64 },
/// The DFSC or IFSC field had an invalid value.
#[error("Invalid DFSC or IFSC {fsc:#x}")]
InvalidFsc { fsc: u64 },
/// The SET field had an invalid value.
#[error("Invalid SET {set:#x}")]
InvalidSet { set: u64 },
/// The AET field had an invalid value.
#[error("Invalid AET {aet:#x}")]
InvalidAet { aet: u64 },
/// The AM field had an invalid value.
#[error("Invalid AM {am:#x}")]
InvalidAm { am: u64 },
/// The ISS field has an invalid value for a trapped LD64B or ST64B* exception.
#[error("Invalid ISS {iss:#x} for trapped LD64B or ST64B*")]
InvalidLd64bIss { iss: u64 },
}
fn decode_iss_res0(iss: u64) -> Result<Vec<FieldInfo>, DecodeError> {
let res0 = FieldInfo::get(iss, "RES0", Some("Reserved"), 0, 25)
.check_res0()?
.with_description("ISS is RES0".to_string());
Ok(vec![res0])
}
/// Decodes the given Exception Syndrome Register value, or returns an error if it is not valid.
pub fn decode(esr: u64) -> Result<Vec<FieldInfo>, DecodeError> {
let res0 = FieldInfo::get(esr, "RES0", Some("Reserved"), 37, 64).check_res0()?;
let iss2 = FieldInfo::get(esr, "ISS2", None, 32, 37);
let ec = FieldInfo::get(esr, "EC", Some("Exception Class"), 26, 32);
let il =
FieldInfo::get_bit(esr, "IL", Some("Instruction Length"), 25).describe_bit(describe_il);
let iss = FieldInfo::get(esr, "ISS", Some("Instruction Specific Syndrome"), 0, 25);
let (class, iss_subfields, iss_description) = match ec.value {
0b000000 => ("Unknown reason", decode_iss_res0(iss.value)?, None),
0b000001 => (
"Wrapped WF* instruction execution",
decode_iss_wf(iss.value)?,
None,
),
0b000011 => (
"Trapped MCR or MRC access with coproc=0b1111",
decode_iss_mcr(iss.value)?,
None,
),
0b000100 => (
"Trapped MCRR or MRRC access with coproc=0b1111",
decode_iss_mcrr(iss.value)?,
None,
),
0b000101 => (
"Trapped MCR or MRC access with coproc=0b1110",
decode_iss_mcr(iss.value)?,
None,
),
0b000110 => (
"Trapped LDC or STC access",
decode_iss_ldc(iss.value)?,
None,
),
0b000111 => (
"Trapped access to SVE, Advanced SIMD or floating point",
decode_iss_sve(iss.value)?,
None,
),
0b001010 => (
"Trapped execution of an LD64B, ST64B, ST64BV, or ST64BV0 instruction",
decode_iss_ld64b(iss.value)?,
None,
),
0b001100 => (
"Trapped MRRC access with (coproc==0b1110)",
decode_iss_mcrr(iss.value)?,
None,
),
0b001101 => ("Branch Target Exception", decode_iss_bti(iss.value)?, None),
0b001110 => ("Illegal Execution state", decode_iss_res0(iss.value)?, None),
0b010001 => (
"SVC instruction execution in AArch32 state",
decode_iss_hvc(iss.value)?,
None,
),
0b010101 => (
"SVC instruction execution in AArch64 state",
decode_iss_hvc(iss.value)?,
None,
),
0b011000 => {
let (subfields, description) = decode_iss_msr(iss.value)?;
(
"Trapped MSR, MRS or System instruction execution in AArch64 state",
subfields,
description,
)
}
0b011001 => (
"Access to SVE functionality trapped as a result of CPACR_EL1.ZEN, CPTR_EL2.ZEN, \
CPTR_EL2.TZ, or CPTR_EL3.EZ",
decode_iss_res0(iss.value)?,
None,
),
0b011100 => (
"Exception from a Pointer Authentication instruction authentication failure",
decode_iss_pauth(iss.value)?,
None,
),
0b100000 => (
"Instruction Abort from a lower Exception level",
decode_iss_instruction_abort(iss.value)?,
None,
),
0b100001 => (
"Instruction Abort taken without a change in Exception level",
decode_iss_instruction_abort(iss.value)?,
None,
),
0b100010 => (
"PC alignment fault exception",
decode_iss_res0(iss.value)?,
None,
),
0b100100 => (
"Data Abort from a lower Exception level",
decode_iss_data_abort(iss.value)?,
None,
),
0b100101 => (
"Data Abort taken without a change in Exception level",
decode_iss_data_abort(iss.value)?,
None,
),
0b100110 => (
"SP alignment fault exception",
decode_iss_res0(iss.value)?,
None,
),
0b101000 => (
"Trapped floating-point exception taken from AArch32 state",
decode_iss_fp(iss.value)?,
None,
),
0b101100 => (
"Trapped floating-point exception taken from AArch64 state",
decode_iss_fp(iss.value)?,
None,
),
0b101111 => ("SError interrupt", decode_iss_serror(iss.value)?, None),
0b110000 => (
"Breakpoint exception from a lower Exception level",
decode_iss_breakpoint_vector_catch(iss.value)?,
None,
),
0b110001 => (
"Breakpoint exception taken without a change in Exception level",
decode_iss_breakpoint_vector_catch(iss.value)?,
None,
),
0b110010 => (
"Software Step exception from a lower Exception level",
decode_iss_software_step(iss.value)?,
None,
),
0b110011 => (
"Software Step exception taken without a change in Exception level",
decode_iss_software_step(iss.value)?,
None,
),
0b110100 => (
"Watchpoint exception from a lower Exception level",
decode_iss_watchpoint(iss.value)?,
None,
),
0b110101 => (
"Watchpoint exception taken without a change in Exception level",
decode_iss_watchpoint(iss.value)?,
None,
),
0b111000 => (
"BKPT instruction execution in AArch32 state",
decode_iss_breakpoint(iss.value)?,
None,
),
0b111100 => (
"BRK instruction execution in AArch64 state",
decode_iss_breakpoint(iss.value)?,
None,
),
_ => return Err(DecodeError::InvalidEc { ec: ec.value }),
};
let iss = FieldInfo {
description: iss_description,
subfields: iss_subfields,
..iss
};
let ec = ec.with_description(class.to_string());
Ok(vec![res0, iss2, ec, il, iss])
}
fn describe_il(il: bool) -> &'static str {
if il {
"32-bit instruction trapped"
} else {
"16-bit instruction trapped"
}
}
/// Parses a decimal or hexadecimal number from a string.
///
/// If the string starts with `"0x"` then it will be parsed as hexadecimal, otherwise it will be
/// assumed to be decimal.
pub fn parse_number(s: &str) -> Result<u64, ParseIntError> {
if let Some(hex) = s.strip_prefix("0x") {
u64::from_str_radix(hex, 16)
} else {
s.parse()
}
}
|
{
let description = describer(self.value)?.to_string();
Ok(self.with_description(description))
}
|
identifier_body
|
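The `parse_number` helper shown above is fully self-contained; a minimal standalone harness (reproducing the function verbatim) demonstrates the decimal/hex dispatch:

```rust
use std::num::ParseIntError;

/// Parses a decimal or hexadecimal number from a string, as in the crate above:
/// a `"0x"` prefix selects base 16, otherwise the string is parsed as base 10.
fn parse_number(s: &str) -> Result<u64, ParseIntError> {
    if let Some(hex) = s.strip_prefix("0x") {
        u64::from_str_radix(hex, 16)
    } else {
        s.parse()
    }
}

fn main() {
    assert_eq!(parse_number("37"), Ok(37));
    assert_eq!(parse_number("0x25"), Ok(37)); // 0x25 == 37
    assert!(parse_number("not a number").is_err());
}
```

Note that a bare `"0b"` or octal prefix is not recognised; anything without `"0x"` goes through `str::parse`, so `"0b101"` is an error rather than binary.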
lib.rs
|
// Copyright 2021 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod abort;
mod breakpoint;
mod bti;
mod common;
mod fp;
mod hvc;
mod ld64b;
mod ldc;
mod mcr;
mod msr;
mod pauth;
mod serror;
mod sve;
#[cfg(test)]
mod tests;
mod wf;
use abort::{decode_iss_data_abort, decode_iss_instruction_abort};
use bit_field::BitField;
use breakpoint::{
decode_iss_breakpoint, decode_iss_breakpoint_vector_catch, decode_iss_software_step,
decode_iss_watchpoint,
};
use bti::decode_iss_bti;
use fp::decode_iss_fp;
use hvc::decode_iss_hvc;
use ld64b::decode_iss_ld64b;
use ldc::decode_iss_ldc;
use mcr::{decode_iss_mcr, decode_iss_mcrr};
use msr::decode_iss_msr;
use pauth::decode_iss_pauth;
use serror::decode_iss_serror;
use std::fmt::{self, Debug, Display, Formatter};
use std::num::ParseIntError;
use sve::decode_iss_sve;
use thiserror::Error;
use wf::decode_iss_wf;
/// Information about a particular field.
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct FieldInfo {
/// The short name of the field, e.g. "ISS".
pub name: &'static str,
/// The long name of the field, e.g. "Instruction Specific Syndrome".
pub long_name: Option<&'static str>,
/// The index of the lowest bit of the field.
pub start: usize,
/// The number of bits in the field.
pub width: usize,
/// The value of the field.
pub value: u64,
/// A description explaining the field value, if available.
pub description: Option<String>,
/// Any sub-fields.
pub subfields: Vec<FieldInfo>,
}
impl FieldInfo {
fn get(
register: u64,
name: &'static str,
long_name: Option<&'static str>,
start: usize,
end: usize,
) -> Self {
let value = register.get_bits(start..end);
Self {
name,
long_name,
start,
width: end - start,
value,
description: None,
subfields: vec![],
}
}
fn get_bit(
register: u64,
name: &'static str,
long_name: Option<&'static str>,
bit: usize,
) -> Self {
Self::get(register, name, long_name, bit, bit + 1)
}
fn with_description(self, description: String) -> Self {
Self {
description: Some(description),
..self
}
}
fn as_bit(&self) -> bool {
assert!(self.width == 1);
self.value == 1
}
/// Assuming this field has a width of exactly 1, describe it with the given function.
///
    /// Panics if `self.width != 1`.
fn describe_bit<F>(self, describer: F) -> Self
where
F: FnOnce(bool) -> &'static str,
{
let bit = self.as_bit();
let description = describer(bit).to_string();
self.with_description(description)
}
fn describe<F>(self, describer: F) -> Result<Self, DecodeError>
where
F: FnOnce(u64) -> Result<&'static str, DecodeError>,
{
let description = describer(self.value)?.to_string();
Ok(self.with_description(description))
}
fn check_res0(self) -> Result<Self, DecodeError> {
        if self.value != 0 {
Err(DecodeError::InvalidRes0 { res0: self.value })
} else {
Ok(self)
}
}
/// Returns the value as a hexadecimal string, or "true" or "false" if it is a single bit.
pub fn value_string(&self) -> String {
if self.width == 1 {
if self.value == 1 { "true" } else { "false" }.to_string()
} else {
format!("{:#01$x}", self.value, (self.width + 3) / 4 + 2,)
}
}
    /// Returns the value as a binary string.
pub fn value_binary_string(&self) -> String {
format!("{:#01$b}", self.value, self.width + 2)
}
}
impl Display for FieldInfo {
fn fmt(&self, f: &mut Formatter) -> fmt::Result {
if self.width == 1 {
write!(
f,
"{}: {}",
self.name,
if self.value == 1 { "true" } else { "false" }
)
} else {
write!(
f,
"{}: {} {}",
self.name,
self.value_string(),
self.value_binary_string(),
)
}
}
}
/// An error decoding an ESR value.
#[derive(Debug, Error)]
pub enum DecodeError {
/// A RES0 field was not 0.
#[error("Invalid ESR, res0 is {res0:#x}")]
InvalidRes0 { res0: u64 },
/// The EC field had an invalid value.
#[error("Invalid EC {ec:#x}")]
InvalidEc { ec: u64 },
/// The DFSC or IFSC field had an invalid value.
#[error("Invalid DFSC or IFSC {fsc:#x}")]
InvalidFsc { fsc: u64 },
/// The SET field had an invalid value.
#[error("Invalid SET {set:#x}")]
InvalidSet { set: u64 },
/// The AET field had an invalid value.
#[error("Invalid AET {aet:#x}")]
InvalidAet { aet: u64 },
/// The AM field had an invalid value.
#[error("Invalid AM {am:#x}")]
InvalidAm { am: u64 },
/// The ISS field has an invalid value for a trapped LD64B or ST64B* exception.
#[error("Invalid ISS {iss:#x} for trapped LD64B or ST64B*")]
InvalidLd64bIss { iss: u64 },
}
fn decode_iss_res0(iss: u64) -> Result<Vec<FieldInfo>, DecodeError> {
let res0 = FieldInfo::get(iss, "RES0", Some("Reserved"), 0, 25)
.check_res0()?
.with_description("ISS is RES0".to_string());
Ok(vec![res0])
}
/// Decodes the given Exception Syndrome Register value, or returns an error if it is not valid.
pub fn decode(esr: u64) -> Result<Vec<FieldInfo>, DecodeError> {
let res0 = FieldInfo::get(esr, "RES0", Some("Reserved"), 37, 64).check_res0()?;
let iss2 = FieldInfo::get(esr, "ISS2", None, 32, 37);
let ec = FieldInfo::get(esr, "EC", Some("Exception Class"), 26, 32);
let il =
FieldInfo::get_bit(esr, "IL", Some("Instruction Length"), 25).describe_bit(describe_il);
let iss = FieldInfo::get(esr, "ISS", Some("Instruction Specific Syndrome"), 0, 25);
let (class, iss_subfields, iss_description) = match ec.value {
0b000000 => ("Unknown reason", decode_iss_res0(iss.value)?, None),
0b000001 => (
"Wrapped WF* instruction execution",
decode_iss_wf(iss.value)?,
None,
),
0b000011 => (
"Trapped MCR or MRC access with coproc=0b1111",
decode_iss_mcr(iss.value)?,
None,
),
0b000100 => (
"Trapped MCRR or MRRC access with coproc=0b1111",
decode_iss_mcrr(iss.value)?,
None,
),
0b000101 => (
"Trapped MCR or MRC access with coproc=0b1110",
decode_iss_mcr(iss.value)?,
None,
),
0b000110 => (
"Trapped LDC or STC access",
decode_iss_ldc(iss.value)?,
None,
),
0b000111 => (
"Trapped access to SVE, Advanced SIMD or floating point",
decode_iss_sve(iss.value)?,
None,
),
0b001010 => (
"Trapped execution of an LD64B, ST64B, ST64BV, or ST64BV0 instruction",
decode_iss_ld64b(iss.value)?,
None,
),
0b001100 => (
"Trapped MRRC access with (coproc==0b1110)",
decode_iss_mcrr(iss.value)?,
None,
),
0b001101 => ("Branch Target Exception", decode_iss_bti(iss.value)?, None),
0b001110 => ("Illegal Execution state", decode_iss_res0(iss.value)?, None),
0b010001 => (
"SVC instruction execution in AArch32 state",
decode_iss_hvc(iss.value)?,
None,
),
0b010101 => (
"SVC instruction execution in AArch64 state",
decode_iss_hvc(iss.value)?,
None,
),
0b011000 => {
let (subfields, description) = decode_iss_msr(iss.value)?;
(
"Trapped MSR, MRS or System instruction execution in AArch64 state",
subfields,
description,
)
}
0b011001 => (
"Access to SVE functionality trapped as a result of CPACR_EL1.ZEN, CPTR_EL2.ZEN, \
CPTR_EL2.TZ, or CPTR_EL3.EZ",
decode_iss_res0(iss.value)?,
None,
),
0b011100 => (
"Exception from a Pointer Authentication instruction authentication failure",
decode_iss_pauth(iss.value)?,
None,
),
0b100000 => (
"Instruction Abort from a lower Exception level",
decode_iss_instruction_abort(iss.value)?,
None,
),
0b100001 => (
"Instruction Abort taken without a change in Exception level",
decode_iss_instruction_abort(iss.value)?,
None,
),
0b100010 => (
"PC alignment fault exception",
decode_iss_res0(iss.value)?,
None,
),
0b100100 => (
"Data Abort from a lower Exception level",
decode_iss_data_abort(iss.value)?,
None,
),
0b100101 => (
"Data Abort taken without a change in Exception level",
decode_iss_data_abort(iss.value)?,
None,
),
0b100110 => (
"SP alignment fault exception",
decode_iss_res0(iss.value)?,
None,
),
0b101000 => (
"Trapped floating-point exception taken from AArch32 state",
decode_iss_fp(iss.value)?,
None,
),
0b101100 => (
"Trapped floating-point exception taken from AArch64 state",
decode_iss_fp(iss.value)?,
None,
),
0b101111 => ("SError interrupt", decode_iss_serror(iss.value)?, None),
0b110000 => (
"Breakpoint exception from a lower Exception level",
decode_iss_breakpoint_vector_catch(iss.value)?,
None,
),
0b110001 => (
"Breakpoint exception taken without a change in Exception level",
decode_iss_breakpoint_vector_catch(iss.value)?,
None,
),
0b110010 => (
"Software Step exception from a lower Exception level",
decode_iss_software_step(iss.value)?,
None,
),
0b110011 => (
"Software Step exception taken without a change in Exception level",
decode_iss_software_step(iss.value)?,
None,
),
0b110100 => (
"Watchpoint exception from a lower Exception level",
decode_iss_watchpoint(iss.value)?,
None,
),
0b110101 => (
"Watchpoint exception taken without a change in Exception level",
decode_iss_watchpoint(iss.value)?,
None,
),
0b111000 => (
"BKPT instruction execution in AArch32 state",
decode_iss_breakpoint(iss.value)?,
|
),
0b111100 => (
"BRK instruction execution in AArch64 state",
decode_iss_breakpoint(iss.value)?,
None,
),
_ => return Err(DecodeError::InvalidEc { ec: ec.value }),
};
let iss = FieldInfo {
description: iss_description,
subfields: iss_subfields,
..iss
};
let ec = ec.with_description(class.to_string());
Ok(vec![res0, iss2, ec, il, iss])
}
fn describe_il(il: bool) -> &'static str {
if il {
"32-bit instruction trapped"
} else {
"16-bit instruction trapped"
}
}
/// Parses a decimal or hexadecimal number from a string.
///
/// If the string starts with `"0x"` then it will be parsed as hexadecimal, otherwise it will be
/// assumed to be decimal.
pub fn parse_number(s: &str) -> Result<u64, ParseIntError> {
if let Some(hex) = s.strip_prefix("0x") {
u64::from_str_radix(hex, 16)
} else {
s.parse()
}
}
|
None,
|
random_line_split
|
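`FieldInfo::get` above delegates bit extraction to the `bit_field` crate's `get_bits`. The same extraction can be sketched with plain shifts and masks; the ESR value in `main` is an illustrative assumption (a plausible Data Abort encoding), not taken from the source:

```rust
/// Extract bits [start..end) of `register`, low bit first, mirroring what
/// `FieldInfo::get` does via the `bit_field` crate.
fn get_bits(register: u64, start: usize, end: usize) -> u64 {
    let width = end - start;
    // Guard the width == 64 case: `1u64 << 64` would overflow-panic in debug builds.
    let mask = if width >= 64 { u64::MAX } else { (1u64 << width) - 1 };
    (register >> start) & mask
}

fn main() {
    // EC is bits [26..32) of the ESR, as in `decode` above.
    let esr: u64 = 0x9600_0045; // hypothetical example value
    assert_eq!(get_bits(esr, 26, 32), 0b100101); // "Data Abort taken without a change in Exception level"
    // A full-width extraction is the identity.
    assert_eq!(get_bits(u64::MAX, 0, 64), u64::MAX);
}
```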
mod.rs
|
// Copyright 2017 The Xyrosource Team.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
extern crate slog;
extern crate slog_term;
use slog::{DrainExt, Logger};
mod version {
// include the generated version file, containing the
// VERSION symbol with the version as defined in the
// cargo metadata file.
include!(concat!(env!("OUT_DIR"), "/version.rs"));
}
use self::version::VERSION;
pub fn
|
() -> slog::Logger {
let drain = slog_term::streamer().compact().build().fuse();
Logger::root(drain, o!("version" => VERSION))
}
|
setup
|
identifier_name
|
mod.rs
|
// Copyright 2017 The Xyrosource Team.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
extern crate slog;
extern crate slog_term;
use slog::{DrainExt, Logger};
mod version {
// include the generated version file, containing the
// VERSION symbol with the version as defined in the
// cargo metadata file.
include!(concat!(env!("OUT_DIR"), "/version.rs"));
}
|
pub fn setup() -> slog::Logger {
let drain = slog_term::streamer().compact().build().fuse();
Logger::root(drain, o!("version" => VERSION))
}
|
use self::version::VERSION;
|
random_line_split
|
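The width computations in `value_string` and `value_binary_string` above deserve a note: `{:#01$x}` takes its pad width from the second argument, and that width must include the two-character `0x`/`0b` prefix, hence the `+ 2`. A standalone sketch of the same formatting:

```rust
/// Hex string padded to the field's nibble count, as in `FieldInfo::value_string`:
/// `(width + 3) / 4` rounds the bit width up to whole hex digits, `+ 2` covers "0x".
fn value_string(value: u64, width: usize) -> String {
    format!("{:#01$x}", value, (width + 3) / 4 + 2)
}

/// Binary string padded to the field's bit width plus the "0b" prefix,
/// as in `FieldInfo::value_binary_string`.
fn value_binary_string(value: u64, width: usize) -> String {
    format!("{:#01$b}", value, width + 2)
}

fn main() {
    // A 6-bit field needs 2 hex digits and 6 binary digits.
    assert_eq!(value_string(0b100101, 6), "0x25");
    assert_eq!(value_binary_string(0b100101, 6), "0b100101");
    // Narrow values are zero-padded to the field width.
    assert_eq!(value_string(0x5, 6), "0x05");
    assert_eq!(value_binary_string(0x1, 6), "0b000001");
}
```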
mod.rs
|
// Copyright 2017 The Xyrosource Team.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
extern crate slog;
extern crate slog_term;
use slog::{DrainExt, Logger};
mod version {
// include the generated version file, containing the
// VERSION symbol with the version as defined in the
// cargo metadata file.
include!(concat!(env!("OUT_DIR"), "/version.rs"));
}
use self::version::VERSION;
pub fn setup() -> slog::Logger
|
{
let drain = slog_term::streamer().compact().build().fuse();
Logger::root(drain, o!("version" => VERSION))
}
|
identifier_body
|
|
rate_limit.rs
|
// Copyright 2016 The Grin Developers
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Provides wrappers for throttling readers and writers
use std::time::Instant;
use std::io;
use futures::*;
use tokio_io::*;
/// A Rate Limited Reader
#[derive(Debug)]
pub struct ThrottledReader<R: AsyncRead> {
reader: R,
/// Max Bytes per second
max: u32,
/// Stores a count of last request and last request time
allowed: usize,
last_check: Instant,
}
#[allow(dead_code)]
impl<R: AsyncRead> ThrottledReader<R> {
/// Adds throttling to a reader.
    /// The resulting reader will read at most `max` bytes per second
pub fn new(reader: R, max: u32) -> Self {
ThrottledReader {
reader: reader,
max: max,
allowed: max as usize,
last_check: Instant::now(),
}
}
/// Get a shared reference to the inner sink.
pub fn get_ref(&self) -> &R
|
/// Get a mutable reference to the inner sink.
pub fn get_mut(&mut self) -> &mut R {
&mut self.reader
}
/// Consumes this combinator, returning the underlying sink.
///
/// Note that this may discard intermediate state of this combinator, so
/// care should be taken to avoid losing resources when this is called.
pub fn into_inner(self) -> R {
self.reader
}
}
impl<R: AsyncRead> io::Read for ThrottledReader<R> {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
// Check passed Time
let time_passed = self.last_check.elapsed();
self.last_check = Instant::now();
self.allowed += time_passed.as_secs() as usize * self.max as usize;
// Throttle
if self.allowed > self.max as usize {
self.allowed = self.max as usize;
}
// Check if Allowed
if self.allowed < 1 {
return Err(io::Error::new(io::ErrorKind::WouldBlock, "Reached Allowed Read Limit"))
}
// Read Max Allowed
        let buf = if buf.len() > self.allowed { &mut buf[0..self.allowed] } else { buf };
        let res = self.reader.read(buf);
        // Decrement allowed by the amount read
if let Ok(n) = res {
self.allowed -= n;
}
res
}
}
impl<R: AsyncRead> AsyncRead for ThrottledReader<R> { }
/// A Rate Limited Writer
#[derive(Debug)]
pub struct ThrottledWriter<W: AsyncWrite> {
writer: W,
/// Max Bytes per second
max: u32,
/// Stores a count of last request and last request time
allowed: usize,
last_check: Instant,
}
#[allow(dead_code)]
impl<W: AsyncWrite> ThrottledWriter<W> {
/// Adds throttling to a writer.
    /// The resulting writer will write at most `max` bytes per second
pub fn new(writer: W, max: u32) -> Self {
ThrottledWriter {
writer: writer,
max: max,
allowed: max as usize,
last_check: Instant::now(),
}
}
/// Get a shared reference to the inner sink.
pub fn get_ref(&self) -> &W {
&self.writer
}
/// Get a mutable reference to the inner sink.
pub fn get_mut(&mut self) -> &mut W {
&mut self.writer
}
/// Consumes this combinator, returning the underlying sink.
///
/// Note that this may discard intermediate state of this combinator, so
/// care should be taken to avoid losing resources when this is called.
pub fn into_inner(self) -> W {
self.writer
}
}
impl<W: AsyncWrite> io::Write for ThrottledWriter<W> {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
// Check passed Time
let time_passed = self.last_check.elapsed();
self.last_check = Instant::now();
self.allowed += time_passed.as_secs() as usize * self.max as usize;
// Throttle
if self.allowed > self.max as usize {
self.allowed = self.max as usize;
}
// Check if Allowed
if self.allowed < 1 {
return Err(io::Error::new(io::ErrorKind::WouldBlock, "Reached Allowed Write Limit"))
}
// Write max allowed
        let buf = if buf.len() > self.allowed { &buf[0..self.allowed] } else { buf };
let res = self.writer.write(buf);
// Decrement Allowed amount written
if let Ok(n) = res {
self.allowed -= n;
}
res
}
fn flush(&mut self) -> io::Result<()> {
self.writer.flush()
}
}
impl<T: AsyncWrite> AsyncWrite for ThrottledWriter<T> {
fn shutdown(&mut self) -> Poll<(), io::Error> {
self.writer.shutdown()
}
}
#[cfg(test)]
mod test {
use super::*;
use std::io::Cursor;
#[test]
fn should_throttle_write() {
let buf = vec![0; 64];
let mut t_buf = ThrottledWriter::new(Cursor::new(buf), 8);
for _ in 0..16 {
let _ = t_buf.write_buf(&mut Cursor::new(vec![1; 8]));
}
let cursor = t_buf.into_inner();
assert_eq!(cursor.position(), 8);
}
#[test]
fn should_throttle_read() {
let buf = vec![1; 64];
let mut t_buf = ThrottledReader::new(Cursor::new(buf), 8);
let mut dst = Cursor::new(vec![0; 64]);
for _ in 0..16 {
let _ = t_buf.read_buf(&mut dst);
}
assert_eq!(dst.position(), 8);
}
}
|
{
&self.reader
}
|
identifier_body
|
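The `ThrottledReader`/`ThrottledWriter` pair above share one refill pattern: a token bucket capped at `max`, topped up by whole elapsed seconds. A minimal sketch of just that arithmetic (the real wrappers return `io::ErrorKind::WouldBlock` when the bucket is empty; this sketch simply hands out zero bytes):

```rust
use std::time::Instant;

/// Token bucket mirroring the `allowed`/`last_check` bookkeeping in
/// `ThrottledReader::read` and `ThrottledWriter::write` above.
struct Bucket {
    max: u32,
    allowed: usize,
    last_check: Instant,
}

impl Bucket {
    fn new(max: u32) -> Self {
        Bucket { max, allowed: max as usize, last_check: Instant::now() }
    }

    /// Returns how many bytes may be transferred right now, at most `want`.
    fn take(&mut self, want: usize) -> usize {
        let elapsed = self.last_check.elapsed();
        self.last_check = Instant::now();
        // Refill by whole elapsed seconds; sub-second elapsed time adds nothing.
        self.allowed += elapsed.as_secs() as usize * self.max as usize;
        // Never bank more than one second's quota.
        if self.allowed > self.max as usize {
            self.allowed = self.max as usize;
        }
        let n = want.min(self.allowed);
        self.allowed -= n;
        n
    }
}

fn main() {
    let mut b = Bucket::new(8);
    assert_eq!(b.take(5), 5); // 5 of this second's 8 tokens
    assert_eq!(b.take(5), 3); // only 3 remain until a second passes
    assert_eq!(b.take(5), 0); // bucket empty
}
```

One consequence of the second-granularity refill, visible in the tests above, is that 16 back-to-back writes against an 8-byte budget land only 8 bytes.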
rate_limit.rs
|
// Copyright 2016 The Grin Developers
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Provides wrappers for throttling readers and writers
use std::time::Instant;
use std::io;
use futures::*;
use tokio_io::*;
/// A Rate Limited Reader
#[derive(Debug)]
pub struct ThrottledReader<R: AsyncRead> {
reader: R,
/// Max Bytes per second
max: u32,
/// Stores a count of last request and last request time
allowed: usize,
last_check: Instant,
}
#[allow(dead_code)]
impl<R: AsyncRead> ThrottledReader<R> {
/// Adds throttling to a reader.
    /// The resulting reader will read at most `max` bytes per second
pub fn new(reader: R, max: u32) -> Self {
ThrottledReader {
reader: reader,
max: max,
allowed: max as usize,
last_check: Instant::now(),
}
}
/// Get a shared reference to the inner sink.
pub fn get_ref(&self) -> &R {
&self.reader
}
/// Get a mutable reference to the inner sink.
pub fn get_mut(&mut self) -> &mut R {
&mut self.reader
}
/// Consumes this combinator, returning the underlying sink.
///
/// Note that this may discard intermediate state of this combinator, so
/// care should be taken to avoid losing resources when this is called.
pub fn into_inner(self) -> R {
self.reader
}
}
impl<R: AsyncRead> io::Read for ThrottledReader<R> {
fn
|
(&mut self, buf: &mut [u8]) -> io::Result<usize> {
// Check passed Time
let time_passed = self.last_check.elapsed();
self.last_check = Instant::now();
self.allowed += time_passed.as_secs() as usize * self.max as usize;
// Throttle
if self.allowed > self.max as usize {
self.allowed = self.max as usize;
}
// Check if Allowed
if self.allowed < 1 {
return Err(io::Error::new(io::ErrorKind::WouldBlock, "Reached Allowed Read Limit"))
}
// Read Max Allowed
        let buf = if buf.len() > self.allowed { &mut buf[0..self.allowed] } else { buf };
        let res = self.reader.read(buf);
        // Decrement allowed by the amount read
if let Ok(n) = res {
self.allowed -= n;
}
res
}
}
impl<R: AsyncRead> AsyncRead for ThrottledReader<R> { }
/// A Rate Limited Writer
#[derive(Debug)]
pub struct ThrottledWriter<W: AsyncWrite> {
writer: W,
/// Max Bytes per second
max: u32,
/// Stores a count of last request and last request time
allowed: usize,
last_check: Instant,
}
#[allow(dead_code)]
impl<W: AsyncWrite> ThrottledWriter<W> {
/// Adds throttling to a writer.
    /// The resulting writer will write at most `max` bytes per second
pub fn new(writer: W, max: u32) -> Self {
ThrottledWriter {
writer: writer,
max: max,
allowed: max as usize,
last_check: Instant::now(),
}
}
/// Get a shared reference to the inner sink.
pub fn get_ref(&self) -> &W {
&self.writer
}
/// Get a mutable reference to the inner sink.
pub fn get_mut(&mut self) -> &mut W {
&mut self.writer
}
/// Consumes this combinator, returning the underlying sink.
///
/// Note that this may discard intermediate state of this combinator, so
/// care should be taken to avoid losing resources when this is called.
pub fn into_inner(self) -> W {
self.writer
}
}
impl<W: AsyncWrite> io::Write for ThrottledWriter<W> {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
// Check passed Time
let time_passed = self.last_check.elapsed();
self.last_check = Instant::now();
self.allowed += time_passed.as_secs() as usize * self.max as usize;
// Throttle
if self.allowed > self.max as usize {
self.allowed = self.max as usize;
}
// Check if Allowed
if self.allowed < 1 {
return Err(io::Error::new(io::ErrorKind::WouldBlock, "Reached Allowed Write Limit"))
}
// Write max allowed
        let buf = if buf.len() > self.allowed { &buf[0..self.allowed] } else { buf };
let res = self.writer.write(buf);
// Decrement Allowed amount written
if let Ok(n) = res {
self.allowed -= n;
}
res
}
fn flush(&mut self) -> io::Result<()> {
self.writer.flush()
}
}
impl<T: AsyncWrite> AsyncWrite for ThrottledWriter<T> {
fn shutdown(&mut self) -> Poll<(), io::Error> {
self.writer.shutdown()
}
}
#[cfg(test)]
mod test {
use super::*;
use std::io::Cursor;
#[test]
fn should_throttle_write() {
let buf = vec![0; 64];
let mut t_buf = ThrottledWriter::new(Cursor::new(buf), 8);
for _ in 0..16 {
let _ = t_buf.write_buf(&mut Cursor::new(vec![1; 8]));
}
let cursor = t_buf.into_inner();
assert_eq!(cursor.position(), 8);
}
#[test]
fn should_throttle_read() {
let buf = vec![1; 64];
let mut t_buf = ThrottledReader::new(Cursor::new(buf), 8);
let mut dst = Cursor::new(vec![0; 64]);
for _ in 0..16 {
let _ = t_buf.read_buf(&mut dst);
}
assert_eq!(dst.position(), 8);
}
}
|
read
|
identifier_name
|
rate_limit.rs
|
// Copyright 2016 The Grin Developers
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Provides wrappers for throttling readers and writers
use std::time::Instant;
use std::io;
use futures::*;
use tokio_io::*;
/// A Rate Limited Reader
#[derive(Debug)]
pub struct ThrottledReader<R: AsyncRead> {
reader: R,
/// Max Bytes per second
max: u32,
/// Stores a count of last request and last request time
allowed: usize,
last_check: Instant,
}
#[allow(dead_code)]
impl<R: AsyncRead> ThrottledReader<R> {
/// Adds throttling to a reader.
    /// The resulting reader will read at most `max` bytes per second
pub fn new(reader: R, max: u32) -> Self {
ThrottledReader {
reader: reader,
max: max,
allowed: max as usize,
last_check: Instant::now(),
}
}
/// Get a shared reference to the inner sink.
pub fn get_ref(&self) -> &R {
&self.reader
}
/// Get a mutable reference to the inner sink.
pub fn get_mut(&mut self) -> &mut R {
&mut self.reader
}
/// Consumes this combinator, returning the underlying sink.
///
/// Note that this may discard intermediate state of this combinator, so
/// care should be taken to avoid losing resources when this is called.
pub fn into_inner(self) -> R {
self.reader
}
}
impl<R: AsyncRead> io::Read for ThrottledReader<R> {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
// Check passed Time
let time_passed = self.last_check.elapsed();
self.last_check = Instant::now();
self.allowed += time_passed.as_secs() as usize * self.max as usize;
// Throttle
if self.allowed > self.max as usize {
self.allowed = self.max as usize;
}
// Check if Allowed
if self.allowed < 1 {
return Err(io::Error::new(io::ErrorKind::WouldBlock, "Reached Allowed Read Limit"))
}
// Read Max Allowed
        let buf = if buf.len() > self.allowed { &mut buf[0..self.allowed] } else { buf };
        let res = self.reader.read(buf);
        // Decrement allowed by the amount read
if let Ok(n) = res {
self.allowed -= n;
}
res
}
}
impl<R: AsyncRead> AsyncRead for ThrottledReader<R> { }
/// A Rate Limited Writer
#[derive(Debug)]
pub struct ThrottledWriter<W: AsyncWrite> {
writer: W,
/// Max Bytes per second
max: u32,
/// Stores a count of last request and last request time
allowed: usize,
last_check: Instant,
}
#[allow(dead_code)]
impl<W: AsyncWrite> ThrottledWriter<W> {
/// Adds throttling to a writer.
    /// The resulting writer will write at most `max` bytes per second
pub fn new(writer: W, max: u32) -> Self {
ThrottledWriter {
writer: writer,
max: max,
allowed: max as usize,
last_check: Instant::now(),
}
}
/// Get a shared reference to the inner sink.
pub fn get_ref(&self) -> &W {
&self.writer
}
/// Get a mutable reference to the inner sink.
pub fn get_mut(&mut self) -> &mut W {
&mut self.writer
}
/// Consumes this combinator, returning the underlying sink.
///
/// Note that this may discard intermediate state of this combinator, so
/// care should be taken to avoid losing resources when this is called.
pub fn into_inner(self) -> W {
self.writer
}
}
impl<W: AsyncWrite> io::Write for ThrottledWriter<W> {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
// Check passed Time
let time_passed = self.last_check.elapsed();
self.last_check = Instant::now();
self.allowed += time_passed.as_secs() as usize * self.max as usize;
// Throttle
if self.allowed > self.max as usize {
self.allowed = self.max as usize;
}
// Check if Allowed
if self.allowed < 1 {
return Err(io::Error::new(io::ErrorKind::WouldBlock, "Reached Allowed Write Limit"))
}
// Write max allowed
        let buf = if buf.len() > self.allowed { &buf[0..self.allowed] } else { buf };
let res = self.writer.write(buf);
// Decrement Allowed amount written
if let Ok(n) = res {
self.allowed -= n;
}
res
}
fn flush(&mut self) -> io::Result<()> {
self.writer.flush()
}
}
impl<T: AsyncWrite> AsyncWrite for ThrottledWriter<T> {
fn shutdown(&mut self) -> Poll<(), io::Error> {
self.writer.shutdown()
}
}
#[cfg(test)]
mod test {
use super::*;
use std::io::Cursor;
#[test]
fn should_throttle_write() {
let buf = vec![0; 64];
let mut t_buf = ThrottledWriter::new(Cursor::new(buf), 8);
for _ in 0..16 {
|
assert_eq!(cursor.position(), 8);
}
#[test]
fn should_throttle_read() {
let buf = vec![1; 64];
let mut t_buf = ThrottledReader::new(Cursor::new(buf), 8);
let mut dst = Cursor::new(vec![0; 64]);
for _ in 0..16 {
let _ = t_buf.read_buf(&mut dst);
}
assert_eq!(dst.position(), 8);
}
}
|
let _ = t_buf.write_buf(&mut Cursor::new(vec![1; 8]));
}
let cursor = t_buf.into_inner();
|
random_line_split
|
rate_limit.rs
|
// Copyright 2016 The Grin Developers
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Provides wrappers for throttling readers and writers
use std::time::Instant;
use std::io;
use futures::*;
use tokio_io::*;
/// A Rate Limited Reader
#[derive(Debug)]
pub struct ThrottledReader<R: AsyncRead> {
reader: R,
/// Max Bytes per second
max: u32,
/// Stores a count of last request and last request time
allowed: usize,
last_check: Instant,
}
#[allow(dead_code)]
impl<R: AsyncRead> ThrottledReader<R> {
/// Adds throttling to a reader.
/// The resulting reader will read at most `max` bytes per second
pub fn new(reader: R, max: u32) -> Self {
ThrottledReader {
reader: reader,
max: max,
allowed: max as usize,
last_check: Instant::now(),
}
}
/// Get a shared reference to the inner reader.
pub fn get_ref(&self) -> &R {
&self.reader
}
/// Get a mutable reference to the inner reader.
pub fn get_mut(&mut self) -> &mut R {
&mut self.reader
}
/// Consumes this combinator, returning the underlying reader.
///
/// Note that this may discard intermediate state of this combinator, so
/// care should be taken to avoid losing resources when this is called.
pub fn into_inner(self) -> R {
self.reader
}
}
impl<R: AsyncRead> io::Read for ThrottledReader<R> {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
// Account for time passed since the last check
let time_passed = self.last_check.elapsed();
self.last_check = Instant::now();
self.allowed += time_passed.as_secs() as usize * self.max as usize;
// Throttle
if self.allowed > self.max as usize {
self.allowed = self.max as usize;
}
// Check if Allowed
if self.allowed < 1 {
return Err(io::Error::new(io::ErrorKind::WouldBlock, "Reached Allowed Read Limit"))
}
// Read at most the allowed number of bytes
let buf = if buf.len() > self.allowed { &mut buf[0..self.allowed] } else { buf };
let res = self.reader.read(buf);
// Decrement the allowance by the amount read
if let Ok(n) = res {
self.allowed -= n;
}
res
}
}
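The budget-refill step shared by the read and write paths above (credit whole elapsed seconds, then cap at one second's worth of bytes) can be sketched in isolation. This is a minimal sketch; `refill` is a hypothetical helper name, not part of the crate:

```rust
use std::time::Duration;

/// Token-bucket refill as used by the throttled reader/writer:
/// each whole elapsed second adds `max` bytes of budget, and the
/// budget is capped at `max` (one second's worth of bytes).
fn refill(allowed: usize, elapsed: Duration, max: u32) -> usize {
    let replenished = allowed + elapsed.as_secs() as usize * max as usize;
    replenished.min(max as usize)
}

fn main() {
    // A long idle period never grants more than one second's budget.
    assert_eq!(refill(3, Duration::from_secs(10), 8), 8);
    // Sub-second elapses add nothing, because as_secs() truncates.
    assert_eq!(refill(3, Duration::from_millis(900), 8), 3);
}
```

Note the truncation in `as_secs()`: several calls less than a second apart credit no new budget, which is why the tests above only ever see one second's worth of bytes go through.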
impl<R: AsyncRead> AsyncRead for ThrottledReader<R> { }
/// A Rate Limited Writer
#[derive(Debug)]
pub struct ThrottledWriter<W: AsyncWrite> {
writer: W,
/// Max Bytes per second
max: u32,
/// Stores a count of last request and last request time
allowed: usize,
last_check: Instant,
}
#[allow(dead_code)]
impl<W: AsyncWrite> ThrottledWriter<W> {
/// Adds throttling to a writer.
/// The resulting writer will write at most `max` bytes per second
pub fn new(writer: W, max: u32) -> Self {
ThrottledWriter {
writer: writer,
max: max,
allowed: max as usize,
last_check: Instant::now(),
}
}
/// Get a shared reference to the inner writer.
pub fn get_ref(&self) -> &W {
&self.writer
}
/// Get a mutable reference to the inner writer.
pub fn get_mut(&mut self) -> &mut W {
&mut self.writer
}
/// Consumes this combinator, returning the underlying writer.
///
/// Note that this may discard intermediate state of this combinator, so
/// care should be taken to avoid losing resources when this is called.
pub fn into_inner(self) -> W {
self.writer
}
}
impl<W: AsyncWrite> io::Write for ThrottledWriter<W> {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
// Account for time passed since the last check
let time_passed = self.last_check.elapsed();
self.last_check = Instant::now();
self.allowed += time_passed.as_secs() as usize * self.max as usize;
// Throttle
if self.allowed > self.max as usize {
self.allowed = self.max as usize;
}
// Check if Allowed
if self.allowed < 1 {
return Err(io::Error::new(io::ErrorKind::WouldBlock, "Reached Allowed Write Limit"))
}
// Write at most the allowed number of bytes
let buf = if buf.len() > self.allowed { &buf[0..self.allowed] } else { buf };
let res = self.writer.write(buf);
// Decrement the allowance by the amount written
if let Ok(n) = res
|
res
}
fn flush(&mut self) -> io::Result<()> {
self.writer.flush()
}
}
impl<T: AsyncWrite> AsyncWrite for ThrottledWriter<T> {
fn shutdown(&mut self) -> Poll<(), io::Error> {
self.writer.shutdown()
}
}
#[cfg(test)]
mod test {
use super::*;
use std::io::Cursor;
#[test]
fn should_throttle_write() {
let buf = vec![0; 64];
let mut t_buf = ThrottledWriter::new(Cursor::new(buf), 8);
for _ in 0..16 {
let _ = t_buf.write_buf(&mut Cursor::new(vec![1; 8]));
}
let cursor = t_buf.into_inner();
assert_eq!(cursor.position(), 8);
}
#[test]
fn should_throttle_read() {
let buf = vec![1; 64];
let mut t_buf = ThrottledReader::new(Cursor::new(buf), 8);
let mut dst = Cursor::new(vec![0; 64]);
for _ in 0..16 {
let _ = t_buf.read_buf(&mut dst);
}
assert_eq!(dst.position(), 8);
}
}
|
{
self.allowed -= n;
}
|
conditional_block
|
markdown.rs
|
// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::io;
use std::string::String;
use core;
use getopts;
use testing;
use externalfiles::ExternalHtml;
use html::escape::Escape;
use html::markdown;
use html::markdown::{Markdown, MarkdownWithToc, find_testable_code, reset_headers};
use test::Collector;
/// Separate any lines at the start of the file that begin with `%`.
fn extract_leading_metadata<'a>(s: &'a str) -> (Vec<&'a str>, &'a str) {
let mut metadata = Vec::new();
for line in s.lines() {
if line.starts_with("%") {
// remove %<whitespace>
metadata.push(line.slice_from(1).trim_left())
} else {
let line_start_byte = s.subslice_offset(line);
return (metadata, s.slice_from(line_start_byte));
}
}
// if we're here, then all lines were metadata % lines.
(metadata, "")
}
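The `extract_leading_metadata` function above is written against long-removed pre-1.0 APIs (`slice_from`, `subslice_offset`, `trim_left`). A modern-Rust sketch of the same logic, with `leading_metadata` as a hypothetical stand-in name, might look like:

```rust
/// Collect leading `%`-prefixed lines as metadata and return them
/// together with the remainder of the input.
fn leading_metadata(s: &str) -> (Vec<&str>, &str) {
    let mut metadata = Vec::new();
    for line in s.lines() {
        if let Some(rest) = line.strip_prefix('%') {
            // remove the `%` and any following whitespace
            metadata.push(rest.trim_start());
        } else {
            // Byte offset of this line within `s`, replacing the
            // old subslice_offset call.
            let start = line.as_ptr() as usize - s.as_ptr() as usize;
            return (metadata, &s[start..]);
        }
    }
    // All lines were metadata lines.
    (metadata, "")
}

fn main() {
    let (meta, body) = leading_metadata("% Title\n% Author\nBody text");
    assert_eq!(meta, vec!["Title", "Author"]);
    assert_eq!(body, "Body text");
}
```

The pointer-arithmetic offset is safe here because `line` is always a subslice of `s`, which is exactly the guarantee the old `subslice_offset` relied on.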
/// Render `input` (e.g. "foo.md") into an HTML file in `output`
/// (e.g. output = "bar" => "bar/foo.html").
pub fn render(input: &str, mut output: Path, matches: &getopts::Matches,
external_html: &ExternalHtml, include_toc: bool) -> int {
let input_p = Path::new(input);
output.push(input_p.filestem().unwrap());
output.set_extension("html");
let mut css = String::new();
for name in matches.opt_strs("markdown-css").iter() {
let s = format!("<link rel=\"stylesheet\" type=\"text/css\" href=\"{}\">\n", name);
css.push_str(s.as_slice())
}
let input_str = load_or_return!(input, 1, 2);
let playground = matches.opt_str("markdown-playground-url");
if playground.is_some() {
markdown::playground_krate.replace(Some(None));
}
let playground = playground.unwrap_or("".to_string());
let mut out = match io::File::create(&output) {
Err(e) => {
let _ = writeln!(&mut io::stderr(),
"error opening `{}` for writing: {}",
output.display(), e);
return 4;
}
Ok(f) => f
};
let (metadata, text) = extract_leading_metadata(input_str.as_slice());
if metadata.len() == 0 {
let _ = writeln!(&mut io::stderr(),
"invalid markdown file: expecting initial line with `%...TITLE...`");
return 5;
}
let title = metadata[0].as_slice();
reset_headers();
let rendered = if include_toc {
format!("{}", MarkdownWithToc(text))
} else {
format!("{}", Markdown(text))
};
let err = write!(
&mut out,
r#"<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="generator" content="rustdoc">
<title>{title}</title>
{css}
{in_header}
</head>
<body class="rustdoc">
<!--[if lte IE 8]>
<div class="warning">
This old browser is unsupported and will most likely display funky
things.
</div>
|
{text}
<script type="text/javascript">
window.playgroundUrl = "{playground}";
</script>
{after_content}
</body>
</html>"#,
title = Escape(title),
css = css,
in_header = external_html.in_header,
before_content = external_html.before_content,
text = rendered,
after_content = external_html.after_content,
playground = playground,
);
match err {
Err(e) => {
let _ = writeln!(&mut io::stderr(),
"error writing to `{}`: {}",
output.display(), e);
6
}
Ok(_) => 0
}
}
/// Run any tests/code examples in the markdown file `input`.
pub fn test(input: &str, libs: Vec<Path>, externs: core::Externs,
mut test_args: Vec<String>) -> int {
let input_str = load_or_return!(input, 1, 2);
let mut collector = Collector::new(input.to_string(), libs, externs, true);
find_testable_code(input_str.as_slice(), &mut collector);
test_args.insert(0, "rustdoctest".to_string());
testing::test_main(test_args.as_slice(), collector.tests);
0
}
|
<![endif]-->
{before_content}
<h1 class="title">{title}</h1>
|
random_line_split
|
markdown.rs
|
// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::io;
use std::string::String;
use core;
use getopts;
use testing;
use externalfiles::ExternalHtml;
use html::escape::Escape;
use html::markdown;
use html::markdown::{Markdown, MarkdownWithToc, find_testable_code, reset_headers};
use test::Collector;
/// Separate any lines at the start of the file that begin with `%`.
fn extract_leading_metadata<'a>(s: &'a str) -> (Vec<&'a str>, &'a str) {
let mut metadata = Vec::new();
for line in s.lines() {
if line.starts_with("%") {
// remove %<whitespace>
metadata.push(line.slice_from(1).trim_left())
} else {
let line_start_byte = s.subslice_offset(line);
return (metadata, s.slice_from(line_start_byte));
}
}
// if we're here, then all lines were metadata % lines.
(metadata, "")
}
/// Render `input` (e.g. "foo.md") into an HTML file in `output`
/// (e.g. output = "bar" => "bar/foo.html").
pub fn
|
(input: &str, mut output: Path, matches: &getopts::Matches,
external_html: &ExternalHtml, include_toc: bool) -> int {
let input_p = Path::new(input);
output.push(input_p.filestem().unwrap());
output.set_extension("html");
let mut css = String::new();
for name in matches.opt_strs("markdown-css").iter() {
let s = format!("<link rel=\"stylesheet\" type=\"text/css\" href=\"{}\">\n", name);
css.push_str(s.as_slice())
}
let input_str = load_or_return!(input, 1, 2);
let playground = matches.opt_str("markdown-playground-url");
if playground.is_some() {
markdown::playground_krate.replace(Some(None));
}
let playground = playground.unwrap_or("".to_string());
let mut out = match io::File::create(&output) {
Err(e) => {
let _ = writeln!(&mut io::stderr(),
"error opening `{}` for writing: {}",
output.display(), e);
return 4;
}
Ok(f) => f
};
let (metadata, text) = extract_leading_metadata(input_str.as_slice());
if metadata.len() == 0 {
let _ = writeln!(&mut io::stderr(),
"invalid markdown file: expecting initial line with `%...TITLE...`");
return 5;
}
let title = metadata[0].as_slice();
reset_headers();
let rendered = if include_toc {
format!("{}", MarkdownWithToc(text))
} else {
format!("{}", Markdown(text))
};
let err = write!(
&mut out,
r#"<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="generator" content="rustdoc">
<title>{title}</title>
{css}
{in_header}
</head>
<body class="rustdoc">
<!--[if lte IE 8]>
<div class="warning">
This old browser is unsupported and will most likely display funky
things.
</div>
<![endif]-->
{before_content}
<h1 class="title">{title}</h1>
{text}
<script type="text/javascript">
window.playgroundUrl = "{playground}";
</script>
{after_content}
</body>
</html>"#,
title = Escape(title),
css = css,
in_header = external_html.in_header,
before_content = external_html.before_content,
text = rendered,
after_content = external_html.after_content,
playground = playground,
);
match err {
Err(e) => {
let _ = writeln!(&mut io::stderr(),
"error writing to `{}`: {}",
output.display(), e);
6
}
Ok(_) => 0
}
}
/// Run any tests/code examples in the markdown file `input`.
pub fn test(input: &str, libs: Vec<Path>, externs: core::Externs,
mut test_args: Vec<String>) -> int {
let input_str = load_or_return!(input, 1, 2);
let mut collector = Collector::new(input.to_string(), libs, externs, true);
find_testable_code(input_str.as_slice(), &mut collector);
test_args.insert(0, "rustdoctest".to_string());
testing::test_main(test_args.as_slice(), collector.tests);
0
}
|
render
|
identifier_name
|
markdown.rs
|
// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::io;
use std::string::String;
use core;
use getopts;
use testing;
use externalfiles::ExternalHtml;
use html::escape::Escape;
use html::markdown;
use html::markdown::{Markdown, MarkdownWithToc, find_testable_code, reset_headers};
use test::Collector;
/// Separate any lines at the start of the file that begin with `%`.
fn extract_leading_metadata<'a>(s: &'a str) -> (Vec<&'a str>, &'a str)
|
/// Render `input` (e.g. "foo.md") into an HTML file in `output`
/// (e.g. output = "bar" => "bar/foo.html").
pub fn render(input: &str, mut output: Path, matches: &getopts::Matches,
external_html: &ExternalHtml, include_toc: bool) -> int {
let input_p = Path::new(input);
output.push(input_p.filestem().unwrap());
output.set_extension("html");
let mut css = String::new();
for name in matches.opt_strs("markdown-css").iter() {
let s = format!("<link rel=\"stylesheet\" type=\"text/css\" href=\"{}\">\n", name);
css.push_str(s.as_slice())
}
let input_str = load_or_return!(input, 1, 2);
let playground = matches.opt_str("markdown-playground-url");
if playground.is_some() {
markdown::playground_krate.replace(Some(None));
}
let playground = playground.unwrap_or("".to_string());
let mut out = match io::File::create(&output) {
Err(e) => {
let _ = writeln!(&mut io::stderr(),
"error opening `{}` for writing: {}",
output.display(), e);
return 4;
}
Ok(f) => f
};
let (metadata, text) = extract_leading_metadata(input_str.as_slice());
if metadata.len() == 0 {
let _ = writeln!(&mut io::stderr(),
"invalid markdown file: expecting initial line with `%...TITLE...`");
return 5;
}
let title = metadata[0].as_slice();
reset_headers();
let rendered = if include_toc {
format!("{}", MarkdownWithToc(text))
} else {
format!("{}", Markdown(text))
};
let err = write!(
&mut out,
r#"<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="generator" content="rustdoc">
<title>{title}</title>
{css}
{in_header}
</head>
<body class="rustdoc">
<!--[if lte IE 8]>
<div class="warning">
This old browser is unsupported and will most likely display funky
things.
</div>
<![endif]-->
{before_content}
<h1 class="title">{title}</h1>
{text}
<script type="text/javascript">
window.playgroundUrl = "{playground}";
</script>
{after_content}
</body>
</html>"#,
title = Escape(title),
css = css,
in_header = external_html.in_header,
before_content = external_html.before_content,
text = rendered,
after_content = external_html.after_content,
playground = playground,
);
match err {
Err(e) => {
let _ = writeln!(&mut io::stderr(),
"error writing to `{}`: {}",
output.display(), e);
6
}
Ok(_) => 0
}
}
/// Run any tests/code examples in the markdown file `input`.
pub fn test(input: &str, libs: Vec<Path>, externs: core::Externs,
mut test_args: Vec<String>) -> int {
let input_str = load_or_return!(input, 1, 2);
let mut collector = Collector::new(input.to_string(), libs, externs, true);
find_testable_code(input_str.as_slice(), &mut collector);
test_args.insert(0, "rustdoctest".to_string());
testing::test_main(test_args.as_slice(), collector.tests);
0
}
|
{
let mut metadata = Vec::new();
for line in s.lines() {
if line.starts_with("%") {
// remove %<whitespace>
metadata.push(line.slice_from(1).trim_left())
} else {
let line_start_byte = s.subslice_offset(line);
return (metadata, s.slice_from(line_start_byte));
}
}
// if we're here, then all lines were metadata % lines.
(metadata, "")
}
|
identifier_body
|
markdown.rs
|
// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::io;
use std::string::String;
use core;
use getopts;
use testing;
use externalfiles::ExternalHtml;
use html::escape::Escape;
use html::markdown;
use html::markdown::{Markdown, MarkdownWithToc, find_testable_code, reset_headers};
use test::Collector;
/// Separate any lines at the start of the file that begin with `%`.
fn extract_leading_metadata<'a>(s: &'a str) -> (Vec<&'a str>, &'a str) {
let mut metadata = Vec::new();
for line in s.lines() {
if line.starts_with("%") {
// remove %<whitespace>
metadata.push(line.slice_from(1).trim_left())
} else {
let line_start_byte = s.subslice_offset(line);
return (metadata, s.slice_from(line_start_byte));
}
}
// if we're here, then all lines were metadata % lines.
(metadata, "")
}
/// Render `input` (e.g. "foo.md") into an HTML file in `output`
/// (e.g. output = "bar" => "bar/foo.html").
pub fn render(input: &str, mut output: Path, matches: &getopts::Matches,
external_html: &ExternalHtml, include_toc: bool) -> int {
let input_p = Path::new(input);
output.push(input_p.filestem().unwrap());
output.set_extension("html");
let mut css = String::new();
for name in matches.opt_strs("markdown-css").iter() {
let s = format!("<link rel=\"stylesheet\" type=\"text/css\" href=\"{}\">\n", name);
css.push_str(s.as_slice())
}
let input_str = load_or_return!(input, 1, 2);
let playground = matches.opt_str("markdown-playground-url");
if playground.is_some() {
markdown::playground_krate.replace(Some(None));
}
let playground = playground.unwrap_or("".to_string());
let mut out = match io::File::create(&output) {
Err(e) =>
|
Ok(f) => f
};
let (metadata, text) = extract_leading_metadata(input_str.as_slice());
if metadata.len() == 0 {
let _ = writeln!(&mut io::stderr(),
"invalid markdown file: expecting initial line with `%...TITLE...`");
return 5;
}
let title = metadata[0].as_slice();
reset_headers();
let rendered = if include_toc {
format!("{}", MarkdownWithToc(text))
} else {
format!("{}", Markdown(text))
};
let err = write!(
&mut out,
r#"<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="generator" content="rustdoc">
<title>{title}</title>
{css}
{in_header}
</head>
<body class="rustdoc">
<!--[if lte IE 8]>
<div class="warning">
This old browser is unsupported and will most likely display funky
things.
</div>
<![endif]-->
{before_content}
<h1 class="title">{title}</h1>
{text}
<script type="text/javascript">
window.playgroundUrl = "{playground}";
</script>
{after_content}
</body>
</html>"#,
title = Escape(title),
css = css,
in_header = external_html.in_header,
before_content = external_html.before_content,
text = rendered,
after_content = external_html.after_content,
playground = playground,
);
match err {
Err(e) => {
let _ = writeln!(&mut io::stderr(),
"error writing to `{}`: {}",
output.display(), e);
6
}
Ok(_) => 0
}
}
/// Run any tests/code examples in the markdown file `input`.
pub fn test(input: &str, libs: Vec<Path>, externs: core::Externs,
mut test_args: Vec<String>) -> int {
let input_str = load_or_return!(input, 1, 2);
let mut collector = Collector::new(input.to_string(), libs, externs, true);
find_testable_code(input_str.as_slice(), &mut collector);
test_args.insert(0, "rustdoctest".to_string());
testing::test_main(test_args.as_slice(), collector.tests);
0
}
|
{
let _ = writeln!(&mut io::stderr(),
"error opening `{}` for writing: {}",
output.display(), e);
return 4;
}
|
conditional_block
|
mod.rs
|
pub mod default;
pub mod plugged;
|
#[derive(Clone, Debug, Ord, PartialOrd, Eq, PartialEq)]
pub enum Tag {
Encrypted(Vec<u8>, Vec<u8>),
PlainText(Vec<u8>, String)
}
#[derive(Debug)]
pub enum TagName {
OfEncrypted(Vec<u8>),
OfPlain(Vec<u8>),
}
#[derive(Clone, Debug)]
pub struct StorageRecord {
pub id: Vec<u8>,
pub value: Option<EncryptedValue>,
pub type_: Option<Vec<u8>>,
pub tags: Option<Vec<Tag>>,
}
impl StorageRecord {
fn new(id: Vec<u8>, value: Option<EncryptedValue>, type_: Option<Vec<u8>>, tags: Option<Vec<Tag>>) -> Self {
Self {
id,
value,
type_,
tags,
}
}
}
pub trait StorageIterator {
fn next(&mut self) -> Result<Option<StorageRecord>, IndyError>;
fn get_total_count(&self) -> Result<Option<usize>, IndyError>;
}
pub trait WalletStorage {
fn get(&self, type_: &[u8], id: &[u8], options: &str) -> Result<StorageRecord, IndyError>;
fn add(&self, type_: &[u8], id: &[u8], value: &EncryptedValue, tags: &[Tag]) -> Result<(), IndyError>;
fn update(&self, type_: &[u8], id: &[u8], value: &EncryptedValue) -> Result<(), IndyError>;
fn add_tags(&self, type_: &[u8], id: &[u8], tags: &[Tag]) -> Result<(), IndyError>;
fn update_tags(&self, type_: &[u8], id: &[u8], tags: &[Tag]) -> Result<(), IndyError>;
fn delete_tags(&self, type_: &[u8], id: &[u8], tag_names: &[TagName]) -> Result<(), IndyError>;
fn delete(&self, type_: &[u8], id: &[u8]) -> Result<(), IndyError>;
fn get_storage_metadata(&self) -> Result<Vec<u8>, IndyError>;
fn set_storage_metadata(&self, metadata: &[u8]) -> Result<(), IndyError>;
fn get_all(&self) -> Result<Box<dyn StorageIterator>, IndyError>;
fn search(&self, type_: &[u8], query: &language::Operator, options: Option<&str>) -> Result<Box<dyn StorageIterator>, IndyError>;
fn close(&mut self) -> Result<(), IndyError>;
}
pub trait WalletStorageType {
fn create_storage(&self, id: &str, config: Option<&str>, credentials: Option<&str>, metadata: &[u8]) -> Result<(), IndyError>;
fn open_storage(&self, id: &str, config: Option<&str>, credentials: Option<&str>) -> Result<Box<dyn WalletStorage>, IndyError>;
fn delete_storage(&self, id: &str, config: Option<&str>, credentials: Option<&str>) -> Result<(), IndyError>;
}
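The `StorageIterator` contract above (fallible `next` plus an optional total count) can be sketched with a trivial in-memory backing store. This is an illustrative sketch only: `StorageError`, `Record`, and `VecIter` are hypothetical stand-ins, since `IndyError` and `StorageRecord` live elsewhere in the crate:

```rust
// Stand-in for the crate's IndyError.
#[derive(Debug)]
struct StorageError;

// Stand-in for StorageRecord, keeping only the id field.
#[derive(Clone, Debug, PartialEq)]
struct Record {
    id: Vec<u8>,
}

// Same shape as the StorageIterator trait: fallible iteration plus
// an optional total count known up front.
trait Iter {
    fn next(&mut self) -> Result<Option<Record>, StorageError>;
    fn get_total_count(&self) -> Result<Option<usize>, StorageError>;
}

struct VecIter {
    total: usize,
    items: std::vec::IntoIter<Record>,
}

impl Iter for VecIter {
    fn next(&mut self) -> Result<Option<Record>, StorageError> {
        // An in-memory store never fails, so this is always Ok.
        Ok(self.items.next())
    }
    fn get_total_count(&self) -> Result<Option<usize>, StorageError> {
        Ok(Some(self.total))
    }
}

fn main() {
    let recs = vec![Record { id: b"a".to_vec() }, Record { id: b"b".to_vec() }];
    let mut it = VecIter { total: recs.len(), items: recs.into_iter() };
    assert_eq!(it.get_total_count().unwrap(), Some(2));
    assert_eq!(it.next().unwrap().unwrap().id, b"a".to_vec());
}
```

Returning `Option<usize>` from `get_total_count` lets a backend that cannot cheaply count (e.g. a streaming plugged storage) report `None` instead of paying for a full scan.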
|
use errors::prelude::*;
use services::wallet::language;
use services::wallet::wallet::EncryptedValue;
|
random_line_split
|
mod.rs
|
pub mod default;
pub mod plugged;
use errors::prelude::*;
use services::wallet::language;
use services::wallet::wallet::EncryptedValue;
#[derive(Clone, Debug, Ord, PartialOrd, Eq, PartialEq)]
pub enum
|
{
Encrypted(Vec<u8>, Vec<u8>),
PlainText(Vec<u8>, String)
}
#[derive(Debug)]
pub enum TagName {
OfEncrypted(Vec<u8>),
OfPlain(Vec<u8>),
}
#[derive(Clone, Debug)]
pub struct StorageRecord {
pub id: Vec<u8>,
pub value: Option<EncryptedValue>,
pub type_: Option<Vec<u8>>,
pub tags: Option<Vec<Tag>>,
}
impl StorageRecord {
fn new(id: Vec<u8>, value: Option<EncryptedValue>, type_: Option<Vec<u8>>, tags: Option<Vec<Tag>>) -> Self {
Self {
id,
value,
type_,
tags,
}
}
}
pub trait StorageIterator {
fn next(&mut self) -> Result<Option<StorageRecord>, IndyError>;
fn get_total_count(&self) -> Result<Option<usize>, IndyError>;
}
pub trait WalletStorage {
fn get(&self, type_: &[u8], id: &[u8], options: &str) -> Result<StorageRecord, IndyError>;
fn add(&self, type_: &[u8], id: &[u8], value: &EncryptedValue, tags: &[Tag]) -> Result<(), IndyError>;
fn update(&self, type_: &[u8], id: &[u8], value: &EncryptedValue) -> Result<(), IndyError>;
fn add_tags(&self, type_: &[u8], id: &[u8], tags: &[Tag]) -> Result<(), IndyError>;
fn update_tags(&self, type_: &[u8], id: &[u8], tags: &[Tag]) -> Result<(), IndyError>;
fn delete_tags(&self, type_: &[u8], id: &[u8], tag_names: &[TagName]) -> Result<(), IndyError>;
fn delete(&self, type_: &[u8], id: &[u8]) -> Result<(), IndyError>;
fn get_storage_metadata(&self) -> Result<Vec<u8>, IndyError>;
fn set_storage_metadata(&self, metadata: &[u8]) -> Result<(), IndyError>;
fn get_all(&self) -> Result<Box<dyn StorageIterator>, IndyError>;
fn search(&self, type_: &[u8], query: &language::Operator, options: Option<&str>) -> Result<Box<dyn StorageIterator>, IndyError>;
fn close(&mut self) -> Result<(), IndyError>;
}
pub trait WalletStorageType {
fn create_storage(&self, id: &str, config: Option<&str>, credentials: Option<&str>, metadata: &[u8]) -> Result<(), IndyError>;
fn open_storage(&self, id: &str, config: Option<&str>, credentials: Option<&str>) -> Result<Box<dyn WalletStorage>, IndyError>;
fn delete_storage(&self, id: &str, config: Option<&str>, credentials: Option<&str>) -> Result<(), IndyError>;
}
|
Tag
|
identifier_name
|
drop-env.rs
|
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// run-pass
#![feature(generators, generator_trait)]
use std::ops::Generator;
use std::sync::atomic::{AtomicUsize, ATOMIC_USIZE_INIT, Ordering};
static A: AtomicUsize = ATOMIC_USIZE_INIT;
struct B;
impl Drop for B {
fn
|
(&mut self) {
A.fetch_add(1, Ordering::SeqCst);
}
}
fn main() {
t1();
t2();
t3();
}
fn t1() {
let b = B;
let mut foo = || {
yield;
drop(b);
};
let n = A.load(Ordering::SeqCst);
drop(unsafe { foo.resume() });
assert_eq!(A.load(Ordering::SeqCst), n);
drop(foo);
assert_eq!(A.load(Ordering::SeqCst), n + 1);
}
fn t2() {
let b = B;
let mut foo = || {
yield b;
};
let n = A.load(Ordering::SeqCst);
drop(unsafe { foo.resume() });
assert_eq!(A.load(Ordering::SeqCst), n + 1);
drop(foo);
assert_eq!(A.load(Ordering::SeqCst), n + 1);
}
fn t3() {
let b = B;
let foo = || {
yield;
drop(b);
};
let n = A.load(Ordering::SeqCst);
assert_eq!(A.load(Ordering::SeqCst), n);
drop(foo);
assert_eq!(A.load(Ordering::SeqCst), n + 1);
}
|
drop
|
identifier_name
|
drop-env.rs
|
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// run-pass
#![feature(generators, generator_trait)]
use std::ops::Generator;
use std::sync::atomic::{AtomicUsize, ATOMIC_USIZE_INIT, Ordering};
static A: AtomicUsize = ATOMIC_USIZE_INIT;
|
impl Drop for B {
fn drop(&mut self) {
A.fetch_add(1, Ordering::SeqCst);
}
}
fn main() {
t1();
t2();
t3();
}
fn t1() {
let b = B;
let mut foo = || {
yield;
drop(b);
};
let n = A.load(Ordering::SeqCst);
drop(unsafe { foo.resume() });
assert_eq!(A.load(Ordering::SeqCst), n);
drop(foo);
assert_eq!(A.load(Ordering::SeqCst), n + 1);
}
fn t2() {
let b = B;
let mut foo = || {
yield b;
};
let n = A.load(Ordering::SeqCst);
drop(unsafe { foo.resume() });
assert_eq!(A.load(Ordering::SeqCst), n + 1);
drop(foo);
assert_eq!(A.load(Ordering::SeqCst), n + 1);
}
fn t3() {
let b = B;
let foo = || {
yield;
drop(b);
};
let n = A.load(Ordering::SeqCst);
assert_eq!(A.load(Ordering::SeqCst), n);
drop(foo);
assert_eq!(A.load(Ordering::SeqCst), n + 1);
}
|
struct B;
|
random_line_split
|
drop-env.rs
|
// Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// run-pass
#![feature(generators, generator_trait)]
use std::ops::Generator;
use std::sync::atomic::{AtomicUsize, ATOMIC_USIZE_INIT, Ordering};
static A: AtomicUsize = ATOMIC_USIZE_INIT;
struct B;
impl Drop for B {
fn drop(&mut self) {
A.fetch_add(1, Ordering::SeqCst);
}
}
fn main() {
t1();
t2();
t3();
}
fn t1()
|
fn t2() {
let b = B;
let mut foo = || {
yield b;
};
let n = A.load(Ordering::SeqCst);
drop(unsafe { foo.resume() });
assert_eq!(A.load(Ordering::SeqCst), n + 1);
drop(foo);
assert_eq!(A.load(Ordering::SeqCst), n + 1);
}
fn t3() {
let b = B;
let foo = || {
yield;
drop(b);
};
let n = A.load(Ordering::SeqCst);
assert_eq!(A.load(Ordering::SeqCst), n);
drop(foo);
assert_eq!(A.load(Ordering::SeqCst), n + 1);
}
|
{
let b = B;
let mut foo = || {
yield;
drop(b);
};
let n = A.load(Ordering::SeqCst);
drop(unsafe { foo.resume() });
assert_eq!(A.load(Ordering::SeqCst), n);
drop(foo);
assert_eq!(A.load(Ordering::SeqCst), n + 1);
}
|
identifier_body
|
local_data.rs
|
r a derived/new value)
// back with replace(v). We take special care to reuse the allocation in this
// case for performance reasons.
//
// However, that does mean that if a value is replaced with None, the
// allocation will stay alive and the entry will stay in the TLD map until the
// task deallocates. This makes the assumption that every key inserted into a
// given task's TLD is going to be present for a majority of the rest of the
// task's lifetime, but that's a fairly safe assumption, and there's very
// little downside as long as it holds true for most keys.
//
// The Map type must be public in order to allow rustrt to see it.
//
// We'd like to use HashMap here, but it uses TLD in its construction (it uses
// the task-local rng). We could try to provide our own source of randomness,
// except it also lives in libstd (which is a client of us) so we can't even
// reference it. Instead, use TreeMap, which provides reasonable performance.
#[doc(hidden)]
pub type Map = TreeMap<uint, TLDValue>;
#[unsafe_no_drop_flag]
struct TLDValue {
// box_ptr is a pointer to TLDValueBox<T>. It can never be null.
box_ptr: *mut (),
// drop_fn is the function that knows how to drop the box_ptr.
drop_fn: unsafe fn(p: *mut ())
}
struct TLDValueBox<T> {
// value is only initialized when refcount >= 1.
value: T,
// refcount of 0 means uninitialized value, 1 means initialized, 2+ means
// borrowed.
// NB: we use UnsafeCell instead of Cell because Ref should be allowed to
// be Sync. The only mutation occurs when a Ref is created or destroyed,
// so there's no issue with &Ref being thread-safe.
refcount: UnsafeCell<uint>
}
// Gets the map from the runtime. Lazily initialises if not done so already.
unsafe fn get_local_map<'a>() -> Option<&'a mut Map> {
if !Local::exists(None::<Task>) { return None }
let task: *mut Task = Local::unsafe_borrow();
match &mut (*task).storage {
// If the at_exit function is already set, then we just need to take
// a loan out on the TLD map stored inside
&LocalStorage(Some(ref mut map_ptr)) => {
return Some(map_ptr);
}
// If this is the first time we've accessed TLD, perform similar
// actions to the oldsched way of doing things.
&LocalStorage(ref mut slot) => {
*slot = Some(TreeMap::new());
match *slot {
Some(ref mut map_ptr) => { return Some(map_ptr) }
None => panic!("unreachable code"),
}
}
}
}
/// A RAII immutable reference to a task-local value.
///
/// The task-local data can be accessed through this value, and when this
/// structure is dropped it will return the borrow on the data.
pub struct Ref<T:'static> {
// FIXME #12808: strange names to try to avoid interfering with
// field accesses of the contained type via Deref
_inner: &'static TLDValueBox<T>,
_marker: marker::NoSend
}
fn key_to_key_value<T:'static>(key: Key<T>) -> uint {
key as *const _ as uint
}
impl<T:'static> KeyValue<T> {
/// Replaces a value in task local data.
///
/// If this key is already present in TLD, then the previous value is
/// replaced with the provided data, and then returned.
///
/// # Panics
///
/// This function will panic if the key is present in TLD and currently on
/// loan with the `get` method.
///
/// It will also panic if there is no local task (because the current thread
/// is not owned by the runtime).
///
/// # Example
///
/// ```
/// local_data_key!(foo: int)
///
/// assert_eq!(foo.replace(Some(10)), None);
/// assert_eq!(foo.replace(Some(4)), Some(10));
/// assert_eq!(foo.replace(None), Some(4));
/// ```
pub fn replace(&'static self, data: Option<T>) -> Option<T> {
let map = match unsafe { get_local_map() } {
Some(map) => map,
None => panic!("must have a local task to insert into TLD"),
};
let keyval = key_to_key_value(self);
// The following match takes a mutable borrow on the map. In order to insert
// our data if the key isn't present, we need to let the match end first.
let data = match (map.get_mut(&keyval), data) {
(None, Some(data)) => {
// The key doesn't exist and we need to insert it. To make borrowck
// happy, return it up a scope and insert it there.
data
}
(None, None) => {
// The key doesn't exist and we're trying to replace it with nothing.
// Do nothing.
return None
}
(Some(slot), data) => {
// We have a slot with a box.
let value_box = slot.box_ptr as *mut TLDValueBox<T>;
let refcount = unsafe { *(*value_box).refcount.get() };
return match (refcount, data) {
(0, None) => {
// The current value is uninitialized and we have no new value.
// Do nothing.
None
}
(0, Some(new_value)) => {
// The current value is uninitialized and we're storing a new value.
unsafe {
ptr::write(&mut (*value_box).value, new_value);
*(*value_box).refcount.get() = 1;
None
}
}
(1, None) => {
// We have an initialized value and we're removing it.
unsafe {
let ret = ptr::read(&(*value_box).value);
*(*value_box).refcount.get() = 0;
Some(ret)
}
}
(1, Some(new_value)) => {
// We have an initialized value and we're replacing it.
let value_ref = unsafe { &mut (*value_box).value };
let ret = mem::replace(value_ref, new_value);
// Refcount is already 1, leave it as that.
Some(ret)
}
_ => {
// Refcount is 2+, which means we have a live borrow.
panic!("TLD value cannot be replaced because it is already borrowed");
}
}
}
};
// If we've reached this point, we need to insert into the map.
map.insert(keyval, TLDValue::new(data));
None
}
/// Borrows a value from TLD.
///
/// If `None` is returned, then this key is not present in TLD. If `Some`
/// is returned, then the returned data is a smart pointer representing a
/// new loan on this TLD key. While on loan, this key cannot be altered via
/// the `replace` method.
///
/// # Example
///
/// ```
/// local_data_key!(key: int)
///
/// assert!(key.get().is_none());
///
/// key.replace(Some(3));
/// assert_eq!(*key.get().unwrap(), 3);
/// ```
pub fn get(&'static self) -> Option<Ref<T>> {
let map = match unsafe { get_local_map() } {
Some(map) => map,
None => return None,
};
let keyval = key_to_key_value(self);
match map.get(&keyval) {
Some(slot) => {
let value_box = slot.box_ptr as *mut TLDValueBox<T>;
if unsafe { *(*value_box).refcount.get() } >= 1 {
unsafe {
*(*value_box).refcount.get() += 1;
Some(Ref {
_inner: &*value_box,
_marker: marker::NoSend
})
}
} else {
None
}
}
None => None
}
}
// it's not clear if this is the right design for a public API, or if
// there's even a need for this as a public API, but our benchmarks need
// this to ensure consistent behavior on each run.
#[cfg(test)]
fn clear(&'static self) {
let map = match unsafe { get_local_map() } {
Some(map) => map,
None => return
};
let keyval = key_to_key_value(self);
self.replace(None); // ensure we have no outstanding borrows
map.remove(&keyval);
}
}
impl<T:'static> Deref<T> for Ref<T> {
#[inline(always)]
fn deref<'a>(&'a self) -> &'a T {
&self._inner.value
}
}
impl<T:'static + fmt::Show> fmt::Show for Ref<T> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
(**self).fmt(f)
}
}
impl<T: cmp::PartialEq +'static> cmp::PartialEq for Ref<T> {
fn eq(&self, other: &Ref<T>) -> bool {
(**self).eq(&**other)
}
fn ne(&self, other: &Ref<T>) -> bool {
(**self).ne(&**other)
}
}
impl<T: cmp::Eq +'static> cmp::Eq for Ref<T> {}
impl<T: cmp::PartialOrd +'static> cmp::PartialOrd for Ref<T> {
fn partial_cmp(&self, other: &Ref<T>) -> Option<cmp::Ordering> {
(**self).partial_cmp(&**other)
}
fn lt(&self, other: &Ref<T>) -> bool { (**self).lt(&**other) }
fn le(&self, other: &Ref<T>) -> bool { (**self).le(&**other) }
fn gt(&self, other: &Ref<T>) -> bool { (**self).gt(&**other) }
fn ge(&self, other: &Ref<T>) -> bool { (**self).ge(&**other) }
}
impl<T: cmp::Ord +'static> cmp::Ord for Ref<T> {
fn cmp(&self, other: &Ref<T>) -> cmp::Ordering {
(**self).cmp(&**other)
}
}
#[unsafe_destructor]
impl<T:'static> Drop for Ref<T> {
fn drop(&mut self) {
unsafe {
*self._inner.refcount.get() -= 1;
}
}
}
impl TLDValue {
fn new<T>(value: T) -> TLDValue {
let box_ptr = unsafe {
let allocation = heap::allocate(mem::size_of::<TLDValueBox<T>>(),
mem::min_align_of::<TLDValueBox<T>>());
if allocation.is_null() { ::alloc::oom() }
let value_box = allocation as *mut TLDValueBox<T>;
ptr::write(value_box, TLDValueBox {
value: value,
refcount: UnsafeCell::new(1)
});
value_box as *mut ()
};
// Destruction of TLDValue needs to know how to properly deallocate the TLDValueBox,
// so we need our own custom destructor function.
unsafe fn d<T>(p: *mut ()) {
let value_box = p as *mut TLDValueBox<T>;
debug_assert!(*(*value_box).refcount.get() < 2, "TLDValue destructed while borrowed");
// use a RAII type here to ensure we always deallocate even if we panic while
// running the destructor for the value.
struct Guard<T> {
p: *mut TLDValueBox<T>
}
#[unsafe_destructor]
impl<T> Drop for Guard<T> {
fn drop(&mut self) {
let size = mem::size_of::<TLDValueBox<T>>();
let align = mem::align_of::<TLDValueBox<T>>();
unsafe { heap::deallocate(self.p as *mut u8, size, align); }
}
}
let _guard = Guard::<T> { p: value_box };
if *(*value_box).refcount.get() != 0 {
// the contained value is valid; drop it
ptr::read(&(*value_box).value);
}
// the box will be deallocated by the guard
}
TLDValue {
box_ptr: box_ptr,
drop_fn: d::<T>
}
}
}
impl Drop for TLDValue {
fn drop(&mut self) {
// box_ptr should always be non-null. Check it anyway just to be thorough
if !self.box_ptr.is_null() {
unsafe { (self.drop_fn)(self.box_ptr) }
}
}
}
#[cfg(test)]
mod tests {
extern crate test;
use std::prelude::*;
use super::*;
use std::task;
#[test]
fn test_tls_multitask() {
static MY_KEY: Key<String> = &KeyValueKey;
MY_KEY.replace(Some("parent data".to_string()));
task::spawn(proc() {
// TLD shouldn't carry over.
assert!(MY_KEY.get().is_none());
MY_KEY.replace(Some("child data".to_string()));
assert!(MY_KEY.get().as_ref().unwrap().as_slice() == "child data");
// should be cleaned up for us
});
// Must work multiple times
assert!(MY_KEY.get().unwrap().as_slice() == "parent data");
assert!(MY_KEY.get().unwrap().as_slice() == "parent data");
assert!(MY_KEY.get().unwrap().as_slice() == "parent data");
}
#[test]
fn test_tls_overwrite() {
static MY_KEY: Key<String> = &KeyValueKey;
MY_KEY.replace(Some("first data".to_string()));
MY_KEY.replace(Some("next data".to_string())); // Shouldn't leak.
assert!(MY_KEY.get().unwrap().as_slice() == "next data");
}
#[test]
fn test_tls_pop() {
static MY_KEY: Key<String> = &KeyValueKey;
MY_KEY.replace(Some("weasel".to_string()));
assert!(MY_KEY.replace(None).unwrap() == "weasel".to_string());
// Pop must remove the data from the map.
assert!(MY_KEY.replace(None).is_none());
}
#[test]
fn test_tls_crust_automorestack_memorial_bug() {
// This might result in a stack-canary clobber if the runtime fails to
// set sp_limit to 0 when calling the cleanup extern - it might
// automatically jump over to the rust stack, which causes next_c_sp
// to get recorded as something within a rust stack segment. Then a
// subsequent upcall (esp. for logging, think vsnprintf) would run on
// a stack smaller than 1 MB.
static MY_KEY: Key<String> = &KeyValueKey;
task::spawn(proc() {
MY_KEY.replace(Some("hax".to_string()));
});
}
#[test]
fn test_tls_multiple_types() {
static STR_KEY: Key<String> = &KeyValueKey;
static BOX_KEY: Key<Box<int>> = &KeyValueKey;
static INT_KEY: Key<int> = &KeyValueKey;
task::spawn(proc() {
STR_KEY.replace(Some("string data".to_string()));
BOX_KEY.replace(Some(box 0));
INT_KEY.replace(Some(42));
});
}
#[test]
fn test_tls_overwrite_multiple_types() {
static STR_KEY: Key<String> = &KeyValueKey;
static BOX_KEY: Key<Box<int>> = &KeyValueKey;
static INT_KEY: Key<int> = &KeyValueKey;
task::spawn(proc() {
STR_KEY.replace(Some("string data".to_string()));
STR_KEY.replace(Some("string data 2".to_string()));
BOX_KEY.replace(Some(box 0));
BOX_KEY.replace(Some(box 1));
INT_KEY.replace(Some(42));
// This could cause a segfault if overwriting-destruction is done
// with the crazy polymorphic transmute rather than the provided
// finaliser.
INT_KEY.replace(Some(31337));
});
}
#[test]
#[should_fail]
fn test_tls_cleanup_on_panic() {
static STR_KEY: Key<String> = &KeyValueKey;
static BOX_KEY: Key<Box<int>> = &KeyValueKey;
static INT_KEY: Key<int> = &KeyValueKey;
STR_KEY.replace(Some("parent data".to_string()));
BOX_KEY.replace(Some(box 0));
task::spawn(proc() {
STR_KEY.replace(Some("string data".to_string()));
BOX_KEY.replace(Some(box 2));
INT_KEY.replace(Some(42));
panic!();
});
// Not quite nondeterministic.
INT_KEY.replace(Some(31337));
panic!();
}
#[test]
fn test_cleanup_drops_values() {
let (tx, rx) = channel::<()>();
struct Dropper {
tx: Sender<()>
};
impl Drop for Dropper {
fn drop(&mut self) {
self.tx.send(());
}
}
static KEY: Key<Dropper> = &KeyValueKey;
let _ = task::try(proc() {
KEY.replace(Some(Dropper{ tx: tx }));
});
// At this point the task has been cleaned up and the TLD dropped.
// If the channel doesn't have a value now, then the Sender was leaked.
assert_eq!(rx.try_recv(), Ok(()));
}
#[test]
fn test_static_pointer() {
static KEY: Key<&'static int> = &KeyValueKey;
static VALUE: int = 0;
KEY.replace(Some(&VALUE));
}
#[test]
fn test_owned() {
static KEY: Key<Box<int>> = &KeyValueKey;
KEY.replace(Some(box 1));
{
let k1 = KEY.get().unwrap();
let k2 = KEY.get().unwrap();
let k3 = KEY.get().unwrap();
assert_eq!(**k1, 1);
assert_eq!(**k2, 1);
assert_eq!(**k3, 1);
}
KEY.replace(Some(box 2));
assert_eq!(**KEY.get().unwrap(), 2);
}
#[test]
fn test_same_key_type() {
static KEY1: Key<int> = &KeyValueKey;
static KEY2: Key<int> = &KeyValueKey;
static KEY3: Key<int> = &KeyValueKey;
static KEY4: Key<int> = &KeyValueKey;
static KEY5: Key<int> = &KeyValueKey;
KEY1.replace(Some(1));
KEY2.replace(Some(2));
KEY3.replace(Some(3));
KEY4.replace(Some(4));
KEY5.replace(Some(5));
assert_eq!(*KEY1.get().unwrap(), 1);
assert_eq!(*KEY2.get().unwrap(), 2);
assert_eq!(*KEY3.get().unwrap(), 3);
assert_eq!(*KEY4.get().unwrap(), 4);
assert_eq!(*KEY5.get().unwrap(), 5);
}
}
{ (**self).lt(&**other) }
|
identifier_body
|
local_data.rs
|
(or a derived/new value)
// back with replace(v). We take special care to reuse the allocation in this
// case for performance reasons.
//
// However, that does mean that if a value is replaced with None, the
// allocation will stay alive and the entry will stay in the TLD map until the
// task deallocates. This makes the assumption that every key inserted into a
// given task's TLD is going to be present for a majority of the rest of the
// task's lifetime, but that's a fairly safe assumption, and there's very
// little downside as long as it holds true for most keys.
//
// The Map type must be public in order to allow rustrt to see it.
//
// We'd like to use HashMap here, but it uses TLD in its construction (it uses
// the task-local rng). We could try to provide our own source of randomness,
// except it also lives in libstd (which is a client of us) so we can't even
// reference it. Instead, use TreeMap, which provides reasonable performance.
#[doc(hidden)]
pub type Map = TreeMap<uint, TLDValue>;
#[unsafe_no_drop_flag]
struct TLDValue {
// box_ptr is a pointer to TLDValueBox<T>. It can never be null.
box_ptr: *mut (),
// drop_fn is the function that knows how to drop the box_ptr.
drop_fn: unsafe fn(p: *mut ())
}
struct TLDValueBox<T> {
// value is only initialized when refcount >= 1.
value: T,
// refcount of 0 means uninitialized value, 1 means initialized, 2+ means
// borrowed.
// NB: we use UnsafeCell instead of Cell because Ref should be allowed to
// be Sync. The only mutation occurs when a Ref is created or destroyed,
// so there's no issue with &Ref being thread-safe.
refcount: UnsafeCell<uint>
}
// Gets the map from the runtime. Lazily initialises if not done so already.
unsafe fn get_local_map<'a>() -> Option<&'a mut Map> {
if!Local::exists(None::<Task>) { return None }
let task: *mut Task = Local::unsafe_borrow();
match &mut (*task).storage {
// If the at_exit function is already set, then we just need to take
// a loan out on the TLD map stored inside
&LocalStorage(Some(ref mut map_ptr)) => {
return Some(map_ptr);
}
// If this is the first time we've accessed TLD, perform similar
// actions to the oldsched way of doing things.
&LocalStorage(ref mut slot) => {
*slot = Some(TreeMap::new());
match *slot {
Some(ref mut map_ptr) => { return Some(map_ptr) }
None => panic!("unreachable code"),
}
}
}
}
/// A RAII immutable reference to a task-local value.
///
/// The task-local data can be accessed through this value, and when this
/// structure is dropped it will return the borrow on the data.
pub struct Ref<T:'static> {
// FIXME #12808: strange names to try to avoid interfering with
// field accesses of the contained type via Deref
_inner: &'static TLDValueBox<T>,
_marker: marker::NoSend
}
fn key_to_key_value<T:'static>(key: Key<T>) -> uint {
key as *const _ as uint
}
impl<T:'static> KeyValue<T> {
/// Replaces a value in task local data.
///
/// If this key is already present in TLD, then the previous value is
/// replaced with the provided data, and then returned.
///
/// # Panics
///
/// This function will panic if the key is present in TLD and currently on
/// loan with the `get` method.
///
/// It will also panic if there is no local task (because the current thread
/// is not owned by the runtime).
///
/// # Example
///
/// ```
/// local_data_key!(foo: int)
///
/// assert_eq!(foo.replace(Some(10)), None);
/// assert_eq!(foo.replace(Some(4)), Some(10));
/// assert_eq!(foo.replace(None), Some(4));
/// ```
pub fn replace(&'static self, data: Option<T>) -> Option<T> {
let map = match unsafe { get_local_map() } {
Some(map) => map,
None => panic!("must have a local task to insert into TLD"),
};
let keyval = key_to_key_value(self);
// The following match takes a mutable borrow on the map. In order to insert
// our data if the key isn't present, we need to let the match end first.
let data = match (map.get_mut(&keyval), data) {
(None, Some(data)) => {
// The key doesn't exist and we need to insert it. To make borrowck
// happy, return it up a scope and insert it there.
data
}
(None, None) => {
// The key doesn't exist and we're trying to replace it with nothing.
// Do nothing.
return None
}
(Some(slot), data) => {
// We have a slot with a box.
let value_box = slot.box_ptr as *mut TLDValueBox<T>;
let refcount = unsafe { *(*value_box).refcount.get() };
return match (refcount, data) {
(0, None) => {
// The current value is uninitialized and we have no new value.
// Do nothing.
None
}
(0, Some(new_value)) => {
// The current value is uninitialized and we're storing a new value.
unsafe {
ptr::write(&mut (*value_box).value, new_value);
*(*value_box).refcount.get() = 1;
None
}
}
(1, None) => {
// We have an initialized value and we're removing it.
unsafe {
let ret = ptr::read(&(*value_box).value);
*(*value_box).refcount.get() = 0;
Some(ret)
}
}
(1, Some(new_value)) => {
// We have an initialized value and we're replacing it.
let value_ref = unsafe { &mut (*value_box).value };
let ret = mem::replace(value_ref, new_value);
// Refcount is already 1, leave it as that.
Some(ret)
}
_ => {
// Refcount is 2+, which means we have a live borrow.
panic!("TLD value cannot be replaced because it is already borrowed");
}
}
}
};
// If we've reached this point, we need to insert into the map.
map.insert(keyval, TLDValue::new(data));
None
}
/// Borrows a value from TLD.
///
/// If `None` is returned, then this key is not present in TLD. If `Some`
/// is returned, then the returned data is a smart pointer representing a
/// new loan on this TLD key. While on loan, this key cannot be altered via
/// the `replace` method.
///
/// # Example
///
/// ```
/// local_data_key!(key: int)
///
/// assert!(key.get().is_none());
///
/// key.replace(Some(3));
/// assert_eq!(*key.get().unwrap(), 3);
/// ```
pub fn get(&'static self) -> Option<Ref<T>> {
let map = match unsafe { get_local_map() } {
Some(map) => map,
None => return None,
};
let keyval = key_to_key_value(self);
match map.get(&keyval) {
Some(slot) => {
let value_box = slot.box_ptr as *mut TLDValueBox<T>;
if unsafe { *(*value_box).refcount.get() } >= 1 {
unsafe {
*(*value_box).refcount.get() += 1;
Some(Ref {
_inner: &*value_box,
_marker: marker::NoSend
})
}
} else {
None
}
}
None => None
}
}
// it's not clear if this is the right design for a public API, or if
// there's even a need for this as a public API, but our benchmarks need
// this to ensure consistent behavior on each run.
#[cfg(test)]
fn clear(&'static self) {
let map = match unsafe { get_local_map() } {
Some(map) => map,
None => return
};
let keyval = key_to_key_value(self);
self.replace(None); // ensure we have no outstanding borrows
map.remove(&keyval);
}
}
impl<T:'static> Deref<T> for Ref<T> {
#[inline(always)]
fn deref<'a>(&'a self) -> &'a T {
&self._inner.value
}
}
impl<T:'static + fmt::Show> fmt::Show for Ref<T> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
(**self).fmt(f)
}
}
impl<T: cmp::PartialEq +'static> cmp::PartialEq for Ref<T> {
fn eq(&self, other: &Ref<T>) -> bool {
(**self).eq(&**other)
}
fn ne(&self, other: &Ref<T>) -> bool {
(**self).ne(&**other)
}
}
impl<T: cmp::Eq +'static> cmp::Eq for Ref<T> {}
impl<T: cmp::PartialOrd +'static> cmp::PartialOrd for Ref<T> {
fn partial_cmp(&self, other: &Ref<T>) -> Option<cmp::Ordering> {
(**self).partial_cmp(&**other)
}
fn lt(&self, other: &Ref<T>) -> bool { (**self).lt(&**other) }
fn le(&self, other: &Ref<T>) -> bool { (**self).le(&**other) }
fn gt(&self, other: &Ref<T>) -> bool { (**self).gt(&**other) }
fn ge(&self, other: &Ref<T>) -> bool { (**self).ge(&**other) }
}
impl<T: cmp::Ord +'static> cmp::Ord for Ref<T> {
fn cmp(&self, other: &Ref<T>) -> cmp::Ordering {
(**self).cmp(&**other)
}
}
#[unsafe_destructor]
impl<T:'static> Drop for Ref<T> {
fn drop(&mut self) {
unsafe {
*self._inner.refcount.get() -= 1;
}
}
}
impl TLDValue {
fn new<T>(value: T) -> TLDValue {
let box_ptr = unsafe {
let allocation = heap::allocate(mem::size_of::<TLDValueBox<T>>(),
mem::min_align_of::<TLDValueBox<T>>());
if allocation.is_null() { ::alloc::oom() }
let value_box = allocation as *mut TLDValueBox<T>;
ptr::write(value_box, TLDValueBox {
value: value,
refcount: UnsafeCell::new(1)
});
value_box as *mut ()
};
// Destruction of TLDValue needs to know how to properly deallocate the TLDValueBox,
// so we need our own custom destructor function.
unsafe fn d<T>(p: *mut ()) {
let value_box = p as *mut TLDValueBox<T>;
debug_assert!(*(*value_box).refcount.get() < 2, "TLDValue destructed while borrowed");
// use a RAII type here to ensure we always deallocate even if we panic while
// running the destructor for the value.
struct Guard<T> {
p: *mut TLDValueBox<T>
}
#[unsafe_destructor]
impl<T> Drop for Guard<T> {
fn drop(&mut self) {
let size = mem::size_of::<TLDValueBox<T>>();
let align = mem::align_of::<TLDValueBox<T>>();
unsafe { heap::deallocate(self.p as *mut u8, size, align); }
}
}
let _guard = Guard::<T> { p: value_box };
if *(*value_box).refcount.get()!= 0 {
|
TLDValue {
box_ptr: box_ptr,
drop_fn: d::<T>
}
}
}
impl Drop for TLDValue {
fn drop(&mut self) {
// box_ptr should always be non-null. Check it anyway just to be thorough
if!self.box_ptr.is_null() {
unsafe { (self.drop_fn)(self.box_ptr) }
}
}
}
#[cfg(test)]
mod tests {
extern crate test;
use std::prelude::*;
use super::*;
use std::task;
#[test]
fn test_tls_multitask() {
static MY_KEY: Key<String> = &KeyValueKey;
MY_KEY.replace(Some("parent data".to_string()));
task::spawn(proc() {
// TLD shouldn't carry over.
assert!(MY_KEY.get().is_none());
MY_KEY.replace(Some("child data".to_string()));
assert!(MY_KEY.get().as_ref().unwrap().as_slice() == "child data");
// should be cleaned up for us
});
// Must work multiple times
assert!(MY_KEY.get().unwrap().as_slice() == "parent data");
assert!(MY_KEY.get().unwrap().as_slice() == "parent data");
assert!(MY_KEY.get().unwrap().as_slice() == "parent data");
}
#[test]
fn test_tls_overwrite() {
static MY_KEY: Key<String> = &KeyValueKey;
MY_KEY.replace(Some("first data".to_string()));
MY_KEY.replace(Some("next data".to_string())); // Shouldn't leak.
assert!(MY_KEY.get().unwrap().as_slice() == "next data");
}
#[test]
fn test_tls_pop() {
static MY_KEY: Key<String> = &KeyValueKey;
MY_KEY.replace(Some("weasel".to_string()));
assert!(MY_KEY.replace(None).unwrap() == "weasel".to_string());
// Pop must remove the data from the map.
assert!(MY_KEY.replace(None).is_none());
}
#[test]
fn test_tls_crust_automorestack_memorial_bug() {
// This might result in a stack-canary clobber if the runtime fails to
// set sp_limit to 0 when calling the cleanup extern - it might
// automatically jump over to the rust stack, which causes next_c_sp
// to get recorded as something within a rust stack segment. Then a
// subsequent upcall (esp. for logging, think vsnprintf) would run on
// a stack smaller than 1 MB.
static MY_KEY: Key<String> = &KeyValueKey;
task::spawn(proc() {
MY_KEY.replace(Some("hax".to_string()));
});
}
#[test]
fn test_tls_multiple_types() {
static STR_KEY: Key<String> = &KeyValueKey;
static BOX_KEY: Key<Box<int>> = &KeyValueKey;
static INT_KEY: Key<int> = &KeyValueKey;
task::spawn(proc() {
STR_KEY.replace(Some("string data".to_string()));
BOX_KEY.replace(Some(box 0));
INT_KEY.replace(Some(42));
});
}
#[test]
fn test_tls_overwrite_multiple_types() {
static STR_KEY: Key<String> = &KeyValueKey;
static BOX_KEY: Key<Box<int>> = &KeyValueKey;
static INT_KEY: Key<int> = &KeyValueKey;
task::spawn(proc() {
STR_KEY.replace(Some("string data".to_string()));
STR_KEY.replace(Some("string data 2".to_string()));
BOX_KEY.replace(Some(box 0));
BOX_KEY.replace(Some(box 1));
INT_KEY.replace(Some(42));
// This could cause a segfault if overwriting-destruction is done
// with the crazy polymorphic transmute rather than the provided
// finaliser.
INT_KEY.replace(Some(31337));
});
}
#[test]
#[should_fail]
fn test_tls_cleanup_on_panic() {
static STR_KEY: Key<String> = &KeyValueKey;
static BOX_KEY: Key<Box<int>> = &KeyValueKey;
static INT_KEY: Key<int> = &KeyValueKey;
STR_KEY.replace(Some("parent data".to_string()));
BOX_KEY.replace(Some(box 0));
task::spawn(proc() {
STR_KEY.replace(Some("string data".to_string()));
BOX_KEY.replace(Some(box 2));
INT_KEY.replace(Some(42));
panic!();
});
// Not quite nondeterministic.
INT_KEY.replace(Some(31337));
panic!();
}
#[test]
fn test_cleanup_drops_values() {
let (tx, rx) = channel::<()>();
struct Dropper {
tx: Sender<()>
};
impl Drop for Dropper {
fn drop(&mut self) {
self.tx.send(());
}
}
static KEY: Key<Dropper> = &KeyValueKey;
let _ = task::try(proc() {
KEY.replace(Some(Dropper{ tx: tx }));
});
// At this point the task has been cleaned up and the TLD dropped.
// If the channel doesn't have a value now, then the Sender was leaked.
assert_eq!(rx.try_recv(), Ok(()));
}
#[test]
fn test_static_pointer() {
static KEY: Key<&'static int> = &KeyValueKey;
static VALUE: int = 0;
KEY.replace(Some(&VALUE));
}
#[test]
fn test_owned() {
static KEY: Key<Box<int>> = &KeyValueKey;
KEY.replace(Some(box 1));
{
let k1 = KEY.get().unwrap();
let k2 = KEY.get().unwrap();
let k3 = KEY.get().unwrap();
assert_eq!(**k1, 1);
assert_eq!(**k2, 1);
assert_eq!(**k3, 1);
}
KEY.replace(Some(box 2));
assert_eq!(**KEY.get().unwrap(), 2);
}
#[test]
fn test_same_key_type() {
static KEY1: Key<int> = &KeyValueKey;
static KEY2: Key<int> = &KeyValueKey;
static KEY3: Key<int> = &KeyValueKey;
static KEY4: Key<int> = &KeyValueKey;
static KEY5: Key<int> = &KeyValueKey;
KEY1.replace(Some(1));
KEY2.replace(Some(2));
KEY3.replace(Some(3));
KEY4.replace(Some(4));
KEY5.replace(Some(5));
assert_eq!(*KEY1.get().unwrap(), 1);
assert
|
// the contained value is valid; drop it
ptr::read(&(*value_box).value);
}
// the box will be deallocated by the guard
}
|
random_line_split
|
local_data.rs
|
or a derived/new value)
// back with replace(v). We take special care to reuse the allocation in this
// case for performance reasons.
//
// However, that does mean that if a value is replaced with None, the
// allocation will stay alive and the entry will stay in the TLD map until the
// task deallocates. This makes the assumption that every key inserted into a
// given task's TLD is going to be present for a majority of the rest of the
// task's lifetime, but that's a fairly safe assumption, and there's very
// little downside as long as it holds true for most keys.
//
// The Map type must be public in order to allow rustrt to see it.
//
// We'd like to use HashMap here, but it uses TLD in its construction (it uses
// the task-local rng). We could try to provide our own source of randomness,
// except it also lives in libstd (which is a client of us) so we can't even
// reference it. Instead, use TreeMap, which provides reasonable performance.
#[doc(hidden)]
pub type Map = TreeMap<uint, TLDValue>;
#[unsafe_no_drop_flag]
struct TLDValue {
// box_ptr is a pointer to TLDValueBox<T>. It can never be null.
box_ptr: *mut (),
// drop_fn is the function that knows how to drop the box_ptr.
drop_fn: unsafe fn(p: *mut ())
}
struct TLDValueBox<T> {
// value is only initialized when refcount >= 1.
value: T,
// refcount of 0 means uninitialized value, 1 means initialized, 2+ means
// borrowed.
// NB: we use UnsafeCell instead of Cell because Ref should be allowed to
// be Sync. The only mutation occurs when a Ref is created or destroyed,
// so there's no issue with &Ref being thread-safe.
refcount: UnsafeCell<uint>
}
// Gets the map from the runtime. Lazily initialises if not done so already.
unsafe fn get_local_map<'a>() -> Option<&'a mut Map> {
if!Local::exists(None::<Task>) { return None }
let task: *mut Task = Local::unsafe_borrow();
match &mut (*task).storage {
// If the at_exit function is already set, then we just need to take
// a loan out on the TLD map stored inside
&LocalStorage(Some(ref mut map_ptr)) => {
return Some(map_ptr);
}
// If this is the first time we've accessed TLD, perform similar
// actions to the oldsched way of doing things.
&LocalStorage(ref mut slot) => {
*slot = Some(TreeMap::new());
match *slot {
Some(ref mut map_ptr) => { return Some(map_ptr) }
None => panic!("unreachable code"),
}
}
}
}
/// A RAII immutable reference to a task-local value.
///
/// The task-local data can be accessed through this value, and when this
/// structure is dropped it will return the borrow on the data.
pub struct Ref<T:'static> {
// FIXME #12808: strange names to try to avoid interfering with
// field accesses of the contained type via Deref
_inner: &'static TLDValueBox<T>,
_marker: marker::NoSend
}
fn key_to_key_value<T:'static>(key: Key<T>) -> uint {
key as *const _ as uint
}
impl<T:'static> KeyValue<T> {
/// Replaces a value in task local data.
///
/// If this key is already present in TLD, then the previous value is
/// replaced with the provided data, and then returned.
///
/// # Panics
///
/// This function will panic if the key is present in TLD and currently on
/// loan with the `get` method.
///
/// It will also panic if there is no local task (because the current thread
/// is not owned by the runtime).
///
/// # Example
///
/// ```
/// local_data_key!(foo: int)
///
/// assert_eq!(foo.replace(Some(10)), None);
/// assert_eq!(foo.replace(Some(4)), Some(10));
/// assert_eq!(foo.replace(None), Some(4));
/// ```
pub fn replace(&'static self, data: Option<T>) -> Option<T> {
let map = match unsafe { get_local_map() } {
Some(map) => map,
None => panic!("must have a local task to insert into TLD"),
};
let keyval = key_to_key_value(self);
// The following match takes a mutable borrow on the map. In order to insert
// our data if the key isn't present, we need to let the match end first.
let data = match (map.get_mut(&keyval), data) {
(None, Some(data)) => {
// The key doesn't exist and we need to insert it. To make borrowck
// happy, return it up a scope and insert it there.
data
}
(None, None) => {
// The key doesn't exist and we're trying to replace it with nothing.
// Do nothing.
return None
}
(Some(slot), data) => {
// We have a slot with a box.
let value_box = slot.box_ptr as *mut TLDValueBox<T>;
let refcount = unsafe { *(*value_box).refcount.get() };
return match (refcount, data) {
(0, None) => {
// The current value is uninitialized and we have no new value.
// Do nothing.
None
}
(0, Some(new_value)) => {
// The current value is uninitialized and we're storing a new value.
unsafe {
ptr::write(&mut (*value_box).value, new_value);
*(*value_box).refcount.get() = 1;
None
}
}
(1, None) => {
// We have an initialized value and we're removing it.
unsafe {
let ret = ptr::read(&(*value_box).value);
*(*value_box).refcount.get() = 0;
Some(ret)
}
}
(1, Some(new_value)) => {
// We have an initialized value and we're replacing it.
let value_ref = unsafe { &mut (*value_box).value };
let ret = mem::replace(value_ref, new_value);
// Refcount is already 1, leave it as that.
Some(ret)
}
_ => {
// Refcount is 2+, which means we have a live borrow.
panic!("TLD value cannot be replaced because it is already borrowed");
}
}
}
};
// If we've reached this point, we need to insert into the map.
map.insert(keyval, TLDValue::new(data));
None
}
/// Borrows a value from TLD.
///
/// If `None` is returned, then this key is not present in TLD. If `Some`
/// is returned, then the returned data is a smart pointer representing a
/// new loan on this TLD key. While on loan, this key cannot be altered via
/// the `replace` method.
///
/// # Example
///
/// ```
/// local_data_key!(key: int)
///
/// assert!(key.get().is_none());
///
/// key.replace(Some(3));
/// assert_eq!(*key.get().unwrap(), 3);
/// ```
pub fn get(&'static self) -> Option<Ref<T>> {
let map = match unsafe { get_local_map() } {
Some(map) => map,
None => return None,
};
let keyval = key_to_key_value(self);
match map.get(&keyval) {
Some(slot) => {
let value_box = slot.box_ptr as *mut TLDValueBox<T>;
if unsafe { *(*value_box).refcount.get() } >= 1 {
unsafe {
*(*value_box).refcount.get() += 1;
Some(Ref {
_inner: &*value_box,
_marker: marker::NoSend
})
}
} else {
None
}
}
None => None
}
}
// it's not clear if this is the right design for a public API, or if
// there's even a need for this as a public API, but our benchmarks need
// this to ensure consistent behavior on each run.
#[cfg(test)]
fn clear(&'static self) {
let map = match unsafe { get_local_map() } {
Some(map) => map,
None => return
};
let keyval = key_to_key_value(self);
self.replace(None); // ensure we have no outstanding borrows
map.remove(&keyval);
}
}
impl<T:'static> Deref<T> for Ref<T> {
#[inline(always)]
fn deref<'a>(&'a self) -> &'a T {
&self._inner.value
}
}
impl<T:'static + fmt::Show> fmt::Show for Ref<T> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
(**self).fmt(f)
}
}
impl<T: cmp::PartialEq +'static> cmp::PartialEq for Ref<T> {
fn eq(&self, other: &Ref<T>) -> bool {
(**self).eq(&**other)
}
fn ne(&self, other: &Ref<T>) -> bool {
(**self).ne(&**other)
}
}
impl<T: cmp::Eq +'static> cmp::Eq for Ref<T> {}
impl<T: cmp::PartialOrd +'static> cmp::PartialOrd for Ref<T> {
fn partial_cmp(&self, other: &Ref<T>) -> Option<cmp::Ordering> {
(**self).partial_cmp(&**other)
}
fn lt(&self, other: &Ref<T>) -> bool { (**self).lt(&**other) }
fn le(&self, other: &Ref<T>) -> bool { (**self).le(&**other) }
fn gt(&self, other: &Ref<T>) -> bool { (**self).gt(&**other) }
fn ge(&self, other: &Ref<T>) -> bool { (**self).ge(&**other) }
}
impl<T: cmp::Ord +'static> cmp::Ord for Ref<T> {
fn cmp(&self, other: &Ref<T>) -> cmp::Ordering {
(**self).cmp(&**other)
}
}
#[unsafe_destructor]
impl<T:'static> Drop for Ref<T> {
fn drop(&mut self) {
unsafe {
*self._inner.refcount.get() -= 1;
}
}
}
impl TLDValue {
fn new<T>(value: T) -> TLDValue {
let box_ptr = unsafe {
let allocation = heap::allocate(mem::size_of::<TLDValueBox<T>>(),
mem::min_align_of::<TLDValueBox<T>>());
if allocation.is_null() { ::alloc::oom() }
let value_box = allocation as *mut TLDValueBox<T>;
ptr::write(value_box, TLDValueBox {
value: value,
refcount: UnsafeCell::new(1)
});
value_box as *mut ()
};
// Destruction of TLDValue needs to know how to properly deallocate the TLDValueBox,
// so we need our own custom destructor function.
unsafe fn d<T>(p: *mut ()) {
let value_box = p as *mut TLDValueBox<T>;
debug_assert!(*(*value_box).refcount.get() < 2, "TLDValue destructed while borrowed");
// use a RAII type here to ensure we always deallocate even if we panic while
// running the destructor for the value.
struct Guard<T> {
p: *mut TLDValueBox<T>
}
#[unsafe_destructor]
impl<T> Drop for Guard<T> {
fn drop(&mut self) {
let size = mem::size_of::<TLDValueBox<T>>();
let align = mem::align_of::<TLDValueBox<T>>();
unsafe { heap::deallocate(self.p as *mut u8, size, align); }
}
}
let _guard = Guard::<T> { p: value_box };
if *(*value_box).refcount.get() != 0 {
// the contained value is valid; drop it
ptr::read(&(*value_box).value);
}
// the box will be deallocated by the guard
}
TLDValue {
box_ptr: box_ptr,
drop_fn: d::<T>
}
}
}
impl Drop for TLDValue {
fn drop(&mut self) {
// box_ptr should always be non-null. Check it anyway just to be thorough
if !self.box_ptr.is_null() {
unsafe { (self.drop_fn)(self.box_ptr) }
}
}
}
#[cfg(test)]
mod tests {
extern crate test;
use std::prelude::*;
use super::*;
use std::task;
#[test]
fn test_tls_multitask() {
static MY_KEY: Key<String> = &KeyValueKey;
MY_KEY.replace(Some("parent data".to_string()));
task::spawn(proc() {
// TLD shouldn't carry over.
assert!(MY_KEY.get().is_none());
MY_KEY.replace(Some("child data".to_string()));
assert!(MY_KEY.get().as_ref().unwrap().as_slice() == "child data");
// should be cleaned up for us
});
// Must work multiple times
assert!(MY_KEY.get().unwrap().as_slice() == "parent data");
assert!(MY_KEY.get().unwrap().as_slice() == "parent data");
assert!(MY_KEY.get().unwrap().as_slice() == "parent data");
}
#[test]
fn
|
() {
static MY_KEY: Key<String> = &KeyValueKey;
MY_KEY.replace(Some("first data".to_string()));
MY_KEY.replace(Some("next data".to_string())); // Shouldn't leak.
assert!(MY_KEY.get().unwrap().as_slice() == "next data");
}
#[test]
fn test_tls_pop() {
static MY_KEY: Key<String> = &KeyValueKey;
MY_KEY.replace(Some("weasel".to_string()));
assert!(MY_KEY.replace(None).unwrap() == "weasel".to_string());
// Pop must remove the data from the map.
assert!(MY_KEY.replace(None).is_none());
}
#[test]
fn test_tls_crust_automorestack_memorial_bug() {
// This might result in a stack-canary clobber if the runtime fails to
// set sp_limit to 0 when calling the cleanup extern - it might
// automatically jump over to the rust stack, which causes next_c_sp
// to get recorded as something within a rust stack segment. Then a
// subsequent upcall (esp. for logging, think vsnprintf) would run on
// a stack smaller than 1 MB.
static MY_KEY: Key<String> = &KeyValueKey;
task::spawn(proc() {
MY_KEY.replace(Some("hax".to_string()));
});
}
#[test]
fn test_tls_multiple_types() {
static STR_KEY: Key<String> = &KeyValueKey;
static BOX_KEY: Key<Box<int>> = &KeyValueKey;
static INT_KEY: Key<int> = &KeyValueKey;
task::spawn(proc() {
STR_KEY.replace(Some("string data".to_string()));
BOX_KEY.replace(Some(box 0));
INT_KEY.replace(Some(42));
});
}
#[test]
fn test_tls_overwrite_multiple_types() {
static STR_KEY: Key<String> = &KeyValueKey;
static BOX_KEY: Key<Box<int>> = &KeyValueKey;
static INT_KEY: Key<int> = &KeyValueKey;
task::spawn(proc() {
STR_KEY.replace(Some("string data".to_string()));
STR_KEY.replace(Some("string data 2".to_string()));
BOX_KEY.replace(Some(box 0));
BOX_KEY.replace(Some(box 1));
INT_KEY.replace(Some(42));
// This could cause a segfault if overwriting-destruction is done
// with the crazy polymorphic transmute rather than the provided
// finaliser.
INT_KEY.replace(Some(31337));
});
}
#[test]
#[should_fail]
fn test_tls_cleanup_on_panic() {
static STR_KEY: Key<String> = &KeyValueKey;
static BOX_KEY: Key<Box<int>> = &KeyValueKey;
static INT_KEY: Key<int> = &KeyValueKey;
STR_KEY.replace(Some("parent data".to_string()));
BOX_KEY.replace(Some(box 0));
task::spawn(proc() {
STR_KEY.replace(Some("string data".to_string()));
BOX_KEY.replace(Some(box 2));
INT_KEY.replace(Some(42));
panic!();
});
// Not quite nondeterministic.
INT_KEY.replace(Some(31337));
panic!();
}
#[test]
fn test_cleanup_drops_values() {
let (tx, rx) = channel::<()>();
struct Dropper {
tx: Sender<()>
}
impl Drop for Dropper {
fn drop(&mut self) {
self.tx.send(());
}
}
static KEY: Key<Dropper> = &KeyValueKey;
let _ = task::try(proc() {
KEY.replace(Some(Dropper{ tx: tx }));
});
// At this point the task has been cleaned up and the TLD dropped.
// If the channel doesn't have a value now, then the Sender was leaked.
assert_eq!(rx.try_recv(), Ok(()));
}
#[test]
fn test_static_pointer() {
static KEY: Key<&'static int> = &KeyValueKey;
static VALUE: int = 0;
KEY.replace(Some(&VALUE));
}
#[test]
fn test_owned() {
static KEY: Key<Box<int>> = &KeyValueKey;
KEY.replace(Some(box 1));
{
let k1 = KEY.get().unwrap();
let k2 = KEY.get().unwrap();
let k3 = KEY.get().unwrap();
assert_eq!(**k1, 1);
assert_eq!(**k2, 1);
assert_eq!(**k3, 1);
}
KEY.replace(Some(box 2));
assert_eq!(**KEY.get().unwrap(), 2);
}
#[test]
fn test_same_key_type() {
static KEY1: Key<int> = &KeyValueKey;
static KEY2: Key<int> = &KeyValueKey;
static KEY3: Key<int> = &KeyValueKey;
static KEY4: Key<int> = &KeyValueKey;
static KEY5: Key<int> = &KeyValueKey;
KEY1.replace(Some(1));
KEY2.replace(Some(2));
KEY3.replace(Some(3));
KEY4.replace(Some(4));
KEY5.replace(Some(5));
assert_eq!(*KEY1.get().unwrap(), 1);
|
test_tls_overwrite
|
identifier_name
|
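The `get`/`Drop` pair in the `Ref` code above implements a hand-rolled reference count over a raw `TLDValueBox` pointer: a count of at least 1 means the value is live, each borrow bumps the count, and each `Drop` decrements it. The same protocol can be sketched safely in modern Rust with `Cell`; `Slot` and `SlotRef` here are hypothetical names for illustration, not part of the code above.

```rust
use std::cell::Cell;
use std::ops::Deref;

// A slot holds a value plus a manual refcount; count >= 1 means "initialized",
// mirroring TLDValueBox's convention.
struct Slot<T> {
    value: T,
    refcount: Cell<u32>,
}

// A borrow handle that bumps the count on creation and decrements it on Drop,
// like the Ref::get()/Drop pair above.
struct SlotRef<'a, T> {
    slot: &'a Slot<T>,
}

impl<T> Slot<T> {
    fn new(value: T) -> Slot<T> {
        Slot { value, refcount: Cell::new(1) }
    }

    fn get(&self) -> Option<SlotRef<'_, T>> {
        if self.refcount.get() >= 1 {
            self.refcount.set(self.refcount.get() + 1);
            Some(SlotRef { slot: self })
        } else {
            None // the value was already torn down
        }
    }
}

impl<T> Deref for SlotRef<'_, T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.slot.value
    }
}

impl<T> Drop for SlotRef<'_, T> {
    fn drop(&mut self) {
        self.slot.refcount.set(self.slot.refcount.get() - 1);
    }
}

fn main() {
    let slot = Slot::new("parent data".to_string());
    {
        let r1 = slot.get().unwrap();
        let r2 = slot.get().unwrap();
        assert_eq!(&*r1, "parent data");
        assert_eq!(&*r2, "parent data");
        assert_eq!(slot.refcount.get(), 3); // base 1 + two live borrows
    }
    assert_eq!(slot.refcount.get(), 1); // borrows released on drop
    println!("ok");
}
```

The original needs raw pointers and `#[unsafe_destructor]` because the slot lives in a type-erased task-local map; with a concrete `T`, `Cell` expresses the same counting without `unsafe`.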
failure.rs
|
// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Failure support for libcore
//!
//! The core library cannot define failure, but it does *declare* failure. This
//! means that the functions inside of libcore are allowed to fail, but to be
//! useful an upstream crate must define failure for libcore to use. The current
//! interface for failure is:
//!
//! ```ignore
//! fn begin_unwind(fmt: &fmt::Arguments, &(&'static str, uint)) -> !;
//! ```
//!
//! This definition allows for failing with any general message, but it does not
//! allow for failing with a `~Any` value. The reason for this is that libcore
//! is not allowed to allocate.
//!
//! This module contains a few other failure functions, but these are just the
//! necessary lang items for the compiler. All failure is funneled through this
//! one function. Currently, the actual symbol is declared in the standard
//! library, but the location of this may change over time.
#![allow(dead_code, missing_doc)]
use fmt;
use intrinsics;
#[cold] #[inline(never)] // this is the slow path, always
#[lang="fail_"]
fn fail_(expr_file_line: &(&'static str, &'static str, uint)) -> ! {
let (expr, file, line) = *expr_file_line;
let ref file_line = (file, line);
format_args!(|args| -> () {
begin_unwind(args, file_line);
}, "{}", expr);
unsafe { intrinsics::abort() }
}
#[cold] #[inline(never)]
#[lang="fail_bounds_check"]
fn fail_bounds_check(file_line: &(&'static str, uint),
index: uint, len: uint) -> ! {
format_args!(|args| -> () {
begin_unwind(args, file_line);
}, "index out of bounds: the len is {} but the index is {}", len, index);
unsafe { intrinsics::abort() }
}
#[cold] #[inline(never)]
pub fn begin_unwind(fmt: &fmt::Arguments, file_line: &(&'static str, uint)) -> ! {
|
line: uint) -> !;
}
let (file, line) = *file_line;
unsafe { begin_unwind(fmt, file, line) }
}
|
#[allow(ctypes)]
extern {
#[lang = "begin_unwind"]
fn begin_unwind(fmt: &fmt::Arguments, file: &'static str,
|
random_line_split
|
failure.rs
|
// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Failure support for libcore
//!
//! The core library cannot define failure, but it does *declare* failure. This
//! means that the functions inside of libcore are allowed to fail, but to be
//! useful an upstream crate must define failure for libcore to use. The current
//! interface for failure is:
//!
//! ```ignore
//! fn begin_unwind(fmt: &fmt::Arguments, &(&'static str, uint)) -> !;
//! ```
//!
//! This definition allows for failing with any general message, but it does not
//! allow for failing with a `~Any` value. The reason for this is that libcore
//! is not allowed to allocate.
//!
//! This module contains a few other failure functions, but these are just the
//! necessary lang items for the compiler. All failure is funneled through this
//! one function. Currently, the actual symbol is declared in the standard
//! library, but the location of this may change over time.
#![allow(dead_code, missing_doc)]
use fmt;
use intrinsics;
#[cold] #[inline(never)] // this is the slow path, always
#[lang="fail_"]
fn
|
(expr_file_line: &(&'static str, &'static str, uint)) -> ! {
let (expr, file, line) = *expr_file_line;
let ref file_line = (file, line);
format_args!(|args| -> () {
begin_unwind(args, file_line);
}, "{}", expr);
unsafe { intrinsics::abort() }
}
#[cold] #[inline(never)]
#[lang="fail_bounds_check"]
fn fail_bounds_check(file_line: &(&'static str, uint),
index: uint, len: uint) -> ! {
format_args!(|args| -> () {
begin_unwind(args, file_line);
}, "index out of bounds: the len is {} but the index is {}", len, index);
unsafe { intrinsics::abort() }
}
#[cold] #[inline(never)]
pub fn begin_unwind(fmt: &fmt::Arguments, file_line: &(&'static str, uint)) -> ! {
#[allow(ctypes)]
extern {
#[lang = "begin_unwind"]
fn begin_unwind(fmt: &fmt::Arguments, file: &'static str,
line: uint) -> !;
}
let (file, line) = *file_line;
unsafe { begin_unwind(fmt, file, line) }
}
|
fail_
|
identifier_name
|
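Each row above stores one source file split into `prefix`, `middle`, and `suffix` columns, with `fim_type` describing how the split point was chosen (`identifier_name` elides a single name, `random_line_split` cuts at an arbitrary line). A fill-in-the-middle training string is typically assembled from such a row in prefix-suffix-middle (PSM) order with sentinel tokens; the sentinel names below are illustrative assumptions, not taken from this dataset.

```rust
// Hypothetical sentinel tokens; real FIM tokenizers define their own.
const FIM_PREFIX: &str = "<fim_prefix>";
const FIM_SUFFIX: &str = "<fim_suffix>";
const FIM_MIDDLE: &str = "<fim_middle>";

// Assemble a PSM-order training string: the model sees prefix and suffix,
// then learns to generate the middle after the <fim_middle> sentinel.
fn to_psm(prefix: &str, middle: &str, suffix: &str) -> String {
    format!("{}{}{}{}{}{}",
            FIM_PREFIX, prefix, FIM_SUFFIX, suffix, FIM_MIDDLE, middle)
}

fn main() {
    // Like the "identifier_name" row above: given "fn" before the hole and
    // "() {" after it, the target completion is the function name.
    let s = to_psm("#[test]\nfn ", "test_tls_overwrite", "() {");
    assert!(s.ends_with("test_tls_overwrite"));
    println!("{}", s);
}
```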