file_name
large_stringlengths 4
69
| prefix
large_stringlengths 0
26.7k
| suffix
large_stringlengths 0
24.8k
| middle
large_stringlengths 0
2.12k
| fim_type
large_stringclasses 4
values |
---|---|---|---|---|
svh.rs |
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Calculation and management of a Strict Version Hash for crates
//!
//! # Today's ABI problem
//!
//! In today's implementation of rustc, it is incredibly difficult to achieve
//! forward binary compatibility without resorting to C-like interfaces. Within
//! rust code itself, abi details such as symbol names suffer from a variety of
//! unrelated factors to code changing such as the "def id drift" problem. This
//! ends up yielding confusing error messages about metadata mismatches and
//! such.
//!
//! The core of this problem is when an upstream dependency changes and
//! downstream dependents are not recompiled. This causes compile errors because
//! the upstream crate's metadata has changed but the downstream crates are
//! still referencing the older crate's metadata.
//!
//! This problem exists for many reasons, the primary of which is that rust does
//! not currently support forwards ABI compatibility (in place upgrades of a
//! crate).
//!
//! # SVH and how it alleviates the problem
//!
//! With all of this knowledge on hand, this module contains the implementation
//! of a notion of a "Strict Version Hash" for a crate. This is essentially a
//! hash of all contents of a crate which can somehow be exposed to downstream
//! crates.
//!
//! This hash is currently calculated by just hashing the AST, but this is
//! obviously wrong (doc changes should not result in an incompatible ABI).
//! Implementation-wise, this is required at this moment in time.
//!
//! By encoding this strict version hash into all crate's metadata, stale crates
//! can be detected immediately and error'd about by rustc itself.
//!
//! # Relevant links
//!
//! Original issue: https://github.com/rust-lang/rust/issues/10207
use std::fmt;
use std::hash::Hash;
use std::hash::sip::SipState;
use std::iter::range_step;
use syntax::ast;
use syntax::visit;
#[deriving(Clone, PartialEq)]
pub struct Svh {
hash: String,
}
impl Svh {
pub fn new(hash: &str) -> Svh {
assert!(hash.len() == 16);
Svh { hash: hash.to_string() }
}
pub fn as_str<'a>(&'a self) -> &'a str {
self.hash.as_slice()
}
pub fn calculate(metadata: &Vec<String>, krate: &ast::Crate) -> Svh {
// FIXME (#14132): This is better than it used to be, but it still not
// ideal. We now attempt to hash only the relevant portions of the
// Crate AST as well as the top-level crate attributes. (However,
// the hashing of the crate attributes should be double-checked
// to ensure it is not incorporating implementation artifacts into
// the hash that are not otherwise visible.)
// FIXME: this should use SHA1, not SipHash. SipHash is not built to
// avoid collisions.
let mut state = SipState::new();
for data in metadata.iter() {
data.hash(&mut state);
}
{
let mut visit = svh_visitor::make(&mut state);
visit::walk_crate(&mut visit, krate);
}
// FIXME (#14132): This hash is still sensitive to e.g. the
// spans of the crate Attributes and their underlying
// MetaItems; we should make ContentHashable impl for those
// types and then use hash_content. But, since all crate
// attributes should appear near beginning of the file, it is
// not such a big deal to be sensitive to their spans for now.
//
// We hash only the MetaItems instead of the entire Attribute
// to avoid hashing the AttrId
for attr in krate.attrs.iter() {
attr.node.value.hash(&mut state);
}
let hash = state.result();
return Svh {
hash: range_step(0u, 64u, 4u).map(|i| hex(hash >> i)).collect()
};
fn hex(b: u64) -> char {
let b = (b & 0xf) as u8;
let b = match b {
0... 9 => '0' as u8 + b,
_ => 'a' as u8 + b - 10,
};
b as char
}
}
}
impl fmt::Show for Svh {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.pad(self.as_str())
}
}
// FIXME (#14132): Even this SVH computation still has implementation
// artifacts: namely, the order of item declaration will affect the
// hash computation, but for many kinds of items the order of
// declaration should be irrelevant to the ABI.
mod svh_visitor {
pub use self::SawExprComponent::*;
pub use self::SawStmtComponent::*;
use self::SawAbiComponent::*;
use syntax::ast;
use syntax::ast::*;
use syntax::codemap::Span;
use syntax::parse::token;
use syntax::print::pprust;
use syntax::visit;
use syntax::visit::{Visitor, FnKind};
use std::hash::Hash;
use std::hash::sip::SipState;
pub struct StrictVersionHashVisitor<'a> {
pub st: &'a mut SipState,
}
pub fn make<'a>(st: &'a mut SipState) -> StrictVersionHashVisitor<'a> {
StrictVersionHashVisitor { st: st }
}
// To off-load the bulk of the hash-computation on deriving(Hash),
// we define a set of enums corresponding to the content that our
// crate visitor will encounter as it traverses the ast.
//
// The important invariant is that all of the Saw*Component enums
// do not carry any Spans, Names, or Idents.
//
// Not carrying any Names/Idents is the important fix for problem
// noted on PR #13948: using the ident.name as the basis for a
// hash leads to unstable SVH, because ident.name is just an index
// into intern table (i.e. essentially a random address), not
// computed from the name content.
//
// With the below enums, the SVH computation is not sensitive to
// artifacts of how rustc was invoked nor of how the source code
// was laid out. (Or at least it is *less* sensitive.)
// This enum represents the different potential bits of code the
// visitor could encounter that could affect the ABI for the crate,
// and assigns each a distinct tag to feed into the hash computation.
#[deriving(Hash)]
enum SawAbiComponent<'a> {
// FIXME (#14132): should we include (some function of)
// ident.ctxt as well?
SawIdent(token::InternedString),
SawStructDef(token::InternedString),
SawLifetimeRef(token::InternedString),
SawLifetimeDef(token::InternedString),
SawMod,
SawViewItem,
SawForeignItem,
SawItem,
SawDecl,
SawTy,
SawGenerics,
SawFn,
SawTyMethod,
SawTraitMethod,
SawStructField,
SawVariant,
SawExplicitSelf,
SawPath,
SawOptLifetimeRef,
SawBlock,
SawPat,
SawLocal,
SawArm,
SawExpr(SawExprComponent<'a>),
SawStmt(SawStmtComponent),
}
/// SawExprComponent carries all of the information that we want
/// to include in the hash that *won't* be covered by the
/// subsequent recursive traversal of the expression's
/// substructure by the visitor.
///
/// We know every Expr_ variant is covered by a variant because
/// `fn saw_expr` maps each to some case below. Ensuring that
/// each variant carries an appropriate payload has to be verified
/// by hand.
///
/// (However, getting that *exactly* right is not so important
/// because the SVH is just a developer convenience; there is no
/// guarantee of collision-freedom, hash collisions are just
/// (hopefully) unlikely.)
#[deriving(Hash)]
pub enum SawExprComponent<'a> {
SawExprLoop(Option<token::InternedString>),
SawExprField(token::InternedString),
SawExprTupField(uint),
SawExprBreak(Option<token::InternedString>),
SawExprAgain(Option<token::InternedString>),
SawExprBox,
SawExprVec,
SawExprCall,
SawExprMethodCall,
SawExprTup,
SawExprBinary(ast::BinOp),
SawExprUnary(ast::UnOp),
SawExprLit(ast::Lit_),
SawExprCast,
SawExprIf,
SawExprWhile,
SawExprMatch,
SawExprFnBlock,
SawExprUnboxedFn,
SawExprProc,
SawExprBlock,
SawExprAssign,
SawExprAssignOp(ast::BinOp),
SawExprIndex,
SawExprSlice,
SawExprPath,
SawExprAddrOf(ast::Mutability),
SawExprRet,
SawExprInlineAsm(&'a ast::InlineAsm),
SawExprStruct,
SawExprRepeat,
SawExprParen,
SawExprForLoop,
}
fn saw_expr<'a>(node: &'a Expr_) -> SawExprComponent<'a> {
match *node {
ExprBox(..) => SawExprBox,
ExprVec(..) => SawExprVec,
ExprCall(..) => SawExprCall,
ExprMethodCall(..) => SawExprMethodCall,
ExprTup(..) => SawExprTup,
ExprBinary(op, _, _) => SawExprBinary(op),
ExprUnary(op, _) => SawExprUnary(op),
ExprLit(ref lit) => SawExprLit(lit.node.clone()),
ExprCast(..) => SawExprCast,
ExprIf(..) => SawExprIf,
ExprWhile(..) => SawExprWhile,
ExprLoop(_, id) => SawExprLoop(id.map(content)),
ExprMatch(..) => SawExprMatch,
ExprFnBlock(..) => SawExprFnBlock,
ExprUnboxedFn(..) => SawExprUnboxedFn,
ExprProc(..) => SawExprProc,
ExprBlock(..) => SawExprBlock,
ExprAssign(..) => SawExprAssign,
ExprAssignOp(op, _, _) => SawExprAssignOp(op),
ExprField(_, id, _) => SawExprField(content(id.node)),
ExprTupField(_, id, _) => SawExprTupField(id.node),
ExprIndex(..) => SawExprIndex,
ExprSlice(..) => SawExprSlice,
ExprPath(..) => SawExprPath,
ExprAddrOf(m, _) => SawExprAddrOf(m),
ExprBreak(id) => SawExprBreak(id.map(content)),
ExprAgain(id) => SawExprAgain(id.map(content)),
ExprRet(..) => SawExprRet,
ExprInlineAsm(ref asm) => SawExprInlineAsm(asm),
ExprStruct(..) => SawExprStruct,
ExprRepeat(..) => SawExprRepeat,
ExprParen(..) => SawExprParen,
ExprForLoop(..) => SawExprForLoop,
// just syntactic artifacts, expanded away by time of SVH.
ExprIfLet(..) => unreachable!(),
ExprWhileLet(..) => unreachable!(),
ExprMac(..) => unreachable!(),
}
}
/// SawStmtComponent is analogous to SawExprComponent, but for statements.
#[deriving(Hash)]
pub enum SawStmtComponent {
SawStmtDecl,
SawStmtExpr,
SawStmtSemi,
}
fn saw_stmt(node: &Stmt_) -> SawStmtComponent {
match *node {
StmtDecl(..) => SawStmtDecl,
StmtExpr(..) => SawStmtExpr,
StmtSemi(..) => SawStmtSemi,
StmtMac(..) => unreachable!(),
}
}
// Ad-hoc overloading between Ident and Name to their intern table lookups.
trait InternKey { fn get_content(self) -> token::InternedString; }
impl InternKey for Ident {
fn get_content(self) -> token::InternedString { token::get_ident(self) }
}
impl InternKey for Name {
fn get_content(self) -> token::InternedString { token::get_name(self) }
}
fn content<K:InternKey>(k: K) -> token::InternedString { k.get_content() }
impl<'a, 'v> Visitor<'v> for StrictVersionHashVisitor<'a> {
fn visit_mac(&mut self, macro: &Mac) {
// macro invocations, namely macro_rules definitions,
// *can* appear as items, even in the expanded crate AST.
if macro_name(macro).get() == "macro_rules" {
// Pretty-printing definition to a string strips out
// surface artifacts (currently), such as the span
// information, yielding a content-based hash.
// FIXME (#14132): building temporary string is
// expensive; a direct content-based hash on token
// trees might be faster. Implementing this is far
// easier in short term.
let macro_defn_as_string =
pprust::to_string(|pp_state| pp_state.print_mac(macro));
macro_defn_as_string.hash(self.st);
} else {
// It is not possible to observe any kind of macro
// invocation at this stage except `macro_rules!`.
panic!("reached macro somehow: {}",
pprust::to_string(|pp_state| pp_state.print_mac(macro)));
}
visit::walk_mac(self, macro);
fn macro_name(macro: &Mac) -> token::InternedString {
match ¯o.node {
&MacInvocTT(ref path, ref _tts, ref _stx_ctxt) => {
let s = path.segments.as_slice();
assert_eq!(s.len(), 1);
content(s[0].identifier)
}
}
}
}
fn visit_struct_def(&mut self, s: &StructDef, ident: Ident,
g: &Generics, _: NodeId) {
SawStructDef(content(ident)).hash(self.st);
visit::walk_generics(self, g);
visit::walk_struct_def(self, s)
}
fn visit_variant(&mut self, v: &Variant, g: &Generics) {
SawVariant.hash(self.st);
// walk_variant does not call walk_generics, so do it here.
visit::walk_generics(self, g);
visit::walk_variant(self, v, g)
}
fn visit_opt_lifetime_ref(&mut self, _: Span, l: &Option<Lifetime>) {
SawOptLifetimeRef.hash(self.st);
// (This is a strange method in the visitor trait, in that
// it does not expose a walk function to do the subroutine
// calls.)
match *l {
Some(ref l) => self.visit_lifetime_ref(l),
None => ()
}
}
// All of the remaining methods just record (in the hash
// SipState) that the visitor saw that particular variant
// (with its payload), and continue walking as the default
// visitor would.
//
// Some of the implementations have some notes as to how one
// might try to make their SVH computation less discerning
// (e.g. by incorporating reachability analysis). But
// currently all of their implementations are uniform and
// uninteresting.
//
// (If you edit a method such that it deviates from the
// pattern, please move that method up above this comment.)
fn visit_ident(&mut self, _: Span, ident: Ident) {
SawIdent(content(ident)).hash(self.st);
}
fn visit_lifetime_ref(&mut self, l: &Lifetime) {
SawLifetimeRef(content(l.name)).hash(self.st);
}
fn visit_lifetime_def(&mut self, l: &LifetimeDef) {
SawLifetimeDef(content(l.lifetime.name)).hash(self.st);
}
// We do recursively walk the bodies of functions/methods
// (rather than omitting their bodies from the hash) since
// monomorphization and cross-crate inlining generally implies
// that a change to a crate body will require downstream
// crates to be recompiled.
fn visit_expr(&mut self, ex: &Expr) {
SawExpr(saw_expr(&ex.node)).hash(self.st); visit::walk_expr(self, ex)
}
fn visit_stmt(&mut self, s: &Stmt) {
SawStmt(saw_stmt(&s.node)).hash(self.st); visit::walk_stmt(self, s)
}
fn | (&mut self, i: &ViewItem) {
// Two kinds of view items can affect the ABI for a crate:
// exported `pub use` view items (since that may expose
// items that downstream crates can call), and `use
// foo::Trait`, since changing that may affect method
// resolution.
//
// The simplest approach to handling both of the above is
// just to adopt the same simple-minded (fine-grained)
// hash that I am deploying elsewhere here.
SawViewItem.hash(self.st); visit::walk_view_item(self, i)
}
fn visit_foreign_item(&mut self, i: &ForeignItem) {
// FIXME (#14132) ideally we would incorporate privacy (or
// perhaps reachability) somewhere here, so foreign items
// that do not leak into downstream crates would not be
// part of the ABI.
SawForeignItem.hash(self.st); visit::walk_foreign_item(self, i)
}
fn visit_item(&mut self, i: &Item) {
// FIXME (#14132) ideally would incorporate reachability
// analysis somewhere here, so items that never leak into
// downstream crates (e.g. via monomorphisation or
// inlining) would not be part of the ABI.
SawItem.hash(self.st); visit::walk_item(self, i)
}
fn visit_mod(&mut self, m: &Mod, _s: Span, _n: NodeId) {
SawMod.hash(self.st); visit::walk_mod(self, m)
}
fn visit_decl(&mut self, d: &Decl) {
SawDecl.hash(self.st); visit::walk_decl(self, d)
}
fn visit_ty(&mut self, t: &Ty) {
SawTy.hash(self.st); visit::walk_ty(self, t)
}
fn visit_generics(&mut self, g: &Generics) {
SawGenerics.hash(self.st); visit::walk_generics(self, g)
}
fn visit_fn(&mut self, fk: FnKind<'v>, fd: &'v FnDecl,
b: &'v Block, s: Span, _: NodeId) {
SawFn.hash(self.st); visit::walk_fn(self, fk, fd, b, s)
}
fn visit_ty_method(&mut self, t: &TypeMethod) {
SawTyMethod.hash(self.st); visit::walk_ty_method(self, t)
}
fn visit_trait_item(&mut self, t: &TraitItem) {
SawTraitMethod.hash(self.st); visit::walk_trait_item(self, t)
}
fn visit_struct_field(&mut self, s: &StructField) {
SawStructField.hash(self.st); visit::walk_struct_field(self, s)
}
fn visit_explicit_self(&mut self, es: &ExplicitSelf) {
SawExplicitSelf.hash(self.st); visit::walk_explicit_self(self, es)
}
fn visit_path(&mut self, path: &Path, _: ast::NodeId) {
SawPath.hash(self.st); visit::walk_path(self, path)
}
fn visit_block(&mut self, b: &Block) {
SawBlock.hash(self.st); visit::walk_block(self, b)
}
fn visit_pat(&mut self, p: &Pat) {
SawPat.hash(self.st); visit::walk_pat(self, p)
} | visit_view_item | identifier_name |
svh.rs | or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Calculation and management of a Strict Version Hash for crates
//!
//! # Today's ABI problem
//!
//! In today's implementation of rustc, it is incredibly difficult to achieve
//! forward binary compatibility without resorting to C-like interfaces. Within
//! rust code itself, abi details such as symbol names suffer from a variety of
//! unrelated factors to code changing such as the "def id drift" problem. This
//! ends up yielding confusing error messages about metadata mismatches and
//! such.
//!
//! The core of this problem is when an upstream dependency changes and
//! downstream dependents are not recompiled. This causes compile errors because
//! the upstream crate's metadata has changed but the downstream crates are
//! still referencing the older crate's metadata.
//!
//! This problem exists for many reasons, the primary of which is that rust does
//! not currently support forwards ABI compatibility (in place upgrades of a
//! crate).
//!
//! # SVH and how it alleviates the problem
//!
//! With all of this knowledge on hand, this module contains the implementation
//! of a notion of a "Strict Version Hash" for a crate. This is essentially a
//! hash of all contents of a crate which can somehow be exposed to downstream
//! crates.
//!
//! This hash is currently calculated by just hashing the AST, but this is
//! obviously wrong (doc changes should not result in an incompatible ABI).
//! Implementation-wise, this is required at this moment in time.
//!
//! By encoding this strict version hash into all crate's metadata, stale crates
//! can be detected immediately and error'd about by rustc itself.
//!
//! # Relevant links
//!
//! Original issue: https://github.com/rust-lang/rust/issues/10207
use std::fmt;
use std::hash::Hash;
use std::hash::sip::SipState;
use std::iter::range_step;
use syntax::ast;
use syntax::visit;
#[deriving(Clone, PartialEq)]
pub struct Svh {
hash: String,
}
impl Svh {
pub fn new(hash: &str) -> Svh {
assert!(hash.len() == 16);
Svh { hash: hash.to_string() }
}
pub fn as_str<'a>(&'a self) -> &'a str {
self.hash.as_slice()
}
pub fn calculate(metadata: &Vec<String>, krate: &ast::Crate) -> Svh {
// FIXME (#14132): This is better than it used to be, but it still not
// ideal. We now attempt to hash only the relevant portions of the
// Crate AST as well as the top-level crate attributes. (However,
// the hashing of the crate attributes should be double-checked
// to ensure it is not incorporating implementation artifacts into
// the hash that are not otherwise visible.)
// FIXME: this should use SHA1, not SipHash. SipHash is not built to
// avoid collisions.
let mut state = SipState::new();
for data in metadata.iter() {
data.hash(&mut state);
}
{
let mut visit = svh_visitor::make(&mut state);
visit::walk_crate(&mut visit, krate);
}
// FIXME (#14132): This hash is still sensitive to e.g. the
// spans of the crate Attributes and their underlying
// MetaItems; we should make ContentHashable impl for those
// types and then use hash_content. But, since all crate
// attributes should appear near beginning of the file, it is
// not such a big deal to be sensitive to their spans for now.
//
// We hash only the MetaItems instead of the entire Attribute
// to avoid hashing the AttrId
for attr in krate.attrs.iter() {
attr.node.value.hash(&mut state);
}
let hash = state.result();
return Svh {
hash: range_step(0u, 64u, 4u).map(|i| hex(hash >> i)).collect()
};
fn hex(b: u64) -> char {
let b = (b & 0xf) as u8;
let b = match b {
0... 9 => '0' as u8 + b,
_ => 'a' as u8 + b - 10,
};
b as char
}
}
}
impl fmt::Show for Svh {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.pad(self.as_str())
}
}
// FIXME (#14132): Even this SVH computation still has implementation
// artifacts: namely, the order of item declaration will affect the
// hash computation, but for many kinds of items the order of
// declaration should be irrelevant to the ABI.
mod svh_visitor {
pub use self::SawExprComponent::*;
pub use self::SawStmtComponent::*;
use self::SawAbiComponent::*;
use syntax::ast;
use syntax::ast::*;
use syntax::codemap::Span;
use syntax::parse::token;
use syntax::print::pprust;
use syntax::visit;
use syntax::visit::{Visitor, FnKind};
use std::hash::Hash;
use std::hash::sip::SipState;
pub struct StrictVersionHashVisitor<'a> {
pub st: &'a mut SipState,
}
pub fn make<'a>(st: &'a mut SipState) -> StrictVersionHashVisitor<'a> {
StrictVersionHashVisitor { st: st }
}
// To off-load the bulk of the hash-computation on deriving(Hash),
// we define a set of enums corresponding to the content that our
// crate visitor will encounter as it traverses the ast.
//
// The important invariant is that all of the Saw*Component enums
// do not carry any Spans, Names, or Idents.
//
// Not carrying any Names/Idents is the important fix for problem
// noted on PR #13948: using the ident.name as the basis for a
// hash leads to unstable SVH, because ident.name is just an index
// into intern table (i.e. essentially a random address), not
// computed from the name content.
//
// With the below enums, the SVH computation is not sensitive to
// artifacts of how rustc was invoked nor of how the source code
// was laid out. (Or at least it is *less* sensitive.)
// This enum represents the different potential bits of code the
// visitor could encounter that could affect the ABI for the crate,
// and assigns each a distinct tag to feed into the hash computation.
#[deriving(Hash)]
enum SawAbiComponent<'a> {
// FIXME (#14132): should we include (some function of)
// ident.ctxt as well?
SawIdent(token::InternedString),
SawStructDef(token::InternedString),
SawLifetimeRef(token::InternedString),
SawLifetimeDef(token::InternedString),
SawMod,
SawViewItem,
SawForeignItem,
SawItem,
SawDecl,
SawTy,
SawGenerics,
SawFn,
SawTyMethod,
SawTraitMethod,
SawStructField,
SawVariant,
SawExplicitSelf,
SawPath,
SawOptLifetimeRef,
SawBlock,
SawPat,
SawLocal,
SawArm,
SawExpr(SawExprComponent<'a>),
SawStmt(SawStmtComponent),
}
/// SawExprComponent carries all of the information that we want
/// to include in the hash that *won't* be covered by the
/// subsequent recursive traversal of the expression's
/// substructure by the visitor.
///
/// We know every Expr_ variant is covered by a variant because
/// `fn saw_expr` maps each to some case below. Ensuring that
/// each variant carries an appropriate payload has to be verified
/// by hand.
///
/// (However, getting that *exactly* right is not so important
/// because the SVH is just a developer convenience; there is no
/// guarantee of collision-freedom, hash collisions are just
/// (hopefully) unlikely.)
#[deriving(Hash)]
pub enum SawExprComponent<'a> {
SawExprLoop(Option<token::InternedString>),
SawExprField(token::InternedString),
SawExprTupField(uint),
SawExprBreak(Option<token::InternedString>),
SawExprAgain(Option<token::InternedString>),
SawExprBox,
SawExprVec,
SawExprCall,
SawExprMethodCall,
SawExprTup,
SawExprBinary(ast::BinOp),
SawExprUnary(ast::UnOp),
SawExprLit(ast::Lit_),
SawExprCast,
SawExprIf,
SawExprWhile,
SawExprMatch,
SawExprFnBlock,
SawExprUnboxedFn,
SawExprProc,
SawExprBlock,
SawExprAssign,
SawExprAssignOp(ast::BinOp),
SawExprIndex,
SawExprSlice,
SawExprPath,
SawExprAddrOf(ast::Mutability),
SawExprRet,
SawExprInlineAsm(&'a ast::InlineAsm),
SawExprStruct,
SawExprRepeat,
SawExprParen,
SawExprForLoop,
}
fn saw_expr<'a>(node: &'a Expr_) -> SawExprComponent<'a> {
match *node {
ExprBox(..) => SawExprBox,
ExprVec(..) => SawExprVec,
ExprCall(..) => SawExprCall,
ExprMethodCall(..) => SawExprMethodCall,
ExprTup(..) => SawExprTup,
ExprBinary(op, _, _) => SawExprBinary(op),
ExprUnary(op, _) => SawExprUnary(op),
ExprLit(ref lit) => SawExprLit(lit.node.clone()),
ExprCast(..) => SawExprCast,
ExprIf(..) => SawExprIf,
ExprWhile(..) => SawExprWhile,
ExprLoop(_, id) => SawExprLoop(id.map(content)),
ExprMatch(..) => SawExprMatch,
ExprFnBlock(..) => SawExprFnBlock,
ExprUnboxedFn(..) => SawExprUnboxedFn,
ExprProc(..) => SawExprProc,
ExprBlock(..) => SawExprBlock,
ExprAssign(..) => SawExprAssign,
ExprAssignOp(op, _, _) => SawExprAssignOp(op),
ExprField(_, id, _) => SawExprField(content(id.node)),
ExprTupField(_, id, _) => SawExprTupField(id.node),
ExprIndex(..) => SawExprIndex,
ExprSlice(..) => SawExprSlice,
ExprPath(..) => SawExprPath,
ExprAddrOf(m, _) => SawExprAddrOf(m),
ExprBreak(id) => SawExprBreak(id.map(content)),
ExprAgain(id) => SawExprAgain(id.map(content)),
ExprRet(..) => SawExprRet,
ExprInlineAsm(ref asm) => SawExprInlineAsm(asm),
ExprStruct(..) => SawExprStruct,
ExprRepeat(..) => SawExprRepeat,
ExprParen(..) => SawExprParen,
ExprForLoop(..) => SawExprForLoop,
// just syntactic artifacts, expanded away by time of SVH.
ExprIfLet(..) => unreachable!(),
ExprWhileLet(..) => unreachable!(),
ExprMac(..) => unreachable!(),
}
}
/// SawStmtComponent is analogous to SawExprComponent, but for statements.
#[deriving(Hash)]
pub enum SawStmtComponent {
SawStmtDecl,
SawStmtExpr,
SawStmtSemi,
}
fn saw_stmt(node: &Stmt_) -> SawStmtComponent {
match *node {
StmtDecl(..) => SawStmtDecl,
StmtExpr(..) => SawStmtExpr,
StmtSemi(..) => SawStmtSemi,
StmtMac(..) => unreachable!(),
}
}
// Ad-hoc overloading between Ident and Name to their intern table lookups.
trait InternKey { fn get_content(self) -> token::InternedString; } | impl InternKey for Name {
fn get_content(self) -> token::InternedString { token::get_name(self) }
}
fn content<K:InternKey>(k: K) -> token::InternedString { k.get_content() }
impl<'a, 'v> Visitor<'v> for StrictVersionHashVisitor<'a> {
fn visit_mac(&mut self, macro: &Mac) {
// macro invocations, namely macro_rules definitions,
// *can* appear as items, even in the expanded crate AST.
if macro_name(macro).get() == "macro_rules" {
// Pretty-printing definition to a string strips out
// surface artifacts (currently), such as the span
// information, yielding a content-based hash.
// FIXME (#14132): building temporary string is
// expensive; a direct content-based hash on token
// trees might be faster. Implementing this is far
// easier in short term.
let macro_defn_as_string =
pprust::to_string(|pp_state| pp_state.print_mac(macro));
macro_defn_as_string.hash(self.st);
} else {
// It is not possible to observe any kind of macro
// invocation at this stage except `macro_rules!`.
panic!("reached macro somehow: {}",
pprust::to_string(|pp_state| pp_state.print_mac(macro)));
}
visit::walk_mac(self, macro);
fn macro_name(macro: &Mac) -> token::InternedString {
match ¯o.node {
&MacInvocTT(ref path, ref _tts, ref _stx_ctxt) => {
let s = path.segments.as_slice();
assert_eq!(s.len(), 1);
content(s[0].identifier)
}
}
}
}
fn visit_struct_def(&mut self, s: &StructDef, ident: Ident,
g: &Generics, _: NodeId) {
SawStructDef(content(ident)).hash(self.st);
visit::walk_generics(self, g);
visit::walk_struct_def(self, s)
}
fn visit_variant(&mut self, v: &Variant, g: &Generics) {
SawVariant.hash(self.st);
// walk_variant does not call walk_generics, so do it here.
visit::walk_generics(self, g);
visit::walk_variant(self, v, g)
}
fn visit_opt_lifetime_ref(&mut self, _: Span, l: &Option<Lifetime>) {
SawOptLifetimeRef.hash(self.st);
// (This is a strange method in the visitor trait, in that
// it does not expose a walk function to do the subroutine
// calls.)
match *l {
Some(ref l) => self.visit_lifetime_ref(l),
None => ()
}
}
// All of the remaining methods just record (in the hash
// SipState) that the visitor saw that particular variant
// (with its payload), and continue walking as the default
// visitor would.
//
// Some of the implementations have some notes as to how one
// might try to make their SVH computation less discerning
// (e.g. by incorporating reachability analysis). But
// currently all of their implementations are uniform and
// uninteresting.
//
// (If you edit a method such that it deviates from the
// pattern, please move that method up above this comment.)
fn visit_ident(&mut self, _: Span, ident: Ident) {
SawIdent(content(ident)).hash(self.st);
}
fn visit_lifetime_ref(&mut self, l: &Lifetime) {
SawLifetimeRef(content(l.name)).hash(self.st);
}
fn visit_lifetime_def(&mut self, l: &LifetimeDef) {
SawLifetimeDef(content(l.lifetime.name)).hash(self.st);
}
// We do recursively walk the bodies of functions/methods
// (rather than omitting their bodies from the hash) since
// monomorphization and cross-crate inlining generally implies
// that a change to a crate body will require downstream
// crates to be recompiled.
fn visit_expr(&mut self, ex: &Expr) {
SawExpr(saw_expr(&ex.node)).hash(self.st); visit::walk_expr(self, ex)
}
fn visit_stmt(&mut self, s: &Stmt) {
SawStmt(saw_stmt(&s.node)).hash(self.st); visit::walk_stmt(self, s)
}
fn visit_view_item(&mut self, i: &ViewItem) {
// Two kinds of view items can affect the ABI for a crate:
// exported `pub use` view items (since that may expose
// items that downstream crates can call), and `use
// foo::Trait`, since changing that may affect method
// resolution.
//
// The simplest approach to handling both of the above is
// just to adopt the same simple-minded (fine-grained)
// hash that I am deploying elsewhere here.
SawViewItem.hash(self.st); visit::walk_view_item(self, i)
}
fn visit_foreign_item(&mut self, i: &ForeignItem) {
// FIXME (#14132) ideally we would incorporate privacy (or
// perhaps reachability) somewhere here, so foreign items
// that do not leak into downstream crates would not be
// part of the ABI.
SawForeignItem.hash(self.st); visit::walk_foreign_item(self, i)
}
fn visit_item(&mut self, i: &Item) {
// FIXME (#14132) ideally would incorporate reachability
// analysis somewhere here, so items that never leak into
// downstream crates (e.g. via monomorphisation or
// inlining) would not be part of the ABI.
SawItem.hash(self.st); visit::walk_item(self, i)
}
fn visit_mod(&mut self, m: &Mod, _s: Span, _n: NodeId) {
SawMod.hash(self.st); visit::walk_mod(self, m)
}
fn visit_decl(&mut self, d: &Decl) {
SawDecl.hash(self.st); visit::walk_decl(self, d)
}
fn visit_ty(&mut self, t: &Ty) {
SawTy.hash(self.st); visit::walk_ty(self, t)
}
fn visit_generics(&mut self, g: &Generics) {
SawGenerics.hash(self.st); visit::walk_generics(self, g)
}
fn visit_fn(&mut self, fk: FnKind<'v>, fd: &'v FnDecl,
b: &'v Block, s: Span, _: NodeId) {
SawFn.hash(self.st); visit::walk_fn(self, fk, fd, b, s)
}
fn visit_ty_method(&mut self, t: &TypeMethod) {
SawTyMethod.hash(self.st); visit::walk_ty_method(self, t)
}
fn visit_trait_item(&mut self, t: &TraitItem) {
SawTraitMethod.hash(self.st); visit::walk_trait_item(self, t)
}
fn visit_struct_field(&mut self, s: &StructField) {
SawStructField.hash(self.st); visit::walk_struct_field(self, s)
}
fn visit_explicit_self(&mut self, es: &ExplicitSelf) {
SawExplicitSelf.hash(self.st); visit::walk_explicit_self(self, es)
}
fn visit_path(&mut self, path: &Path, _: ast::NodeId) {
SawPath.hash(self.st); visit::walk_path(self, path)
}
fn visit_block(&mut self, b: &Block) {
SawBlock.hash(self.st); visit::walk_block(self, b)
}
fn visit_pat(&mut self, p: &Pat) {
SawPat.hash(self.st); visit::walk_pat(self, p)
}
| impl InternKey for Ident {
fn get_content(self) -> token::InternedString { token::get_ident(self) }
} | random_line_split |
svh.rs |
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Calculation and management of a Strict Version Hash for crates
//!
//! # Today's ABI problem
//!
//! In today's implementation of rustc, it is incredibly difficult to achieve
//! forward binary compatibility without resorting to C-like interfaces. Within
//! rust code itself, abi details such as symbol names suffer from a variety of
//! unrelated factors to code changing such as the "def id drift" problem. This
//! ends up yielding confusing error messages about metadata mismatches and
//! such.
//!
//! The core of this problem is when an upstream dependency changes and
//! downstream dependents are not recompiled. This causes compile errors because
//! the upstream crate's metadata has changed but the downstream crates are
//! still referencing the older crate's metadata.
//!
//! This problem exists for many reasons, the primary of which is that rust does
//! not currently support forwards ABI compatibility (in place upgrades of a
//! crate).
//!
//! # SVH and how it alleviates the problem
//!
//! With all of this knowledge on hand, this module contains the implementation
//! of a notion of a "Strict Version Hash" for a crate. This is essentially a
//! hash of all contents of a crate which can somehow be exposed to downstream
//! crates.
//!
//! This hash is currently calculated by just hashing the AST, but this is
//! obviously wrong (doc changes should not result in an incompatible ABI).
//! Implementation-wise, this is required at this moment in time.
//!
//! By encoding this strict version hash into all crate's metadata, stale crates
//! can be detected immediately and error'd about by rustc itself.
//!
//! # Relevant links
//!
//! Original issue: https://github.com/rust-lang/rust/issues/10207
use std::fmt;
use std::hash::Hash;
use std::hash::sip::SipState;
use std::iter::range_step;
use syntax::ast;
use syntax::visit;
#[deriving(Clone, PartialEq)]
pub struct Svh {
hash: String,
}
impl Svh {
pub fn new(hash: &str) -> Svh {
assert!(hash.len() == 16);
Svh { hash: hash.to_string() }
}
pub fn as_str<'a>(&'a self) -> &'a str {
self.hash.as_slice()
}
pub fn calculate(metadata: &Vec<String>, krate: &ast::Crate) -> Svh {
// FIXME (#14132): This is better than it used to be, but it still not
// ideal. We now attempt to hash only the relevant portions of the
// Crate AST as well as the top-level crate attributes. (However,
// the hashing of the crate attributes should be double-checked
// to ensure it is not incorporating implementation artifacts into
// the hash that are not otherwise visible.)
// FIXME: this should use SHA1, not SipHash. SipHash is not built to
// avoid collisions.
let mut state = SipState::new();
for data in metadata.iter() {
data.hash(&mut state);
}
{
let mut visit = svh_visitor::make(&mut state);
visit::walk_crate(&mut visit, krate);
}
// FIXME (#14132): This hash is still sensitive to e.g. the
// spans of the crate Attributes and their underlying
// MetaItems; we should make ContentHashable impl for those
// types and then use hash_content. But, since all crate
// attributes should appear near beginning of the file, it is
// not such a big deal to be sensitive to their spans for now.
//
// We hash only the MetaItems instead of the entire Attribute
// to avoid hashing the AttrId
for attr in krate.attrs.iter() {
attr.node.value.hash(&mut state);
}
let hash = state.result();
return Svh {
hash: range_step(0u, 64u, 4u).map(|i| hex(hash >> i)).collect()
};
fn hex(b: u64) -> char {
let b = (b & 0xf) as u8;
let b = match b {
0... 9 => '0' as u8 + b,
_ => 'a' as u8 + b - 10,
};
b as char
}
}
}
impl fmt::Show for Svh {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.pad(self.as_str())
}
}
// FIXME (#14132): Even this SVH computation still has implementation
// artifacts: namely, the order of item declaration will affect the
// hash computation, but for many kinds of items the order of
// declaration should be irrelevant to the ABI.
mod svh_visitor {
pub use self::SawExprComponent::*;
pub use self::SawStmtComponent::*;
use self::SawAbiComponent::*;
use syntax::ast;
use syntax::ast::*;
use syntax::codemap::Span;
use syntax::parse::token;
use syntax::print::pprust;
use syntax::visit;
use syntax::visit::{Visitor, FnKind};
use std::hash::Hash;
use std::hash::sip::SipState;
pub struct StrictVersionHashVisitor<'a> {
pub st: &'a mut SipState,
}
pub fn make<'a>(st: &'a mut SipState) -> StrictVersionHashVisitor<'a> {
StrictVersionHashVisitor { st: st }
}
// To off-load the bulk of the hash-computation on deriving(Hash),
// we define a set of enums corresponding to the content that our
// crate visitor will encounter as it traverses the ast.
//
// The important invariant is that all of the Saw*Component enums
// do not carry any Spans, Names, or Idents.
//
// Not carrying any Names/Idents is the important fix for problem
// noted on PR #13948: using the ident.name as the basis for a
// hash leads to unstable SVH, because ident.name is just an index
// into intern table (i.e. essentially a random address), not
// computed from the name content.
//
// With the below enums, the SVH computation is not sensitive to
// artifacts of how rustc was invoked nor of how the source code
// was laid out. (Or at least it is *less* sensitive.)
// This enum represents the different potential bits of code the
// visitor could encounter that could affect the ABI for the crate,
// and assigns each a distinct tag to feed into the hash computation.
#[deriving(Hash)]
enum SawAbiComponent<'a> {
// FIXME (#14132): should we include (some function of)
// ident.ctxt as well?
SawIdent(token::InternedString),
SawStructDef(token::InternedString),
SawLifetimeRef(token::InternedString),
SawLifetimeDef(token::InternedString),
SawMod,
SawViewItem,
SawForeignItem,
SawItem,
SawDecl,
SawTy,
SawGenerics,
SawFn,
SawTyMethod,
SawTraitMethod,
SawStructField,
SawVariant,
SawExplicitSelf,
SawPath,
SawOptLifetimeRef,
SawBlock,
SawPat,
SawLocal,
SawArm,
SawExpr(SawExprComponent<'a>),
SawStmt(SawStmtComponent),
}
/// SawExprComponent carries all of the information that we want
/// to include in the hash that *won't* be covered by the
/// subsequent recursive traversal of the expression's
/// substructure by the visitor.
///
/// We know every Expr_ variant is covered by a variant because
/// `fn saw_expr` maps each to some case below. Ensuring that
/// each variant carries an appropriate payload has to be verified
/// by hand.
///
/// (However, getting that *exactly* right is not so important
/// because the SVH is just a developer convenience; there is no
/// guarantee of collision-freedom, hash collisions are just
/// (hopefully) unlikely.)
#[deriving(Hash)]
pub enum SawExprComponent<'a> {
SawExprLoop(Option<token::InternedString>),
SawExprField(token::InternedString),
SawExprTupField(uint),
SawExprBreak(Option<token::InternedString>),
SawExprAgain(Option<token::InternedString>),
SawExprBox,
SawExprVec,
SawExprCall,
SawExprMethodCall,
SawExprTup,
SawExprBinary(ast::BinOp),
SawExprUnary(ast::UnOp),
SawExprLit(ast::Lit_),
SawExprCast,
SawExprIf,
SawExprWhile,
SawExprMatch,
SawExprFnBlock,
SawExprUnboxedFn,
SawExprProc,
SawExprBlock,
SawExprAssign,
SawExprAssignOp(ast::BinOp),
SawExprIndex,
SawExprSlice,
SawExprPath,
SawExprAddrOf(ast::Mutability),
SawExprRet,
SawExprInlineAsm(&'a ast::InlineAsm),
SawExprStruct,
SawExprRepeat,
SawExprParen,
SawExprForLoop,
}
fn saw_expr<'a>(node: &'a Expr_) -> SawExprComponent<'a> {
match *node {
ExprBox(..) => SawExprBox,
ExprVec(..) => SawExprVec,
ExprCall(..) => SawExprCall,
ExprMethodCall(..) => SawExprMethodCall,
ExprTup(..) => SawExprTup,
ExprBinary(op, _, _) => SawExprBinary(op),
ExprUnary(op, _) => SawExprUnary(op),
ExprLit(ref lit) => SawExprLit(lit.node.clone()),
ExprCast(..) => SawExprCast,
ExprIf(..) => SawExprIf,
ExprWhile(..) => SawExprWhile,
ExprLoop(_, id) => SawExprLoop(id.map(content)),
ExprMatch(..) => SawExprMatch,
ExprFnBlock(..) => SawExprFnBlock,
ExprUnboxedFn(..) => SawExprUnboxedFn,
ExprProc(..) => SawExprProc,
ExprBlock(..) => SawExprBlock,
ExprAssign(..) => SawExprAssign,
ExprAssignOp(op, _, _) => SawExprAssignOp(op),
ExprField(_, id, _) => SawExprField(content(id.node)),
ExprTupField(_, id, _) => SawExprTupField(id.node),
ExprIndex(..) => SawExprIndex,
ExprSlice(..) => SawExprSlice,
ExprPath(..) => SawExprPath,
ExprAddrOf(m, _) => SawExprAddrOf(m),
ExprBreak(id) => SawExprBreak(id.map(content)),
ExprAgain(id) => SawExprAgain(id.map(content)),
ExprRet(..) => SawExprRet,
ExprInlineAsm(ref asm) => SawExprInlineAsm(asm),
ExprStruct(..) => SawExprStruct,
ExprRepeat(..) => SawExprRepeat,
ExprParen(..) => SawExprParen,
ExprForLoop(..) => SawExprForLoop,
// just syntactic artifacts, expanded away by time of SVH.
ExprIfLet(..) => unreachable!(),
ExprWhileLet(..) => unreachable!(),
ExprMac(..) => unreachable!(),
}
}
/// SawStmtComponent is analogous to SawExprComponent, but for statements.
#[deriving(Hash)]
pub enum SawStmtComponent {
SawStmtDecl,
SawStmtExpr,
SawStmtSemi,
}
fn saw_stmt(node: &Stmt_) -> SawStmtComponent {
match *node {
StmtDecl(..) => SawStmtDecl,
StmtExpr(..) => SawStmtExpr,
StmtSemi(..) => SawStmtSemi,
StmtMac(..) => unreachable!(),
}
}
// Ad-hoc overloading between Ident and Name to their intern table lookups.
trait InternKey { fn get_content(self) -> token::InternedString; }
impl InternKey for Ident {
fn get_content(self) -> token::InternedString { token::get_ident(self) }
}
impl InternKey for Name {
fn get_content(self) -> token::InternedString { token::get_name(self) }
}
fn content<K:InternKey>(k: K) -> token::InternedString |
impl<'a, 'v> Visitor<'v> for StrictVersionHashVisitor<'a> {
fn visit_mac(&mut self, macro: &Mac) {
// macro invocations, namely macro_rules definitions,
// *can* appear as items, even in the expanded crate AST.
if macro_name(macro).get() == "macro_rules" {
// Pretty-printing definition to a string strips out
// surface artifacts (currently), such as the span
// information, yielding a content-based hash.
// FIXME (#14132): building temporary string is
// expensive; a direct content-based hash on token
// trees might be faster. Implementing this is far
// easier in short term.
let macro_defn_as_string =
pprust::to_string(|pp_state| pp_state.print_mac(macro));
macro_defn_as_string.hash(self.st);
} else {
// It is not possible to observe any kind of macro
// invocation at this stage except `macro_rules!`.
panic!("reached macro somehow: {}",
pprust::to_string(|pp_state| pp_state.print_mac(macro)));
}
visit::walk_mac(self, macro);
fn macro_name(macro: &Mac) -> token::InternedString {
match ¯o.node {
&MacInvocTT(ref path, ref _tts, ref _stx_ctxt) => {
let s = path.segments.as_slice();
assert_eq!(s.len(), 1);
content(s[0].identifier)
}
}
}
}
fn visit_struct_def(&mut self, s: &StructDef, ident: Ident,
g: &Generics, _: NodeId) {
SawStructDef(content(ident)).hash(self.st);
visit::walk_generics(self, g);
visit::walk_struct_def(self, s)
}
fn visit_variant(&mut self, v: &Variant, g: &Generics) {
SawVariant.hash(self.st);
// walk_variant does not call walk_generics, so do it here.
visit::walk_generics(self, g);
visit::walk_variant(self, v, g)
}
fn visit_opt_lifetime_ref(&mut self, _: Span, l: &Option<Lifetime>) {
SawOptLifetimeRef.hash(self.st);
// (This is a strange method in the visitor trait, in that
// it does not expose a walk function to do the subroutine
// calls.)
match *l {
Some(ref l) => self.visit_lifetime_ref(l),
None => ()
}
}
// All of the remaining methods just record (in the hash
// SipState) that the visitor saw that particular variant
// (with its payload), and continue walking as the default
// visitor would.
//
// Some of the implementations have some notes as to how one
// might try to make their SVH computation less discerning
// (e.g. by incorporating reachability analysis). But
// currently all of their implementations are uniform and
// uninteresting.
//
// (If you edit a method such that it deviates from the
// pattern, please move that method up above this comment.)
fn visit_ident(&mut self, _: Span, ident: Ident) {
SawIdent(content(ident)).hash(self.st);
}
fn visit_lifetime_ref(&mut self, l: &Lifetime) {
SawLifetimeRef(content(l.name)).hash(self.st);
}
fn visit_lifetime_def(&mut self, l: &LifetimeDef) {
SawLifetimeDef(content(l.lifetime.name)).hash(self.st);
}
// We do recursively walk the bodies of functions/methods
// (rather than omitting their bodies from the hash) since
// monomorphization and cross-crate inlining generally implies
// that a change to a crate body will require downstream
// crates to be recompiled.
fn visit_expr(&mut self, ex: &Expr) {
SawExpr(saw_expr(&ex.node)).hash(self.st); visit::walk_expr(self, ex)
}
fn visit_stmt(&mut self, s: &Stmt) {
SawStmt(saw_stmt(&s.node)).hash(self.st); visit::walk_stmt(self, s)
}
fn visit_view_item(&mut self, i: &ViewItem) {
// Two kinds of view items can affect the ABI for a crate:
// exported `pub use` view items (since that may expose
// items that downstream crates can call), and `use
// foo::Trait`, since changing that may affect method
// resolution.
//
// The simplest approach to handling both of the above is
// just to adopt the same simple-minded (fine-grained)
// hash that I am deploying elsewhere here.
SawViewItem.hash(self.st); visit::walk_view_item(self, i)
}
fn visit_foreign_item(&mut self, i: &ForeignItem) {
// FIXME (#14132) ideally we would incorporate privacy (or
// perhaps reachability) somewhere here, so foreign items
// that do not leak into downstream crates would not be
// part of the ABI.
SawForeignItem.hash(self.st); visit::walk_foreign_item(self, i)
}
fn visit_item(&mut self, i: &Item) {
// FIXME (#14132) ideally would incorporate reachability
// analysis somewhere here, so items that never leak into
// downstream crates (e.g. via monomorphisation or
// inlining) would not be part of the ABI.
SawItem.hash(self.st); visit::walk_item(self, i)
}
fn visit_mod(&mut self, m: &Mod, _s: Span, _n: NodeId) {
SawMod.hash(self.st); visit::walk_mod(self, m)
}
fn visit_decl(&mut self, d: &Decl) {
SawDecl.hash(self.st); visit::walk_decl(self, d)
}
fn visit_ty(&mut self, t: &Ty) {
SawTy.hash(self.st); visit::walk_ty(self, t)
}
fn visit_generics(&mut self, g: &Generics) {
SawGenerics.hash(self.st); visit::walk_generics(self, g)
}
fn visit_fn(&mut self, fk: FnKind<'v>, fd: &'v FnDecl,
b: &'v Block, s: Span, _: NodeId) {
SawFn.hash(self.st); visit::walk_fn(self, fk, fd, b, s)
}
fn visit_ty_method(&mut self, t: &TypeMethod) {
SawTyMethod.hash(self.st); visit::walk_ty_method(self, t)
}
fn visit_trait_item(&mut self, t: &TraitItem) {
SawTraitMethod.hash(self.st); visit::walk_trait_item(self, t)
}
fn visit_struct_field(&mut self, s: &StructField) {
SawStructField.hash(self.st); visit::walk_struct_field(self, s)
}
fn visit_explicit_self(&mut self, es: &ExplicitSelf) {
SawExplicitSelf.hash(self.st); visit::walk_explicit_self(self, es)
}
fn visit_path(&mut self, path: &Path, _: ast::NodeId) {
SawPath.hash(self.st); visit::walk_path(self, path)
}
fn visit_block(&mut self, b: &Block) {
SawBlock.hash(self.st); visit::walk_block(self, b)
}
fn visit_pat(&mut self, p: &Pat) {
SawPat.hash(self.st); visit::walk_pat(self, p)
} | { k.get_content() } | identifier_body |
generation_size.rs | /*
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This software may be used and distributed according to the terms of the
* GNU General Public License version 2.
*/
use std::collections::HashMap;
use crate::utils;
use crate::MononokeSQLBlobGCArgs;
use anyhow::Result;
use bytesize::ByteSize;
use clap::Parser;
use futures::stream::{self, StreamExt, TryStreamExt};
use futures::TryFutureExt;
use mononoke_app::MononokeApp;
/// measure generation sizes
#[derive(Parser)]
pub struct CommandArgs {}
fn | (sizes: &HashMap<Option<u64>, u64>) {
let generations = {
let mut keys: Vec<_> = sizes.keys().collect();
keys.sort_unstable();
keys
};
println!("Generation | Size");
println!("-----------------");
for generation in generations {
let size = ByteSize::b(sizes[generation]);
let generation = match generation {
None => "NULL".to_string(),
Some(g) => g.to_string(),
};
println!("{:>10} | {}", generation, size.to_string_as(true));
}
}
pub async fn run(app: MononokeApp, _args: CommandArgs) -> Result<()> {
let common_args: MononokeSQLBlobGCArgs = app.args()?;
let max_parallelism: usize = common_args.scheduled_max;
let (sqlblob, shard_range) = utils::get_sqlblob_and_shard_range(&app).await?;
let scuba_sample_builder = app.environment().scuba_sample_builder.clone();
let shard_sizes: Vec<(usize, HashMap<Option<u64>, (u64, u64)>)> = stream::iter(shard_range)
.map(|shard| {
sqlblob
.get_chunk_sizes_by_generation(shard)
.map_ok(move |gen_to_size| (shard, gen_to_size))
})
.buffer_unordered(max_parallelism)
.try_collect::<Vec<(usize, HashMap<Option<u64>, (u64, u64)>)>>()
.await?;
let mut size_summary = HashMap::new();
for (shard, sizes) in shard_sizes {
for (generation, (size, chunk_id_count)) in sizes.into_iter() {
let mut sample = scuba_sample_builder.clone();
sample.add("shard", shard);
sample.add_opt("generation", generation);
sample.add("size", size);
sample.add("chunk_id_count", chunk_id_count);
sample.log();
*size_summary.entry(generation).or_insert(0u64) += size;
}
}
if scuba_sample_builder.is_discard() {
print_sizes(&size_summary);
}
Ok(())
}
| print_sizes | identifier_name |
generation_size.rs | /*
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This software may be used and distributed according to the terms of the
* GNU General Public License version 2.
*/
use std::collections::HashMap;
use crate::utils;
use crate::MononokeSQLBlobGCArgs;
use anyhow::Result;
use bytesize::ByteSize;
use clap::Parser;
use futures::stream::{self, StreamExt, TryStreamExt};
use futures::TryFutureExt;
use mononoke_app::MononokeApp;
/// measure generation sizes
#[derive(Parser)]
pub struct CommandArgs {}
fn print_sizes(sizes: &HashMap<Option<u64>, u64>) {
let generations = {
let mut keys: Vec<_> = sizes.keys().collect();
keys.sort_unstable();
keys
};
println!("Generation | Size");
println!("-----------------");
for generation in generations {
let size = ByteSize::b(sizes[generation]);
let generation = match generation {
None => "NULL".to_string(),
Some(g) => g.to_string(),
};
println!("{:>10} | {}", generation, size.to_string_as(true));
}
}
pub async fn run(app: MononokeApp, _args: CommandArgs) -> Result<()> | for (shard, sizes) in shard_sizes {
for (generation, (size, chunk_id_count)) in sizes.into_iter() {
let mut sample = scuba_sample_builder.clone();
sample.add("shard", shard);
sample.add_opt("generation", generation);
sample.add("size", size);
sample.add("chunk_id_count", chunk_id_count);
sample.log();
*size_summary.entry(generation).or_insert(0u64) += size;
}
}
if scuba_sample_builder.is_discard() {
print_sizes(&size_summary);
}
Ok(())
}
| {
let common_args: MononokeSQLBlobGCArgs = app.args()?;
let max_parallelism: usize = common_args.scheduled_max;
let (sqlblob, shard_range) = utils::get_sqlblob_and_shard_range(&app).await?;
let scuba_sample_builder = app.environment().scuba_sample_builder.clone();
let shard_sizes: Vec<(usize, HashMap<Option<u64>, (u64, u64)>)> = stream::iter(shard_range)
.map(|shard| {
sqlblob
.get_chunk_sizes_by_generation(shard)
.map_ok(move |gen_to_size| (shard, gen_to_size))
})
.buffer_unordered(max_parallelism)
.try_collect::<Vec<(usize, HashMap<Option<u64>, (u64, u64)>)>>()
.await?;
let mut size_summary = HashMap::new(); | identifier_body |
generation_size.rs | /*
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This software may be used and distributed according to the terms of the
* GNU General Public License version 2.
*/
use std::collections::HashMap;
use crate::utils;
use crate::MononokeSQLBlobGCArgs;
use anyhow::Result;
use bytesize::ByteSize;
use clap::Parser;
use futures::stream::{self, StreamExt, TryStreamExt};
use futures::TryFutureExt;
use mononoke_app::MononokeApp;
/// measure generation sizes
#[derive(Parser)]
pub struct CommandArgs {}
fn print_sizes(sizes: &HashMap<Option<u64>, u64>) {
let generations = {
let mut keys: Vec<_> = sizes.keys().collect();
keys.sort_unstable();
keys
};
println!("Generation | Size");
println!("-----------------");
for generation in generations {
let size = ByteSize::b(sizes[generation]);
let generation = match generation {
None => "NULL".to_string(), | };
println!("{:>10} | {}", generation, size.to_string_as(true));
}
}
pub async fn run(app: MononokeApp, _args: CommandArgs) -> Result<()> {
let common_args: MononokeSQLBlobGCArgs = app.args()?;
let max_parallelism: usize = common_args.scheduled_max;
let (sqlblob, shard_range) = utils::get_sqlblob_and_shard_range(&app).await?;
let scuba_sample_builder = app.environment().scuba_sample_builder.clone();
let shard_sizes: Vec<(usize, HashMap<Option<u64>, (u64, u64)>)> = stream::iter(shard_range)
.map(|shard| {
sqlblob
.get_chunk_sizes_by_generation(shard)
.map_ok(move |gen_to_size| (shard, gen_to_size))
})
.buffer_unordered(max_parallelism)
.try_collect::<Vec<(usize, HashMap<Option<u64>, (u64, u64)>)>>()
.await?;
let mut size_summary = HashMap::new();
for (shard, sizes) in shard_sizes {
for (generation, (size, chunk_id_count)) in sizes.into_iter() {
let mut sample = scuba_sample_builder.clone();
sample.add("shard", shard);
sample.add_opt("generation", generation);
sample.add("size", size);
sample.add("chunk_id_count", chunk_id_count);
sample.log();
*size_summary.entry(generation).or_insert(0u64) += size;
}
}
if scuba_sample_builder.is_discard() {
print_sizes(&size_summary);
}
Ok(())
} | Some(g) => g.to_string(), | random_line_split |
ast_util.rs | BiLe => "<=",
BiNe => "!=",
BiGe => ">=",
BiGt => ">"
}
}
pub fn lazy_binop(b: BinOp) -> bool {
match b {
BiAnd => true,
BiOr => true,
_ => false
}
}
pub fn is_shift_binop(b: BinOp) -> bool {
match b {
BiShl => true,
BiShr => true,
_ => false
}
}
pub fn unop_to_str(op: UnOp) -> &'static str {
match op {
UnBox => "box(GC) ",
UnUniq => "box() ",
UnDeref => "*",
UnNot => "!",
UnNeg => "-",
}
}
pub fn is_path(e: Gc<Expr>) -> bool {
return match e.node { ExprPath(_) => true, _ => false };
}
// Get a string representation of a signed int type, with its value.
// We want to avoid "45int" and "-3int" in favor of "45" and "-3"
pub fn int_ty_to_str(t: IntTy, val: Option<i64>) -> String {
let s = match t {
TyI if val.is_some() => "i",
TyI => "int",
TyI8 => "i8",
TyI16 => "i16",
TyI32 => "i32",
TyI64 => "i64"
};
match val {
// cast to a u64 so we can correctly print INT64_MIN. All integral types
// are parsed as u64, so we wouldn't want to print an extra negative
// sign.
Some(n) => format!("{}{}", n as u64, s),
None => s.to_string()
}
}
pub fn int_ty_max(t: IntTy) -> u64 {
match t {
TyI8 => 0x80u64,
TyI16 => 0x8000u64,
TyI | TyI32 => 0x80000000u64, // actually ni about TyI
TyI64 => 0x8000000000000000u64
}
}
// Get a string representation of an unsigned int type, with its value.
// We want to avoid "42uint" in favor of "42u"
pub fn uint_ty_to_str(t: UintTy, val: Option<u64>) -> String {
let s = match t {
TyU if val.is_some() => "u",
TyU => "uint",
TyU8 => "u8",
TyU16 => "u16",
TyU32 => "u32",
TyU64 => "u64"
};
match val {
Some(n) => format!("{}{}", n, s),
None => s.to_string()
}
}
pub fn uint_ty_max(t: UintTy) -> u64 {
match t {
TyU8 => 0xffu64,
TyU16 => 0xffffu64,
TyU | TyU32 => 0xffffffffu64, // actually ni about TyU
TyU64 => 0xffffffffffffffffu64
}
}
pub fn float_ty_to_str(t: FloatTy) -> String {
match t {
TyF32 => "f32".to_string(),
TyF64 => "f64".to_string(),
}
}
pub fn is_call_expr(e: Gc<Expr>) -> bool {
match e.node { ExprCall(..) => true, _ => false }
}
pub fn block_from_expr(e: Gc<Expr>) -> P<Block> {
P(Block {
view_items: Vec::new(),
stmts: Vec::new(),
expr: Some(e),
id: e.id,
rules: DefaultBlock,
span: e.span
})
}
pub fn ident_to_path(s: Span, identifier: Ident) -> Path {
ast::Path {
span: s,
global: false,
segments: vec!(
ast::PathSegment {
identifier: identifier,
lifetimes: Vec::new(),
types: OwnedSlice::empty(),
}
),
}
}
pub fn ident_to_pat(id: NodeId, s: Span, i: Ident) -> Gc<Pat> {
box(GC) ast::Pat { id: id,
node: PatIdent(BindByValue(MutImmutable), ident_to_path(s, i), None),
span: s }
}
pub fn name_to_dummy_lifetime(name: Name) -> Lifetime {
Lifetime { id: DUMMY_NODE_ID,
span: codemap::DUMMY_SP,
name: name }
}
pub fn is_unguarded(a: &Arm) -> bool {
match a.guard {
None => true,
_ => false
}
}
pub fn unguarded_pat(a: &Arm) -> Option<Vec<Gc<Pat>>> |
/// Generate a "pretty" name for an `impl` from its type and trait.
/// This is designed so that symbols of `impl`'d methods give some
/// hint of where they came from, (previously they would all just be
/// listed as `__extensions__::method_name::hash`, with no indication
/// of the type).
pub fn impl_pretty_name(trait_ref: &Option<TraitRef>, ty: &Ty) -> Ident {
let mut pretty = pprust::ty_to_str(ty);
match *trait_ref {
Some(ref trait_ref) => {
pretty.push_char('.');
pretty.push_str(pprust::path_to_str(&trait_ref.path).as_slice());
}
None => {}
}
token::gensym_ident(pretty.as_slice())
}
pub fn public_methods(ms: Vec<Gc<Method>> ) -> Vec<Gc<Method>> {
ms.move_iter().filter(|m| {
match m.vis {
Public => true,
_ => false
}
}).collect()
}
// extract a TypeMethod from a TraitMethod. if the TraitMethod is
// a default, pull out the useful fields to make a TypeMethod
pub fn trait_method_to_ty_method(method: &TraitMethod) -> TypeMethod {
match *method {
Required(ref m) => (*m).clone(),
Provided(ref m) => {
TypeMethod {
ident: m.ident,
attrs: m.attrs.clone(),
fn_style: m.fn_style,
decl: m.decl,
generics: m.generics.clone(),
explicit_self: m.explicit_self,
id: m.id,
span: m.span,
vis: m.vis,
}
}
}
}
pub fn split_trait_methods(trait_methods: &[TraitMethod])
-> (Vec<TypeMethod>, Vec<Gc<Method>> ) {
let mut reqd = Vec::new();
let mut provd = Vec::new();
for trt_method in trait_methods.iter() {
match *trt_method {
Required(ref tm) => reqd.push((*tm).clone()),
Provided(m) => provd.push(m)
}
};
(reqd, provd)
}
pub fn struct_field_visibility(field: ast::StructField) -> Visibility {
match field.node.kind {
ast::NamedField(_, v) | ast::UnnamedField(v) => v
}
}
/// Maps a binary operator to its precedence
pub fn operator_prec(op: ast::BinOp) -> uint {
match op {
// 'as' sits here with 12
BiMul | BiDiv | BiRem => 11u,
BiAdd | BiSub => 10u,
BiShl | BiShr => 9u,
BiBitAnd => 8u,
BiBitXor => 7u,
BiBitOr => 6u,
BiLt | BiLe | BiGe | BiGt => 4u,
BiEq | BiNe => 3u,
BiAnd => 2u,
BiOr => 1u
}
}
/// Precedence of the `as` operator, which is a binary operator
/// not appearing in the prior table.
pub static as_prec: uint = 12u;
pub fn empty_generics() -> Generics {
Generics {lifetimes: Vec::new(),
ty_params: OwnedSlice::empty()}
}
// ______________________________________________________________________
// Enumerating the IDs which appear in an AST
#[deriving(Encodable, Decodable)]
pub struct IdRange {
pub min: NodeId,
pub max: NodeId,
}
impl IdRange {
pub fn max() -> IdRange {
IdRange {
min: u32::MAX,
max: u32::MIN,
}
}
pub fn empty(&self) -> bool {
self.min >= self.max
}
pub fn add(&mut self, id: NodeId) {
self.min = cmp::min(self.min, id);
self.max = cmp::max(self.max, id + 1);
}
}
pub trait IdVisitingOperation {
fn visit_id(&self, node_id: NodeId);
}
pub struct IdVisitor<'a, O> {
pub operation: &'a O,
pub pass_through_items: bool,
pub visited_outermost: bool,
}
impl<'a, O: IdVisitingOperation> IdVisitor<'a, O> {
fn visit_generics_helper(&self, generics: &Generics) {
for type_parameter in generics.ty_params.iter() {
self.operation.visit_id(type_parameter.id)
}
for lifetime in generics.lifetimes.iter() {
self.operation.visit_id(lifetime.id)
}
}
}
impl<'a, O: IdVisitingOperation> Visitor<()> for IdVisitor<'a, O> {
fn visit_mod(&mut self,
module: &Mod,
_: Span,
node_id: NodeId,
env: ()) {
self.operation.visit_id(node_id);
visit::walk_mod(self, module, env)
}
fn visit_view_item(&mut self, view_item: &ViewItem, env: ()) {
if!self.pass_through_items {
if self.visited_outermost {
return;
} else {
self.visited_outermost = true;
}
}
match view_item.node {
ViewItemExternCrate(_, _, node_id) => {
self.operation.visit_id(node_id)
}
ViewItemUse(ref view_path) => {
match view_path.node {
ViewPathSimple(_, _, node_id) |
ViewPathGlob(_, node_id) => {
self.operation.visit_id(node_id)
}
ViewPathList(_, ref paths, node_id) => {
self.operation.visit_id(node_id);
for path in paths.iter() {
self.operation.visit_id(path.node.id)
}
}
}
}
}
visit::walk_view_item(self, view_item, env);
self.visited_outermost = false;
}
fn visit_foreign_item(&mut self, foreign_item: &ForeignItem, env: ()) {
self.operation.visit_id(foreign_item.id);
visit::walk_foreign_item(self, foreign_item, env)
}
fn visit_item(&mut self, item: &Item, env: ()) {
if!self.pass_through_items {
if self.visited_outermost {
return
} else {
self.visited_outermost = true
}
}
self.operation.visit_id(item.id);
match item.node {
ItemEnum(ref enum_definition, _) => {
for variant in enum_definition.variants.iter() {
self.operation.visit_id(variant.node.id)
}
}
_ => {}
}
visit::walk_item(self, item, env);
self.visited_outermost = false
}
fn visit_local(&mut self, local: &Local, env: ()) {
self.operation.visit_id(local.id);
visit::walk_local(self, local, env)
}
fn visit_block(&mut self, block: &Block, env: ()) {
self.operation.visit_id(block.id);
visit::walk_block(self, block, env)
}
fn visit_stmt(&mut self, statement: &Stmt, env: ()) {
self.operation.visit_id(ast_util::stmt_id(statement));
visit::walk_stmt(self, statement, env)
}
fn visit_pat(&mut self, pattern: &Pat, env: ()) {
self.operation.visit_id(pattern.id);
visit::walk_pat(self, pattern, env)
}
fn visit_expr(&mut self, expression: &Expr, env: ()) {
self.operation.visit_id(expression.id);
visit::walk_expr(self, expression, env)
}
fn visit_ty(&mut self, typ: &Ty, env: ()) {
self.operation.visit_id(typ.id);
match typ.node {
TyPath(_, _, id) => self.operation.visit_id(id),
_ => {}
}
visit::walk_ty(self, typ, env)
}
fn visit_generics(&mut self, generics: &Generics, env: ()) {
self.visit_generics_helper(generics);
visit::walk_generics(self, generics, env)
}
fn visit_fn(&mut self,
function_kind: &visit::FnKind,
function_declaration: &FnDecl,
block: &Block,
span: Span,
node_id: NodeId,
env: ()) {
if!self.pass_through_items {
match *function_kind {
visit::FkMethod(..) if self.visited_outermost => return,
visit::FkMethod(..) => self.visited_outermost = true,
_ => {}
}
}
self.operation.visit_id(node_id);
match *function_kind {
visit::FkItemFn(_, generics, _, _) |
visit::FkMethod(_, generics, _) => {
self.visit_generics_helper(generics)
}
visit::FkFnBlock => {}
}
for argument in function_declaration.inputs.iter() {
self.operation.visit_id(argument.id)
}
visit::walk_fn(self,
function_kind,
function_declaration,
block,
span,
env);
if!self.pass_through_items {
match *function_kind {
visit::FkMethod(..) => self.visited_outermost = false,
_ => {}
}
}
}
fn visit_struct_field(&mut self, struct_field: &StructField, env: ()) {
self.operation.visit_id(struct_field.node.id);
visit::walk_struct_field(self, struct_field, env)
}
fn visit_struct_def(&mut self,
struct_def: &StructDef,
_: ast::Ident,
_: &ast::Generics,
id: NodeId,
_: ()) {
self.operation.visit_id(id);
struct_def.ctor_id.map(|ctor_id| self.operation.visit_id(ctor_id));
visit::walk_struct_def(self, struct_def, ());
}
fn visit_trait_method(&mut self, tm: &ast::TraitMethod, _: ()) {
match *tm {
ast::Required(ref m) => self.operation.visit_id(m.id),
ast::Provided(ref m) => self.operation.visit_id(m.id),
}
visit::walk_trait_method(self, tm, ());
}
}
pub fn visit_ids_for_inlined_item<O: IdVisitingOperation>(item: &InlinedItem,
operation: &O) {
let mut id_visitor = IdVisitor {
operation: operation,
pass_through_items: true,
visited_outermost: false,
};
visit::walk_inlined_item(&mut id_visitor, item, ());
}
struct IdRangeComputingVisitor {
result: Cell<IdRange>,
}
impl IdVisitingOperation for IdRangeComputingVisitor {
fn visit_id(&self, id: NodeId) {
let mut id_range = self.result.get();
id_range.add(id);
self.result.set(id_range)
}
}
pub fn compute_id_range_for_inlined_item(item: &InlinedItem) -> IdRange {
let visitor = IdRangeComputingVisitor {
result: Cell::new(IdRange::max())
};
visit_ids_for_inlined_item(item, &visitor);
visitor.result.get()
}
pub fn compute_id_range_for_fn_body(fk: &visit::FnKind,
decl: &FnDecl,
body: &Block,
sp: Span,
id: NodeId)
-> IdRange
{
/*!
* Computes the id range for a single fn body,
* ignoring nested items.
*/
let visitor = IdRangeComputingVisitor {
result: Cell::new(IdRange::max())
};
let mut id_visitor = IdVisitor {
operation: &visitor,
pass_through_items: false,
visited_outermost: false,
};
id_visitor.visit_fn(fk, decl, body, sp, id, ());
visitor.result.get()
}
pub fn is_item_impl(item: Gc<ast::Item>) -> bool {
match item.node {
ItemImpl(..) => true,
_ => false
}
}
pub fn walk_pat(pat: &Pat, it: |&Pat| -> bool) -> bool {
if!it(pat) {
return false;
}
match pat.node {
PatIdent(_, _, Some(ref p)) => walk_pat(&**p, it),
PatStruct(_, ref fields, _) => {
fields.iter().advance(|f| walk_pat(&*f.pat, |p| it(p)))
}
PatEnum(_, Some(ref s)) | PatTup(ref s) => {
s.iter().advance(|p| walk_pat(&**p, |p| it(p)))
}
PatBox(ref s) | PatRegion(ref s) => {
walk_pat(&**s, it)
}
PatVec(ref before, ref slice, ref after) => {
before.iter().advance(|p| walk_pat(&**p, |p| it(p))) &&
slice.iter().advance(|p| walk_pat(&**p, |p| it(p))) &&
after.iter().advance(|p| walk_pat(&**p, |p| it(p)))
}
PatMac(_) => fail!("attempted to analyze unexpanded pattern"),
PatWild | PatWildMulti | PatLit(_) | PatRange(_, _) | PatIdent(_, _, _) |
PatEnum(_, _) => {
true
}
}
}
pub trait EachViewItem {
fn each_view_item(&self, f: |&ast::ViewItem| -> bool) -> bool;
}
struct EachViewItemData<'a> {
callback: |&ast::ViewItem|: 'a -> bool,
}
impl<'a> Visitor<()> for EachViewItemData<'a> {
fn visit_view_item(&mut self, view_item: &ast::ViewItem, _: ()) {
let _ = (self.callback)(view_item);
}
}
impl EachViewItem for ast::Crate {
fn each_view_item(&self, | {
if is_unguarded(a) {
Some(/* FIXME (#2543) */ a.pats.clone())
} else {
None
}
} | identifier_body |
ast_util.rs | BiLe => "<=",
BiNe => "!=",
BiGe => ">=",
BiGt => ">"
}
}
pub fn lazy_binop(b: BinOp) -> bool {
match b {
BiAnd => true,
BiOr => true,
_ => false
}
}
pub fn is_shift_binop(b: BinOp) -> bool {
match b {
BiShl => true,
BiShr => true,
_ => false
}
}
pub fn unop_to_str(op: UnOp) -> &'static str {
match op {
UnBox => "box(GC) ",
UnUniq => "box() ",
UnDeref => "*",
UnNot => "!",
UnNeg => "-",
}
}
pub fn is_path(e: Gc<Expr>) -> bool {
return match e.node { ExprPath(_) => true, _ => false };
}
// Get a string representation of a signed int type, with its value.
// We want to avoid "45int" and "-3int" in favor of "45" and "-3"
pub fn int_ty_to_str(t: IntTy, val: Option<i64>) -> String {
let s = match t {
TyI if val.is_some() => "i",
TyI => "int",
TyI8 => "i8",
TyI16 => "i16",
TyI32 => "i32",
TyI64 => "i64"
};
match val {
// cast to a u64 so we can correctly print INT64_MIN. All integral types
// are parsed as u64, so we wouldn't want to print an extra negative
// sign.
Some(n) => format!("{}{}", n as u64, s),
None => s.to_string()
}
}
pub fn int_ty_max(t: IntTy) -> u64 {
match t {
TyI8 => 0x80u64,
TyI16 => 0x8000u64,
TyI | TyI32 => 0x80000000u64, // actually ni about TyI
TyI64 => 0x8000000000000000u64
}
}
// Get a string representation of an unsigned int type, with its value.
// We want to avoid "42uint" in favor of "42u"
pub fn uint_ty_to_str(t: UintTy, val: Option<u64>) -> String {
let s = match t {
TyU if val.is_some() => "u",
TyU => "uint",
TyU8 => "u8",
TyU16 => "u16",
TyU32 => "u32",
TyU64 => "u64"
};
match val {
Some(n) => format!("{}{}", n, s),
None => s.to_string()
}
}
pub fn uint_ty_max(t: UintTy) -> u64 {
match t {
TyU8 => 0xffu64,
TyU16 => 0xffffu64,
TyU | TyU32 => 0xffffffffu64, // actually ni about TyU
TyU64 => 0xffffffffffffffffu64
}
}
pub fn float_ty_to_str(t: FloatTy) -> String {
match t {
TyF32 => "f32".to_string(),
TyF64 => "f64".to_string(),
}
}
pub fn is_call_expr(e: Gc<Expr>) -> bool {
match e.node { ExprCall(..) => true, _ => false }
}
pub fn block_from_expr(e: Gc<Expr>) -> P<Block> {
P(Block {
view_items: Vec::new(),
stmts: Vec::new(),
expr: Some(e),
id: e.id,
rules: DefaultBlock,
span: e.span
})
}
pub fn ident_to_path(s: Span, identifier: Ident) -> Path {
ast::Path {
span: s,
global: false,
segments: vec!(
ast::PathSegment {
identifier: identifier,
lifetimes: Vec::new(),
types: OwnedSlice::empty(),
}
),
}
}
pub fn ident_to_pat(id: NodeId, s: Span, i: Ident) -> Gc<Pat> {
box(GC) ast::Pat { id: id,
node: PatIdent(BindByValue(MutImmutable), ident_to_path(s, i), None),
span: s }
}
pub fn name_to_dummy_lifetime(name: Name) -> Lifetime {
Lifetime { id: DUMMY_NODE_ID,
span: codemap::DUMMY_SP,
name: name }
}
pub fn is_unguarded(a: &Arm) -> bool {
match a.guard {
None => true,
_ => false
}
}
pub fn unguarded_pat(a: &Arm) -> Option<Vec<Gc<Pat>>> {
if is_unguarded(a) {
Some(/* FIXME (#2543) */ a.pats.clone())
} else {
None
}
}
/// Generate a "pretty" name for an `impl` from its type and trait.
/// This is designed so that symbols of `impl`'d methods give some
/// hint of where they came from, (previously they would all just be
/// listed as `__extensions__::method_name::hash`, with no indication
/// of the type).
pub fn impl_pretty_name(trait_ref: &Option<TraitRef>, ty: &Ty) -> Ident {
let mut pretty = pprust::ty_to_str(ty);
match *trait_ref {
Some(ref trait_ref) => {
pretty.push_char('.');
pretty.push_str(pprust::path_to_str(&trait_ref.path).as_slice());
}
None => {}
}
token::gensym_ident(pretty.as_slice())
}
pub fn public_methods(ms: Vec<Gc<Method>> ) -> Vec<Gc<Method>> {
ms.move_iter().filter(|m| {
match m.vis {
Public => true,
_ => false
}
}).collect()
}
// extract a TypeMethod from a TraitMethod. if the TraitMethod is
// a default, pull out the useful fields to make a TypeMethod
pub fn trait_method_to_ty_method(method: &TraitMethod) -> TypeMethod {
match *method {
Required(ref m) => (*m).clone(),
Provided(ref m) => {
TypeMethod {
ident: m.ident,
attrs: m.attrs.clone(),
fn_style: m.fn_style,
decl: m.decl,
generics: m.generics.clone(),
explicit_self: m.explicit_self,
id: m.id,
span: m.span,
vis: m.vis,
}
}
}
}
pub fn split_trait_methods(trait_methods: &[TraitMethod])
-> (Vec<TypeMethod>, Vec<Gc<Method>> ) {
let mut reqd = Vec::new();
let mut provd = Vec::new();
for trt_method in trait_methods.iter() {
match *trt_method {
Required(ref tm) => reqd.push((*tm).clone()),
Provided(m) => provd.push(m)
}
};
(reqd, provd)
}
pub fn struct_field_visibility(field: ast::StructField) -> Visibility {
match field.node.kind {
ast::NamedField(_, v) | ast::UnnamedField(v) => v
}
}
/// Maps a binary operator to its precedence
pub fn operator_prec(op: ast::BinOp) -> uint {
match op {
// 'as' sits here with 12
BiMul | BiDiv | BiRem => 11u,
BiAdd | BiSub => 10u,
BiShl | BiShr => 9u,
BiBitAnd => 8u,
BiBitXor => 7u,
BiBitOr => 6u,
BiLt | BiLe | BiGe | BiGt => 4u,
BiEq | BiNe => 3u,
BiAnd => 2u,
BiOr => 1u
}
}
/// Precedence of the `as` operator, which is a binary operator
/// not appearing in the prior table.
pub static as_prec: uint = 12u;
pub fn empty_generics() -> Generics {
Generics {lifetimes: Vec::new(),
ty_params: OwnedSlice::empty()}
}
// ______________________________________________________________________
// Enumerating the IDs which appear in an AST
#[deriving(Encodable, Decodable)]
pub struct IdRange {
pub min: NodeId,
pub max: NodeId,
}
impl IdRange {
pub fn max() -> IdRange {
IdRange {
min: u32::MAX,
max: u32::MIN,
}
}
pub fn empty(&self) -> bool {
self.min >= self.max
}
pub fn add(&mut self, id: NodeId) {
self.min = cmp::min(self.min, id);
self.max = cmp::max(self.max, id + 1);
}
}
pub trait IdVisitingOperation {
fn visit_id(&self, node_id: NodeId);
}
pub struct IdVisitor<'a, O> {
pub operation: &'a O,
pub pass_through_items: bool,
pub visited_outermost: bool,
}
impl<'a, O: IdVisitingOperation> IdVisitor<'a, O> {
fn visit_generics_helper(&self, generics: &Generics) {
for type_parameter in generics.ty_params.iter() {
self.operation.visit_id(type_parameter.id)
}
for lifetime in generics.lifetimes.iter() {
self.operation.visit_id(lifetime.id)
}
}
}
impl<'a, O: IdVisitingOperation> Visitor<()> for IdVisitor<'a, O> {
fn visit_mod(&mut self,
module: &Mod,
_: Span,
node_id: NodeId,
env: ()) {
self.operation.visit_id(node_id);
visit::walk_mod(self, module, env)
}
fn visit_view_item(&mut self, view_item: &ViewItem, env: ()) {
if!self.pass_through_items {
if self.visited_outermost {
return;
} else {
self.visited_outermost = true;
}
}
match view_item.node {
ViewItemExternCrate(_, _, node_id) => {
self.operation.visit_id(node_id)
}
ViewItemUse(ref view_path) => {
match view_path.node {
ViewPathSimple(_, _, node_id) |
ViewPathGlob(_, node_id) => {
self.operation.visit_id(node_id)
}
ViewPathList(_, ref paths, node_id) => {
self.operation.visit_id(node_id);
for path in paths.iter() {
self.operation.visit_id(path.node.id)
}
}
}
}
}
visit::walk_view_item(self, view_item, env);
self.visited_outermost = false;
}
fn visit_foreign_item(&mut self, foreign_item: &ForeignItem, env: ()) {
self.operation.visit_id(foreign_item.id);
visit::walk_foreign_item(self, foreign_item, env)
}
fn visit_item(&mut self, item: &Item, env: ()) {
if!self.pass_through_items {
if self.visited_outermost {
return
} else {
self.visited_outermost = true
}
}
self.operation.visit_id(item.id);
match item.node {
ItemEnum(ref enum_definition, _) => {
for variant in enum_definition.variants.iter() {
self.operation.visit_id(variant.node.id)
}
}
_ => {}
}
visit::walk_item(self, item, env);
self.visited_outermost = false
}
fn visit_local(&mut self, local: &Local, env: ()) {
self.operation.visit_id(local.id);
visit::walk_local(self, local, env)
}
fn visit_block(&mut self, block: &Block, env: ()) {
self.operation.visit_id(block.id);
visit::walk_block(self, block, env)
}
fn visit_stmt(&mut self, statement: &Stmt, env: ()) {
self.operation.visit_id(ast_util::stmt_id(statement));
visit::walk_stmt(self, statement, env)
}
fn visit_pat(&mut self, pattern: &Pat, env: ()) {
self.operation.visit_id(pattern.id);
visit::walk_pat(self, pattern, env)
}
fn visit_expr(&mut self, expression: &Expr, env: ()) {
self.operation.visit_id(expression.id);
visit::walk_expr(self, expression, env)
}
fn visit_ty(&mut self, typ: &Ty, env: ()) {
self.operation.visit_id(typ.id);
match typ.node {
TyPath(_, _, id) => self.operation.visit_id(id),
_ => {}
}
visit::walk_ty(self, typ, env)
}
fn visit_generics(&mut self, generics: &Generics, env: ()) {
self.visit_generics_helper(generics);
visit::walk_generics(self, generics, env)
}
fn visit_fn(&mut self,
function_kind: &visit::FnKind,
function_declaration: &FnDecl,
block: &Block,
span: Span,
node_id: NodeId,
env: ()) {
if!self.pass_through_items {
match *function_kind {
visit::FkMethod(..) if self.visited_outermost => return,
visit::FkMethod(..) => self.visited_outermost = true,
_ => {}
}
}
self.operation.visit_id(node_id);
match *function_kind {
visit::FkItemFn(_, generics, _, _) |
visit::FkMethod(_, generics, _) => {
self.visit_generics_helper(generics)
}
visit::FkFnBlock => {}
}
for argument in function_declaration.inputs.iter() {
self.operation.visit_id(argument.id)
}
visit::walk_fn(self,
function_kind,
function_declaration,
block,
span,
env);
if!self.pass_through_items {
match *function_kind {
visit::FkMethod(..) => self.visited_outermost = false,
_ => {}
}
}
}
fn visit_struct_field(&mut self, struct_field: &StructField, env: ()) {
self.operation.visit_id(struct_field.node.id);
visit::walk_struct_field(self, struct_field, env)
}
fn visit_struct_def(&mut self,
struct_def: &StructDef,
_: ast::Ident,
_: &ast::Generics,
id: NodeId,
_: ()) {
self.operation.visit_id(id);
struct_def.ctor_id.map(|ctor_id| self.operation.visit_id(ctor_id));
visit::walk_struct_def(self, struct_def, ());
}
fn visit_trait_method(&mut self, tm: &ast::TraitMethod, _: ()) {
match *tm {
ast::Required(ref m) => self.operation.visit_id(m.id),
ast::Provided(ref m) => self.operation.visit_id(m.id),
}
visit::walk_trait_method(self, tm, ());
}
}
pub fn visit_ids_for_inlined_item<O: IdVisitingOperation>(item: &InlinedItem,
operation: &O) {
let mut id_visitor = IdVisitor {
operation: operation,
pass_through_items: true,
visited_outermost: false,
};
visit::walk_inlined_item(&mut id_visitor, item, ());
}
struct IdRangeComputingVisitor { | }
impl IdVisitingOperation for IdRangeComputingVisitor {
fn visit_id(&self, id: NodeId) {
let mut id_range = self.result.get();
id_range.add(id);
self.result.set(id_range)
}
}
pub fn compute_id_range_for_inlined_item(item: &InlinedItem) -> IdRange {
let visitor = IdRangeComputingVisitor {
result: Cell::new(IdRange::max())
};
visit_ids_for_inlined_item(item, &visitor);
visitor.result.get()
}
pub fn compute_id_range_for_fn_body(fk: &visit::FnKind,
decl: &FnDecl,
body: &Block,
sp: Span,
id: NodeId)
-> IdRange
{
/*!
* Computes the id range for a single fn body,
* ignoring nested items.
*/
let visitor = IdRangeComputingVisitor {
result: Cell::new(IdRange::max())
};
let mut id_visitor = IdVisitor {
operation: &visitor,
pass_through_items: false,
visited_outermost: false,
};
id_visitor.visit_fn(fk, decl, body, sp, id, ());
visitor.result.get()
}
pub fn is_item_impl(item: Gc<ast::Item>) -> bool {
match item.node {
ItemImpl(..) => true,
_ => false
}
}
pub fn walk_pat(pat: &Pat, it: |&Pat| -> bool) -> bool {
if!it(pat) {
return false;
}
match pat.node {
PatIdent(_, _, Some(ref p)) => walk_pat(&**p, it),
PatStruct(_, ref fields, _) => {
fields.iter().advance(|f| walk_pat(&*f.pat, |p| it(p)))
}
PatEnum(_, Some(ref s)) | PatTup(ref s) => {
s.iter().advance(|p| walk_pat(&**p, |p| it(p)))
}
PatBox(ref s) | PatRegion(ref s) => {
walk_pat(&**s, it)
}
PatVec(ref before, ref slice, ref after) => {
before.iter().advance(|p| walk_pat(&**p, |p| it(p))) &&
slice.iter().advance(|p| walk_pat(&**p, |p| it(p))) &&
after.iter().advance(|p| walk_pat(&**p, |p| it(p)))
}
PatMac(_) => fail!("attempted to analyze unexpanded pattern"),
PatWild | PatWildMulti | PatLit(_) | PatRange(_, _) | PatIdent(_, _, _) |
PatEnum(_, _) => {
true
}
}
}
pub trait EachViewItem {
fn each_view_item(&self, f: |&ast::ViewItem| -> bool) -> bool;
}
struct EachViewItemData<'a> {
callback: |&ast::ViewItem|: 'a -> bool,
}
impl<'a> Visitor<()> for EachViewItemData<'a> {
fn visit_view_item(&mut self, view_item: &ast::ViewItem, _: ()) {
let _ = (self.callback)(view_item);
}
}
impl EachViewItem for ast::Crate {
fn each_view_item(&self, f: | result: Cell<IdRange>, | random_line_split |
ast_util.rs | BiLe => "<=",
BiNe => "!=",
BiGe => ">=",
BiGt => ">"
}
}
pub fn lazy_binop(b: BinOp) -> bool {
match b {
BiAnd => true,
BiOr => true,
_ => false
}
}
pub fn is_shift_binop(b: BinOp) -> bool {
match b {
BiShl => true,
BiShr => true,
_ => false
}
}
pub fn unop_to_str(op: UnOp) -> &'static str {
match op {
UnBox => "box(GC) ",
UnUniq => "box() ",
UnDeref => "*",
UnNot => "!",
UnNeg => "-",
}
}
pub fn is_path(e: Gc<Expr>) -> bool {
return match e.node { ExprPath(_) => true, _ => false };
}
// Get a string representation of a signed int type, with its value.
// We want to avoid "45int" and "-3int" in favor of "45" and "-3"
pub fn int_ty_to_str(t: IntTy, val: Option<i64>) -> String {
let s = match t {
TyI if val.is_some() => "i",
TyI => "int",
TyI8 => "i8",
TyI16 => "i16",
TyI32 => "i32",
TyI64 => "i64"
};
match val {
// cast to a u64 so we can correctly print INT64_MIN. All integral types
// are parsed as u64, so we wouldn't want to print an extra negative
// sign.
Some(n) => format!("{}{}", n as u64, s),
None => s.to_string()
}
}
pub fn int_ty_max(t: IntTy) -> u64 {
match t {
TyI8 => 0x80u64,
TyI16 => 0x8000u64,
TyI | TyI32 => 0x80000000u64, // actually ni about TyI
TyI64 => 0x8000000000000000u64
}
}
// Get a string representation of an unsigned int type, with its value.
// We want to avoid "42uint" in favor of "42u"
pub fn uint_ty_to_str(t: UintTy, val: Option<u64>) -> String {
let s = match t {
TyU if val.is_some() => "u",
TyU => "uint",
TyU8 => "u8",
TyU16 => "u16",
TyU32 => "u32",
TyU64 => "u64"
};
match val {
Some(n) => format!("{}{}", n, s),
None => s.to_string()
}
}
pub fn uint_ty_max(t: UintTy) -> u64 {
match t {
TyU8 => 0xffu64,
TyU16 => 0xffffu64,
TyU | TyU32 => 0xffffffffu64, // actually ni about TyU
TyU64 => 0xffffffffffffffffu64
}
}
pub fn float_ty_to_str(t: FloatTy) -> String {
match t {
TyF32 => "f32".to_string(),
TyF64 => "f64".to_string(),
}
}
pub fn is_call_expr(e: Gc<Expr>) -> bool {
match e.node { ExprCall(..) => true, _ => false }
}
pub fn block_from_expr(e: Gc<Expr>) -> P<Block> {
P(Block {
view_items: Vec::new(),
stmts: Vec::new(),
expr: Some(e),
id: e.id,
rules: DefaultBlock,
span: e.span
})
}
pub fn ident_to_path(s: Span, identifier: Ident) -> Path {
ast::Path {
span: s,
global: false,
segments: vec!(
ast::PathSegment {
identifier: identifier,
lifetimes: Vec::new(),
types: OwnedSlice::empty(),
}
),
}
}
pub fn ident_to_pat(id: NodeId, s: Span, i: Ident) -> Gc<Pat> {
box(GC) ast::Pat { id: id,
node: PatIdent(BindByValue(MutImmutable), ident_to_path(s, i), None),
span: s }
}
pub fn name_to_dummy_lifetime(name: Name) -> Lifetime {
Lifetime { id: DUMMY_NODE_ID,
span: codemap::DUMMY_SP,
name: name }
}
pub fn is_unguarded(a: &Arm) -> bool {
match a.guard {
None => true,
_ => false
}
}
pub fn unguarded_pat(a: &Arm) -> Option<Vec<Gc<Pat>>> {
if is_unguarded(a) {
Some(/* FIXME (#2543) */ a.pats.clone())
} else {
None
}
}
/// Generate a "pretty" name for an `impl` from its type and trait.
/// This is designed so that symbols of `impl`'d methods give some
/// hint of where they came from, (previously they would all just be
/// listed as `__extensions__::method_name::hash`, with no indication
/// of the type).
pub fn | (trait_ref: &Option<TraitRef>, ty: &Ty) -> Ident {
let mut pretty = pprust::ty_to_str(ty);
match *trait_ref {
Some(ref trait_ref) => {
pretty.push_char('.');
pretty.push_str(pprust::path_to_str(&trait_ref.path).as_slice());
}
None => {}
}
token::gensym_ident(pretty.as_slice())
}
pub fn public_methods(ms: Vec<Gc<Method>> ) -> Vec<Gc<Method>> {
ms.move_iter().filter(|m| {
match m.vis {
Public => true,
_ => false
}
}).collect()
}
// extract a TypeMethod from a TraitMethod. if the TraitMethod is
// a default, pull out the useful fields to make a TypeMethod
pub fn trait_method_to_ty_method(method: &TraitMethod) -> TypeMethod {
match *method {
Required(ref m) => (*m).clone(),
Provided(ref m) => {
TypeMethod {
ident: m.ident,
attrs: m.attrs.clone(),
fn_style: m.fn_style,
decl: m.decl,
generics: m.generics.clone(),
explicit_self: m.explicit_self,
id: m.id,
span: m.span,
vis: m.vis,
}
}
}
}
pub fn split_trait_methods(trait_methods: &[TraitMethod])
-> (Vec<TypeMethod>, Vec<Gc<Method>> ) {
let mut reqd = Vec::new();
let mut provd = Vec::new();
for trt_method in trait_methods.iter() {
match *trt_method {
Required(ref tm) => reqd.push((*tm).clone()),
Provided(m) => provd.push(m)
}
};
(reqd, provd)
}
pub fn struct_field_visibility(field: ast::StructField) -> Visibility {
match field.node.kind {
ast::NamedField(_, v) | ast::UnnamedField(v) => v
}
}
/// Maps a binary operator to its precedence
pub fn operator_prec(op: ast::BinOp) -> uint {
match op {
// 'as' sits here with 12
BiMul | BiDiv | BiRem => 11u,
BiAdd | BiSub => 10u,
BiShl | BiShr => 9u,
BiBitAnd => 8u,
BiBitXor => 7u,
BiBitOr => 6u,
BiLt | BiLe | BiGe | BiGt => 4u,
BiEq | BiNe => 3u,
BiAnd => 2u,
BiOr => 1u
}
}
/// Precedence of the `as` operator, which is a binary operator
/// not appearing in the prior table.
pub static as_prec: uint = 12u;
pub fn empty_generics() -> Generics {
Generics {lifetimes: Vec::new(),
ty_params: OwnedSlice::empty()}
}
// ______________________________________________________________________
// Enumerating the IDs which appear in an AST
#[deriving(Encodable, Decodable)]
pub struct IdRange {
pub min: NodeId,
pub max: NodeId,
}
impl IdRange {
pub fn max() -> IdRange {
IdRange {
min: u32::MAX,
max: u32::MIN,
}
}
pub fn empty(&self) -> bool {
self.min >= self.max
}
pub fn add(&mut self, id: NodeId) {
self.min = cmp::min(self.min, id);
self.max = cmp::max(self.max, id + 1);
}
}
pub trait IdVisitingOperation {
fn visit_id(&self, node_id: NodeId);
}
pub struct IdVisitor<'a, O> {
pub operation: &'a O,
pub pass_through_items: bool,
pub visited_outermost: bool,
}
impl<'a, O: IdVisitingOperation> IdVisitor<'a, O> {
fn visit_generics_helper(&self, generics: &Generics) {
for type_parameter in generics.ty_params.iter() {
self.operation.visit_id(type_parameter.id)
}
for lifetime in generics.lifetimes.iter() {
self.operation.visit_id(lifetime.id)
}
}
}
impl<'a, O: IdVisitingOperation> Visitor<()> for IdVisitor<'a, O> {
fn visit_mod(&mut self,
module: &Mod,
_: Span,
node_id: NodeId,
env: ()) {
self.operation.visit_id(node_id);
visit::walk_mod(self, module, env)
}
fn visit_view_item(&mut self, view_item: &ViewItem, env: ()) {
if!self.pass_through_items {
if self.visited_outermost {
return;
} else {
self.visited_outermost = true;
}
}
match view_item.node {
ViewItemExternCrate(_, _, node_id) => {
self.operation.visit_id(node_id)
}
ViewItemUse(ref view_path) => {
match view_path.node {
ViewPathSimple(_, _, node_id) |
ViewPathGlob(_, node_id) => {
self.operation.visit_id(node_id)
}
ViewPathList(_, ref paths, node_id) => {
self.operation.visit_id(node_id);
for path in paths.iter() {
self.operation.visit_id(path.node.id)
}
}
}
}
}
visit::walk_view_item(self, view_item, env);
self.visited_outermost = false;
}
fn visit_foreign_item(&mut self, foreign_item: &ForeignItem, env: ()) {
self.operation.visit_id(foreign_item.id);
visit::walk_foreign_item(self, foreign_item, env)
}
fn visit_item(&mut self, item: &Item, env: ()) {
if!self.pass_through_items {
if self.visited_outermost {
return
} else {
self.visited_outermost = true
}
}
self.operation.visit_id(item.id);
match item.node {
ItemEnum(ref enum_definition, _) => {
for variant in enum_definition.variants.iter() {
self.operation.visit_id(variant.node.id)
}
}
_ => {}
}
visit::walk_item(self, item, env);
self.visited_outermost = false
}
fn visit_local(&mut self, local: &Local, env: ()) {
self.operation.visit_id(local.id);
visit::walk_local(self, local, env)
}
fn visit_block(&mut self, block: &Block, env: ()) {
self.operation.visit_id(block.id);
visit::walk_block(self, block, env)
}
fn visit_stmt(&mut self, statement: &Stmt, env: ()) {
self.operation.visit_id(ast_util::stmt_id(statement));
visit::walk_stmt(self, statement, env)
}
fn visit_pat(&mut self, pattern: &Pat, env: ()) {
self.operation.visit_id(pattern.id);
visit::walk_pat(self, pattern, env)
}
fn visit_expr(&mut self, expression: &Expr, env: ()) {
self.operation.visit_id(expression.id);
visit::walk_expr(self, expression, env)
}
fn visit_ty(&mut self, typ: &Ty, env: ()) {
self.operation.visit_id(typ.id);
match typ.node {
TyPath(_, _, id) => self.operation.visit_id(id),
_ => {}
}
visit::walk_ty(self, typ, env)
}
fn visit_generics(&mut self, generics: &Generics, env: ()) {
self.visit_generics_helper(generics);
visit::walk_generics(self, generics, env)
}
fn visit_fn(&mut self,
function_kind: &visit::FnKind,
function_declaration: &FnDecl,
block: &Block,
span: Span,
node_id: NodeId,
env: ()) {
if!self.pass_through_items {
match *function_kind {
visit::FkMethod(..) if self.visited_outermost => return,
visit::FkMethod(..) => self.visited_outermost = true,
_ => {}
}
}
self.operation.visit_id(node_id);
match *function_kind {
visit::FkItemFn(_, generics, _, _) |
visit::FkMethod(_, generics, _) => {
self.visit_generics_helper(generics)
}
visit::FkFnBlock => {}
}
for argument in function_declaration.inputs.iter() {
self.operation.visit_id(argument.id)
}
visit::walk_fn(self,
function_kind,
function_declaration,
block,
span,
env);
if!self.pass_through_items {
match *function_kind {
visit::FkMethod(..) => self.visited_outermost = false,
_ => {}
}
}
}
fn visit_struct_field(&mut self, struct_field: &StructField, env: ()) {
self.operation.visit_id(struct_field.node.id);
visit::walk_struct_field(self, struct_field, env)
}
fn visit_struct_def(&mut self,
struct_def: &StructDef,
_: ast::Ident,
_: &ast::Generics,
id: NodeId,
_: ()) {
self.operation.visit_id(id);
struct_def.ctor_id.map(|ctor_id| self.operation.visit_id(ctor_id));
visit::walk_struct_def(self, struct_def, ());
}
fn visit_trait_method(&mut self, tm: &ast::TraitMethod, _: ()) {
match *tm {
ast::Required(ref m) => self.operation.visit_id(m.id),
ast::Provided(ref m) => self.operation.visit_id(m.id),
}
visit::walk_trait_method(self, tm, ());
}
}
pub fn visit_ids_for_inlined_item<O: IdVisitingOperation>(item: &InlinedItem,
operation: &O) {
let mut id_visitor = IdVisitor {
operation: operation,
pass_through_items: true,
visited_outermost: false,
};
visit::walk_inlined_item(&mut id_visitor, item, ());
}
struct IdRangeComputingVisitor {
result: Cell<IdRange>,
}
impl IdVisitingOperation for IdRangeComputingVisitor {
fn visit_id(&self, id: NodeId) {
let mut id_range = self.result.get();
id_range.add(id);
self.result.set(id_range)
}
}
pub fn compute_id_range_for_inlined_item(item: &InlinedItem) -> IdRange {
let visitor = IdRangeComputingVisitor {
result: Cell::new(IdRange::max())
};
visit_ids_for_inlined_item(item, &visitor);
visitor.result.get()
}
pub fn compute_id_range_for_fn_body(fk: &visit::FnKind,
decl: &FnDecl,
body: &Block,
sp: Span,
id: NodeId)
-> IdRange
{
/*!
* Computes the id range for a single fn body,
* ignoring nested items.
*/
let visitor = IdRangeComputingVisitor {
result: Cell::new(IdRange::max())
};
let mut id_visitor = IdVisitor {
operation: &visitor,
pass_through_items: false,
visited_outermost: false,
};
id_visitor.visit_fn(fk, decl, body, sp, id, ());
visitor.result.get()
}
pub fn is_item_impl(item: Gc<ast::Item>) -> bool {
match item.node {
ItemImpl(..) => true,
_ => false
}
}
pub fn walk_pat(pat: &Pat, it: |&Pat| -> bool) -> bool {
if!it(pat) {
return false;
}
match pat.node {
PatIdent(_, _, Some(ref p)) => walk_pat(&**p, it),
PatStruct(_, ref fields, _) => {
fields.iter().advance(|f| walk_pat(&*f.pat, |p| it(p)))
}
PatEnum(_, Some(ref s)) | PatTup(ref s) => {
s.iter().advance(|p| walk_pat(&**p, |p| it(p)))
}
PatBox(ref s) | PatRegion(ref s) => {
walk_pat(&**s, it)
}
PatVec(ref before, ref slice, ref after) => {
before.iter().advance(|p| walk_pat(&**p, |p| it(p))) &&
slice.iter().advance(|p| walk_pat(&**p, |p| it(p))) &&
after.iter().advance(|p| walk_pat(&**p, |p| it(p)))
}
PatMac(_) => fail!("attempted to analyze unexpanded pattern"),
PatWild | PatWildMulti | PatLit(_) | PatRange(_, _) | PatIdent(_, _, _) |
PatEnum(_, _) => {
true
}
}
}
pub trait EachViewItem {
fn each_view_item(&self, f: |&ast::ViewItem| -> bool) -> bool;
}
struct EachViewItemData<'a> {
callback: |&ast::ViewItem|: 'a -> bool,
}
impl<'a> Visitor<()> for EachViewItemData<'a> {
fn visit_view_item(&mut self, view_item: &ast::ViewItem, _: ()) {
let _ = (self.callback)(view_item);
}
}
impl EachViewItem for ast::Crate {
fn each_view_item(&self, | impl_pretty_name | identifier_name |
float.rs | // Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#![allow(missing_doc)]
use char;
use collections::Collection;
use fmt;
use iter::{range, DoubleEndedIterator};
use num::{Float, FPNaN, FPInfinite, ToPrimitive, Primitive};
use num::{Zero, One, cast};
use result::Ok;
use slice::{ImmutableSlice, MutableSlice};
use slice;
use str::StrSlice;
/// A flag that specifies whether to use exponential (scientific) notation.
pub enum ExponentFormat {
/// Do not use exponential notation.
ExpNone,
/// Use exponential notation with the exponent having a base of 10 and the
/// exponent sign being `e` or `E`. For example, 1000 would be printed
/// 1e3.
ExpDec
}
/// The number of digits used for emitting the fractional part of a number, if
/// any.
pub enum SignificantDigits {
/// At most the given number of digits will be printed, truncating any
/// trailing zeroes.
DigMax(uint),
/// Precisely the given number of digits will be printed.
DigExact(uint)
}
/// How to emit the sign of a number.
pub enum SignFormat {
/// `-` will be printed for negative values, but no sign will be emitted
/// for positive numbers.
SignNeg
}
static DIGIT_E_RADIX: uint = ('e' as uint) - ('a' as uint) + 11u;
/**
* Converts a number to its string representation as a byte vector.
* This is meant to be a common base implementation for all numeric string
* conversion functions like `to_string()` or `to_str_radix()`.
*
* # Arguments
* - `num` - The number to convert. Accepts any number that
* implements the numeric traits.
* - `radix` - Base to use. Accepts only the values 2-36. If the exponential notation
* is used, then this base is only used for the significand. The exponent
* itself always printed using a base of 10.
* - `negative_zero` - Whether to treat the special value `-0` as
* `-0` or as `+0`.
* - `sign` - How to emit the sign. See `SignFormat`.
* - `digits` - The amount of digits to use for emitting the fractional
* part, if any. See `SignificantDigits`.
* - `exp_format` - Whether or not to use the exponential (scientific) notation.
* See `ExponentFormat`.
* - `exp_capital` - Whether or not to use a capital letter for the exponent sign, if
* exponential notation is desired.
* - `f` - A closure to invoke with the bytes representing the
* float.
*
* # Failure
* - Fails if `radix` < 2 or `radix` > 36.
* - Fails if `radix` > 14 and `exp_format` is `ExpDec` due to conflict
* between digit and exponent sign `'e'`.
* - Fails if `radix` > 25 and `exp_format` is `ExpBin` due to conflict
* between digit and exponent sign `'p'`.
*/
pub fn float_to_str_bytes_common<T: Primitive + Float, U>(
num: T,
radix: uint,
negative_zero: bool,
sign: SignFormat,
digits: SignificantDigits,
exp_format: ExponentFormat,
exp_upper: bool,
f: |&[u8]| -> U
) -> U {
assert!(2 <= radix && radix <= 36);
match exp_format {
ExpDec if radix >= DIGIT_E_RADIX // decimal exponent 'e'
=> fail!("float_to_str_bytes_common: radix {} incompatible with \
use of 'e' as decimal exponent", radix),
_ => ()
}
let _0: T = Zero::zero();
let _1: T = One::one();
match num.classify() {
FPNaN => return f("NaN".as_bytes()),
FPInfinite if num > _0 => {
return f("inf".as_bytes());
}
FPInfinite if num < _0 => {
return f("-inf".as_bytes());
}
_ => {}
}
let neg = num < _0 || (negative_zero && _1 / num == Float::neg_infinity());
// For an f64 the exponent is in the range of [-1022, 1023] for base 2, so
// we may have up to that many digits. Give ourselves some extra wiggle room
// otherwise as well.
let mut buf = [0u8,..1536];
let mut end = 0;
let radix_gen: T = cast(radix as int).unwrap();
let (num, exp) = match exp_format {
ExpNone => (num, 0i32),
ExpDec if num == _0 => (num, 0i32),
ExpDec => {
let (exp, exp_base) = match exp_format {
ExpDec => (num.abs().log10().floor(), cast::<f64, T>(10.0f64).unwrap()),
ExpNone => fail!("unreachable"),
};
(num / exp_base.powf(exp), cast::<T, i32>(exp).unwrap())
}
};
// First emit the non-fractional part, looping at least once to make
// sure at least a `0` gets emitted.
let mut deccum = num.trunc();
loop {
// Calculate the absolute value of each digit instead of only
// doing it once for the whole number because a
// representable negative number doesn't necessary have an
// representable additive inverse of the same type
// (See twos complement). But we assume that for the
// numbers [-35.. 0] we always have [0.. 35].
let current_digit = (deccum % radix_gen).abs();
// Decrease the deccumulator one digit at a time
deccum = deccum / radix_gen;
deccum = deccum.trunc();
let c = char::from_digit(current_digit.to_int().unwrap() as uint, radix);
buf[end] = c.unwrap() as u8;
end += 1;
// No more digits to calculate for the non-fractional part -> break
if deccum == _0 { break; }
}
// If limited digits, calculate one digit more for rounding.
let (limit_digits, digit_count, exact) = match digits {
DigMax(count) => (true, count + 1, false),
DigExact(count) => (true, count + 1, true)
};
// Decide what sign to put in front
match sign {
SignNeg if neg => {
buf[end] = b'-';
end += 1;
}
_ => ()
}
buf.slice_to_mut(end).reverse();
// Remember start of the fractional digits.
// Points one beyond end of buf if none get generated,
// or at the '.' otherwise.
let start_fractional_digits = end;
// Now emit the fractional part, if any
deccum = num.fract();
if deccum!= _0 || (limit_digits && exact && digit_count > 0) {
buf[end] = b'.';
end += 1;
let mut dig = 0u;
// calculate new digits while
// - there is no limit and there are digits left
// - or there is a limit, it's not reached yet and
// - it's exact
// - or it's a maximum, and there are still digits left
while (!limit_digits && deccum!= _0)
|| (limit_digits && dig < digit_count && (
exact
|| (!exact && deccum!= _0)
)
) {
// Shift first fractional digit into the integer part
deccum = deccum * radix_gen;
// Calculate the absolute value of each digit.
// See note in first loop.
let current_digit = deccum.trunc().abs();
let c = char::from_digit(current_digit.to_int().unwrap() as uint,
radix);
buf[end] = c.unwrap() as u8;
end += 1;
// Decrease the deccumulator one fractional digit at a time
deccum = deccum.fract();
dig += 1u;
}
// If digits are limited, and that limit has been reached,
// cut off the one extra digit, and depending on its value
// round the remaining ones.
if limit_digits && dig == digit_count {
let ascii2value = |chr: u8| {
char::to_digit(chr as char, radix).unwrap()
};
let value2ascii = |val: uint| {
char::from_digit(val, radix).unwrap() as u8
};
let extra_digit = ascii2value(buf[end - 1]);
end -= 1;
if extra_digit >= radix / 2 { // -> need to round
let mut i: int = end as int - 1;
loop {
// If reached left end of number, have to
// insert additional digit:
if i < 0 | for j in range(i as uint + 1, end).rev() {
buf[j + 1] = buf[j];
}
buf[(i + 1) as uint] = value2ascii(1);
end += 1;
break;
}
// Skip the '.'
if buf[i as uint] == b'.' { i -= 1; continue; }
// Either increment the digit,
// or set to 0 if max and carry the 1.
let current_digit = ascii2value(buf[i as uint]);
if current_digit < (radix - 1) {
buf[i as uint] = value2ascii(current_digit+1);
break;
} else {
buf[i as uint] = value2ascii(0);
i -= 1;
}
}
}
}
}
// if number of digits is not exact, remove all trailing '0's up to
// and including the '.'
if!exact {
let buf_max_i = end - 1;
// index to truncate from
let mut i = buf_max_i;
// discover trailing zeros of fractional part
while i > start_fractional_digits && buf[i] == b'0' {
i -= 1;
}
// Only attempt to truncate digits if buf has fractional digits
if i >= start_fractional_digits {
// If buf ends with '.', cut that too.
if buf[i] == b'.' { i -= 1 }
// only resize buf if we actually remove digits
if i < buf_max_i {
end = i + 1;
}
}
} // If exact and trailing '.', just cut that
else {
let max_i = end - 1;
if buf[max_i] == b'.' {
end = max_i;
}
}
match exp_format {
ExpNone => {},
_ => {
buf[end] = match exp_format {
ExpDec if exp_upper => 'E',
ExpDec if!exp_upper => 'e',
_ => fail!("unreachable"),
} as u8;
end += 1;
struct Filler<'a> {
buf: &'a mut [u8],
end: &'a mut uint,
}
impl<'a> fmt::FormatWriter for Filler<'a> {
fn write(&mut self, bytes: &[u8]) -> fmt::Result {
slice::bytes::copy_memory(self.buf.slice_from_mut(*self.end),
bytes);
*self.end += bytes.len();
Ok(())
}
}
let mut filler = Filler { buf: buf, end: &mut end };
match sign {
SignNeg => {
let _ = format_args!(|args| {
fmt::write(&mut filler, args)
}, "{:-}", exp);
}
}
}
}
f(buf.slice_to(end))
} | || buf[i as uint] == b'-'
|| buf[i as uint] == b'+' { | random_line_split |
float.rs | // Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#![allow(missing_doc)]
use char;
use collections::Collection;
use fmt;
use iter::{range, DoubleEndedIterator};
use num::{Float, FPNaN, FPInfinite, ToPrimitive, Primitive};
use num::{Zero, One, cast};
use result::Ok;
use slice::{ImmutableSlice, MutableSlice};
use slice;
use str::StrSlice;
/// A flag that specifies whether to use exponential (scientific) notation.
pub enum ExponentFormat {
/// Do not use exponential notation.
ExpNone,
/// Use exponential notation with the exponent having a base of 10 and the
/// exponent sign being `e` or `E`. For example, 1000 would be printed
/// 1e3.
ExpDec
}
/// The number of digits used for emitting the fractional part of a number, if
/// any.
pub enum SignificantDigits {
/// At most the given number of digits will be printed, truncating any
/// trailing zeroes.
DigMax(uint),
/// Precisely the given number of digits will be printed.
DigExact(uint)
}
/// How to emit the sign of a number.
pub enum SignFormat {
/// `-` will be printed for negative values, but no sign will be emitted
/// for positive numbers.
SignNeg
}
static DIGIT_E_RADIX: uint = ('e' as uint) - ('a' as uint) + 11u;
/**
* Converts a number to its string representation as a byte vector.
* This is meant to be a common base implementation for all numeric string
* conversion functions like `to_string()` or `to_str_radix()`.
*
* # Arguments
* - `num` - The number to convert. Accepts any number that
* implements the numeric traits.
* - `radix` - Base to use. Accepts only the values 2-36. If the exponential notation
* is used, then this base is only used for the significand. The exponent
* itself always printed using a base of 10.
* - `negative_zero` - Whether to treat the special value `-0` as
* `-0` or as `+0`.
* - `sign` - How to emit the sign. See `SignFormat`.
* - `digits` - The amount of digits to use for emitting the fractional
* part, if any. See `SignificantDigits`.
* - `exp_format` - Whether or not to use the exponential (scientific) notation.
* See `ExponentFormat`.
* - `exp_capital` - Whether or not to use a capital letter for the exponent sign, if
* exponential notation is desired.
* - `f` - A closure to invoke with the bytes representing the
* float.
*
* # Failure
* - Fails if `radix` < 2 or `radix` > 36.
* - Fails if `radix` > 14 and `exp_format` is `ExpDec` due to conflict
* between digit and exponent sign `'e'`.
* - Fails if `radix` > 25 and `exp_format` is `ExpBin` due to conflict
* between digit and exponent sign `'p'`.
*/
pub fn float_to_str_bytes_common<T: Primitive + Float, U>(
num: T,
radix: uint,
negative_zero: bool,
sign: SignFormat,
digits: SignificantDigits,
exp_format: ExponentFormat,
exp_upper: bool,
f: |&[u8]| -> U
) -> U {
assert!(2 <= radix && radix <= 36);
match exp_format {
ExpDec if radix >= DIGIT_E_RADIX // decimal exponent 'e'
=> fail!("float_to_str_bytes_common: radix {} incompatible with \
use of 'e' as decimal exponent", radix),
_ => ()
}
let _0: T = Zero::zero();
let _1: T = One::one();
match num.classify() {
FPNaN => return f("NaN".as_bytes()),
FPInfinite if num > _0 => {
return f("inf".as_bytes());
}
FPInfinite if num < _0 => {
return f("-inf".as_bytes());
}
_ => {}
}
let neg = num < _0 || (negative_zero && _1 / num == Float::neg_infinity());
// For an f64 the exponent is in the range of [-1022, 1023] for base 2, so
// we may have up to that many digits. Give ourselves some extra wiggle room
// otherwise as well.
let mut buf = [0u8,..1536];
let mut end = 0;
let radix_gen: T = cast(radix as int).unwrap();
let (num, exp) = match exp_format {
ExpNone => (num, 0i32),
ExpDec if num == _0 => (num, 0i32),
ExpDec => {
let (exp, exp_base) = match exp_format {
ExpDec => (num.abs().log10().floor(), cast::<f64, T>(10.0f64).unwrap()),
ExpNone => fail!("unreachable"),
};
(num / exp_base.powf(exp), cast::<T, i32>(exp).unwrap())
}
};
// First emit the non-fractional part, looping at least once to make
// sure at least a `0` gets emitted.
let mut deccum = num.trunc();
loop {
// Calculate the absolute value of each digit instead of only
// doing it once for the whole number because a
// representable negative number doesn't necessary have an
// representable additive inverse of the same type
// (See twos complement). But we assume that for the
// numbers [-35.. 0] we always have [0.. 35].
let current_digit = (deccum % radix_gen).abs();
// Decrease the deccumulator one digit at a time
deccum = deccum / radix_gen;
deccum = deccum.trunc();
let c = char::from_digit(current_digit.to_int().unwrap() as uint, radix);
buf[end] = c.unwrap() as u8;
end += 1;
// No more digits to calculate for the non-fractional part -> break
if deccum == _0 { break; }
}
// If limited digits, calculate one digit more for rounding.
let (limit_digits, digit_count, exact) = match digits {
DigMax(count) => (true, count + 1, false),
DigExact(count) => (true, count + 1, true)
};
// Decide what sign to put in front
match sign {
SignNeg if neg => {
buf[end] = b'-';
end += 1;
}
_ => ()
}
buf.slice_to_mut(end).reverse();
// Remember start of the fractional digits.
// Points one beyond end of buf if none get generated,
// or at the '.' otherwise.
let start_fractional_digits = end;
// Now emit the fractional part, if any
deccum = num.fract();
if deccum!= _0 || (limit_digits && exact && digit_count > 0) {
buf[end] = b'.';
end += 1;
let mut dig = 0u;
// calculate new digits while
// - there is no limit and there are digits left
// - or there is a limit, it's not reached yet and
// - it's exact
// - or it's a maximum, and there are still digits left
while (!limit_digits && deccum!= _0)
|| (limit_digits && dig < digit_count && (
exact
|| (!exact && deccum!= _0)
)
) {
// Shift first fractional digit into the integer part
deccum = deccum * radix_gen;
// Calculate the absolute value of each digit.
// See note in first loop.
let current_digit = deccum.trunc().abs();
let c = char::from_digit(current_digit.to_int().unwrap() as uint,
radix);
buf[end] = c.unwrap() as u8;
end += 1;
// Decrease the deccumulator one fractional digit at a time
deccum = deccum.fract();
dig += 1u;
}
// If digits are limited, and that limit has been reached,
// cut off the one extra digit, and depending on its value
// round the remaining ones.
if limit_digits && dig == digit_count {
let ascii2value = |chr: u8| {
char::to_digit(chr as char, radix).unwrap()
};
let value2ascii = |val: uint| {
char::from_digit(val, radix).unwrap() as u8
};
let extra_digit = ascii2value(buf[end - 1]);
end -= 1;
if extra_digit >= radix / 2 { // -> need to round
let mut i: int = end as int - 1;
loop {
// If reached left end of number, have to
// insert additional digit:
if i < 0
|| buf[i as uint] == b'-'
|| buf[i as uint] == b'+' {
for j in range(i as uint + 1, end).rev() {
buf[j + 1] = buf[j];
}
buf[(i + 1) as uint] = value2ascii(1);
end += 1;
break;
}
// Skip the '.'
if buf[i as uint] == b'.' { i -= 1; continue; }
// Either increment the digit,
// or set to 0 if max and carry the 1.
let current_digit = ascii2value(buf[i as uint]);
if current_digit < (radix - 1) {
buf[i as uint] = value2ascii(current_digit+1);
break;
} else {
buf[i as uint] = value2ascii(0);
i -= 1;
}
}
}
}
}
// if number of digits is not exact, remove all trailing '0's up to
// and including the '.'
if!exact {
let buf_max_i = end - 1;
// index to truncate from
let mut i = buf_max_i;
// discover trailing zeros of fractional part
while i > start_fractional_digits && buf[i] == b'0' {
i -= 1;
}
// Only attempt to truncate digits if buf has fractional digits
if i >= start_fractional_digits {
// If buf ends with '.', cut that too.
if buf[i] == b'.' { i -= 1 }
// only resize buf if we actually remove digits
if i < buf_max_i {
end = i + 1;
}
}
} // If exact and trailing '.', just cut that
else {
let max_i = end - 1;
if buf[max_i] == b'.' {
end = max_i;
}
}
match exp_format {
ExpNone => {},
_ => {
buf[end] = match exp_format {
ExpDec if exp_upper => 'E',
ExpDec if!exp_upper => 'e',
_ => fail!("unreachable"),
} as u8;
end += 1;
struct Filler<'a> {
buf: &'a mut [u8],
end: &'a mut uint,
}
impl<'a> fmt::FormatWriter for Filler<'a> {
fn | (&mut self, bytes: &[u8]) -> fmt::Result {
slice::bytes::copy_memory(self.buf.slice_from_mut(*self.end),
bytes);
*self.end += bytes.len();
Ok(())
}
}
let mut filler = Filler { buf: buf, end: &mut end };
match sign {
SignNeg => {
let _ = format_args!(|args| {
fmt::write(&mut filler, args)
}, "{:-}", exp);
}
}
}
}
f(buf.slice_to(end))
}
| write | identifier_name |
dst-coerce-rc.rs | // Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Test a very simple custom DST coercion.
#![feature(core)]
use std::rc::Rc;
trait Baz {
fn get(&self) -> i32;
}
impl Baz for i32 {
fn | (&self) -> i32 {
*self
}
}
fn main() {
let a: Rc<[i32; 3]> = Rc::new([1, 2, 3]);
let b: Rc<[i32]> = a;
assert_eq!(b[0], 1);
assert_eq!(b[1], 2);
assert_eq!(b[2], 3);
let a: Rc<i32> = Rc::new(42);
let b: Rc<Baz> = a.clone();
assert_eq!(b.get(), 42);
let _c = b.clone();
}
| get | identifier_name |
dst-coerce-rc.rs | // Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at | // http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Test a very simple custom DST coercion.
#![feature(core)]
use std::rc::Rc;
trait Baz {
fn get(&self) -> i32;
}
impl Baz for i32 {
fn get(&self) -> i32 {
*self
}
}
fn main() {
let a: Rc<[i32; 3]> = Rc::new([1, 2, 3]);
let b: Rc<[i32]> = a;
assert_eq!(b[0], 1);
assert_eq!(b[1], 2);
assert_eq!(b[2], 3);
let a: Rc<i32> = Rc::new(42);
let b: Rc<Baz> = a.clone();
assert_eq!(b.get(), 42);
let _c = b.clone();
} | // http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or | random_line_split |
dst-coerce-rc.rs | // Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Test a very simple custom DST coercion.
#![feature(core)]
use std::rc::Rc;
trait Baz {
fn get(&self) -> i32;
}
impl Baz for i32 {
fn get(&self) -> i32 |
}
fn main() {
let a: Rc<[i32; 3]> = Rc::new([1, 2, 3]);
let b: Rc<[i32]> = a;
assert_eq!(b[0], 1);
assert_eq!(b[1], 2);
assert_eq!(b[2], 3);
let a: Rc<i32> = Rc::new(42);
let b: Rc<Baz> = a.clone();
assert_eq!(b.get(), 42);
let _c = b.clone();
}
| {
*self
} | identifier_body |
appthread.rs | //! Types for the mutator to use to build data structures
use std::cell::Cell;
use std::mem::transmute;
use std::ops::{Deref, DerefMut};
use std::ptr::{null, null_mut};
use std::raw::TraitObject;
use std::sync::atomic::{AtomicPtr, Ordering};
use std::thread;
use constants::{INC_BIT, JOURNAL_BUFFER_SIZE, NEW_BIT, TRAVERSE_BIT};
use gcthread::{JournalSender, EntrySender};
use heap::{Object, TraceStack};
use journal;
use trace::Trace;
/// Each thread gets it's own EntrySender
thread_local!(
static GC_JOURNAL: Cell<*const EntrySender> = Cell::new(null())
);
/// GcBox struct and traits: a boxed object that is GC managed
pub struct GcBox<T: Trace> {
value: T,
}
/// Root smart pointer, sends reference count changes to the journal.
///
/// Whenever a reference to an object on the heap must be retained on the stack, this type must be
/// used. It's use will ensure that the object will be seen as a root.
pub struct GcRoot<T: Trace> {
ptr: *mut GcBox<T>,
}
/// Non-atomic pointer type. This type is `!Sync` and thus is useful for presenting a Rust-ish
/// API to a data structure where aliasing and mutability must follow the standard rules: there
/// can be only one mutator.
///
/// *Important note:* even though this type is `!Sync`, any data structures that are composed of
/// `Gc` pointers must still be designed with the awareness that the GC thread will call `trace()`
/// at any point and so, must still be thread safe!
///
/// This is not a root pointer type. It should be used inside data structures to reference other
/// GC-managed objects.
pub struct Gc<T: Trace> {
ptr: *mut GcBox<T>,
}
/// Atomic pointer type that points at a traceable object. This type is `Sync` and can be used to
/// build concurrent data structures.
///
/// This type should be used inside data structures to reference other GC-managed objects, but
/// provides interior mutability and atomic methods.
///
/// TODO: cas, swap etc for GcRoot and Gc
pub struct GcAtomic<T: Trace> {
ptr: AtomicPtr<GcBox<T>>,
}
/// An Application Thread, manages a thread-local reference to a tx channel
///
/// TODO: a version of `spawn()` is required that can be called from an existing mutator thread.
pub struct AppThread;
impl AppThread {
/// As thread::spawn but takes a journal Sender to initialize the thread_local instance with.
pub fn spawn_from_gc<F, T>(tx: JournalSender, f: F) -> thread::JoinHandle<T>
where F: FnOnce() -> T,
F: Send +'static,
T: Send +'static
{
thread::spawn(move || {
let (jtx, jrx) = journal::make_journal(JOURNAL_BUFFER_SIZE);
tx.send(jrx).expect("Failed to send a new Journal to the GC thread!");
GC_JOURNAL.with(|j| {
j.set(&jtx);
});
f()
})
}
}
// Reference count functions. Only new-objects need to specify the traverse bit.
#[inline]
fn | <T: Trace>(object: &T) -> TraitObject {
let trace: &Trace = object;
unsafe { transmute(trace) }
}
/// Write a reference count increment to the journal for a newly allocated object
#[inline]
fn write<T: Trace>(object: &T, is_new: bool, flags: usize) {
GC_JOURNAL.with(|j| {
let tx = unsafe { &*j.get() };
let tobj = as_traitobject(object);
// set the refcount-increment bit
let ptr = (tobj.data as usize) | flags;
// set the traversible bit
let mut vtable = tobj.vtable as usize;
if is_new && object.traversible() {
vtable |= TRAVERSE_BIT;
}
tx.send(Object {
ptr: ptr,
vtable: vtable,
});
});
}
// GcBox implementation
impl<T: Trace> GcBox<T> {
fn new(value: T) -> GcBox<T> {
GcBox {
value: value,
}
}
}
unsafe impl<T: Trace> Trace for GcBox<T> {
#[inline]
fn traversible(&self) -> bool {
self.value.traversible()
}
#[inline]
unsafe fn trace(&self, heap: &mut TraceStack) {
self.value.trace(heap);
}
}
// GcRoot implementation
impl<T: Trace> GcRoot<T> {
/// Put a new object on the heap and hand ownership to the GC, writing a reference count
/// increment to the journal.
pub fn new(value: T) -> GcRoot<T> {
let boxed = Box::new(GcBox::new(value));
write(&*boxed, true, NEW_BIT | INC_BIT);
GcRoot {
ptr: Box::into_raw(boxed)
}
}
fn from_raw(ptr: *mut GcBox<T>) -> GcRoot<T> {
let root = GcRoot { ptr: ptr };
write(&*root, false, INC_BIT);
root
}
fn ptr(&self) -> *mut GcBox<T> {
self.ptr
}
fn value(&self) -> &T {
unsafe { &(*self.ptr).value }
}
fn value_mut(&mut self) -> &mut T {
unsafe { &mut (*self.ptr).value }
}
}
impl<T: Trace> Drop for GcRoot<T> {
fn drop(&mut self) {
write(&**self, false, 0);
}
}
impl<T: Trace> Deref for GcRoot<T> {
type Target = T;
fn deref(&self) -> &T {
self.value()
}
}
impl<T: Trace> DerefMut for GcRoot<T> {
fn deref_mut(&mut self) -> &mut T {
self.value_mut()
}
}
impl<T: Trace> Clone for GcRoot<T> {
fn clone(&self) -> Self {
GcRoot::from_raw(self.ptr())
}
}
// Gc implementation
impl<T: Trace> Gc<T> {
/// Creates a new null pointer.
pub fn null() -> Gc<T> {
Gc {
ptr: null_mut(),
}
}
/// Move a value to the heap and create a pointer to it.
pub fn new(value: T) -> Gc<T> {
let boxed = Box::new(GcBox::new(value));
write(&*boxed, true, NEW_BIT);
Gc {
ptr: Box::into_raw(boxed)
}
}
/// Return the raw pointer value, or None if it is a null pointer.
pub fn as_raw(&self) -> Option<*mut GcBox<T>> {
if self.ptr.is_null() {
None
} else {
Some(self.ptr)
}
}
/// Pointer equality comparison.
pub fn is(&self, other: Gc<T>) -> bool {
self.ptr == other.ptr
}
fn from_raw(ptr: *mut GcBox<T>) -> Gc<T> {
Gc {
ptr: ptr,
}
}
fn ptr(&self) -> *mut GcBox<T> {
self.ptr
}
fn value(&self) -> &T {
unsafe { &(*self.ptr).value }
}
fn value_mut(&mut self) -> &mut T {
unsafe { &mut (*self.ptr).value }
}
}
impl<T: Trace> Deref for Gc<T> {
type Target = T;
fn deref(&self) -> &T {
self.value()
}
}
impl<T: Trace> DerefMut for Gc<T> {
fn deref_mut(&mut self) -> &mut T {
self.value_mut()
}
}
impl<T: Trace> Clone for Gc<T> {
fn clone(&self) -> Self {
Gc {
ptr: self.ptr,
}
}
}
impl<T: Trace> Copy for Gc<T> {}
// GcAtomic implementation
impl<T: Trace> GcAtomic<T> {
/// Instantiate a new null pointer
pub fn null() -> GcAtomic<T> {
GcAtomic {
ptr: AtomicPtr::new(null_mut())
}
}
/// Instantiate a new pointer, moving `value` to the heap. Writes to the journal.
pub fn new(value: T) -> GcAtomic<T> {
let boxed = Box::new(GcBox::new(value));
write(&*boxed, true, NEW_BIT);
GcAtomic {
ptr: AtomicPtr::new(Box::into_raw(boxed)),
}
}
/// Root the pointer by loading it into a `GcRoot<T>`
///
/// Panics if `order` is `Release` or `AcqRel`.
pub fn load_into_root(&self, order: Ordering) -> GcRoot<T> {
let root = GcRoot {
ptr: self.ptr.load(order),
};
write(&*root, false, INC_BIT);
root
}
/// Copy the pointer into a new `Gc` instance.
///
/// Panics if `order` is `Release` or `AcqRel`.
pub fn load_into_gc(&self, order: Ordering) -> Gc<T> {
Gc::from_raw(self.ptr.load(order))
}
/// Fetch the current raw pointer value
///
/// Panics if `order` is `Release` or `AcqRel`.
pub fn load_raw(&self, order: Ordering) -> *mut GcBox<T> {
self.ptr.load(order)
}
/// Replace the current pointer value with the pointer from the given `GcRoot`.
///
/// Panics if `order` is `Acquire` or `AcqRel`.
pub fn store_from_root(&self, root: GcRoot<T>, order: Ordering) {
self.ptr.store(root.ptr(), order);
}
/// Replace the current pointer value with the pointer from the given `Gc`.
///
/// Panics of `order` is `Acquire` or `AcqRel`.
pub fn store_from_gc(&self, gc: Gc<T>, order: Ordering) {
self.ptr.store(gc.ptr(), order);
}
/// Replace the current pointer value with the given raw pointer
///
/// Panics if `order` is `Acquire` or `AcqRel`.
pub fn store_raw(&self, ptr: *mut GcBox<T>, order: Ordering) {
self.ptr.store(ptr, order);
}
}
| as_traitobject | identifier_name |
appthread.rs | //! Types for the mutator to use to build data structures
use std::cell::Cell;
use std::mem::transmute;
use std::ops::{Deref, DerefMut};
use std::ptr::{null, null_mut};
use std::raw::TraitObject;
use std::sync::atomic::{AtomicPtr, Ordering};
use std::thread;
use constants::{INC_BIT, JOURNAL_BUFFER_SIZE, NEW_BIT, TRAVERSE_BIT};
use gcthread::{JournalSender, EntrySender};
use heap::{Object, TraceStack};
use journal;
use trace::Trace;
/// Each thread gets it's own EntrySender
thread_local!(
static GC_JOURNAL: Cell<*const EntrySender> = Cell::new(null())
);
/// GcBox struct and traits: a boxed object that is GC managed
pub struct GcBox<T: Trace> {
value: T,
}
/// Root smart pointer, sends reference count changes to the journal.
///
/// Whenever a reference to an object on the heap must be retained on the stack, this type must be
/// used. It's use will ensure that the object will be seen as a root.
pub struct GcRoot<T: Trace> {
ptr: *mut GcBox<T>,
}
/// Non-atomic pointer type. This type is `!Sync` and thus is useful for presenting a Rust-ish
/// API to a data structure where aliasing and mutability must follow the standard rules: there
/// can be only one mutator.
///
/// *Important note:* even though this type is `!Sync`, any data structures that are composed of
/// `Gc` pointers must still be designed with the awareness that the GC thread will call `trace()`
/// at any point and so, must still be thread safe!
///
/// This is not a root pointer type. It should be used inside data structures to reference other
/// GC-managed objects.
pub struct Gc<T: Trace> {
ptr: *mut GcBox<T>,
}
/// Atomic pointer type that points at a traceable object. This type is `Sync` and can be used to
/// build concurrent data structures.
///
/// This type should be used inside data structures to reference other GC-managed objects, but
/// provides interior mutability and atomic methods.
///
/// TODO: cas, swap etc for GcRoot and Gc
pub struct GcAtomic<T: Trace> {
ptr: AtomicPtr<GcBox<T>>,
}
/// An Application Thread, manages a thread-local reference to a tx channel
///
/// TODO: a version of `spawn()` is required that can be called from an existing mutator thread.
pub struct AppThread;
impl AppThread {
/// As thread::spawn but takes a journal Sender to initialize the thread_local instance with.
pub fn spawn_from_gc<F, T>(tx: JournalSender, f: F) -> thread::JoinHandle<T>
where F: FnOnce() -> T,
F: Send +'static,
T: Send +'static
{
thread::spawn(move || {
let (jtx, jrx) = journal::make_journal(JOURNAL_BUFFER_SIZE);
tx.send(jrx).expect("Failed to send a new Journal to the GC thread!");
GC_JOURNAL.with(|j| {
j.set(&jtx);
});
f()
})
}
}
// Reference count functions. Only new-objects need to specify the traverse bit.
#[inline]
fn as_traitobject<T: Trace>(object: &T) -> TraitObject {
let trace: &Trace = object;
unsafe { transmute(trace) }
}
/// Write a reference count increment to the journal for a newly allocated object
#[inline]
fn write<T: Trace>(object: &T, is_new: bool, flags: usize) | }
// GcBox implementation
impl<T: Trace> GcBox<T> {
fn new(value: T) -> GcBox<T> {
GcBox {
value: value,
}
}
}
unsafe impl<T: Trace> Trace for GcBox<T> {
#[inline]
fn traversible(&self) -> bool {
self.value.traversible()
}
#[inline]
unsafe fn trace(&self, heap: &mut TraceStack) {
self.value.trace(heap);
}
}
// GcRoot implementation
impl<T: Trace> GcRoot<T> {
/// Put a new object on the heap and hand ownership to the GC, writing a reference count
/// increment to the journal.
pub fn new(value: T) -> GcRoot<T> {
let boxed = Box::new(GcBox::new(value));
write(&*boxed, true, NEW_BIT | INC_BIT);
GcRoot {
ptr: Box::into_raw(boxed)
}
}
fn from_raw(ptr: *mut GcBox<T>) -> GcRoot<T> {
let root = GcRoot { ptr: ptr };
write(&*root, false, INC_BIT);
root
}
fn ptr(&self) -> *mut GcBox<T> {
self.ptr
}
fn value(&self) -> &T {
unsafe { &(*self.ptr).value }
}
fn value_mut(&mut self) -> &mut T {
unsafe { &mut (*self.ptr).value }
}
}
impl<T: Trace> Drop for GcRoot<T> {
fn drop(&mut self) {
write(&**self, false, 0);
}
}
impl<T: Trace> Deref for GcRoot<T> {
type Target = T;
fn deref(&self) -> &T {
self.value()
}
}
impl<T: Trace> DerefMut for GcRoot<T> {
fn deref_mut(&mut self) -> &mut T {
self.value_mut()
}
}
impl<T: Trace> Clone for GcRoot<T> {
fn clone(&self) -> Self {
GcRoot::from_raw(self.ptr())
}
}
// Gc implementation
impl<T: Trace> Gc<T> {
/// Creates a new null pointer.
pub fn null() -> Gc<T> {
Gc {
ptr: null_mut(),
}
}
/// Move a value to the heap and create a pointer to it.
pub fn new(value: T) -> Gc<T> {
let boxed = Box::new(GcBox::new(value));
write(&*boxed, true, NEW_BIT);
Gc {
ptr: Box::into_raw(boxed)
}
}
/// Return the raw pointer value, or None if it is a null pointer.
pub fn as_raw(&self) -> Option<*mut GcBox<T>> {
if self.ptr.is_null() {
None
} else {
Some(self.ptr)
}
}
/// Pointer equality comparison.
pub fn is(&self, other: Gc<T>) -> bool {
self.ptr == other.ptr
}
fn from_raw(ptr: *mut GcBox<T>) -> Gc<T> {
Gc {
ptr: ptr,
}
}
fn ptr(&self) -> *mut GcBox<T> {
self.ptr
}
fn value(&self) -> &T {
unsafe { &(*self.ptr).value }
}
fn value_mut(&mut self) -> &mut T {
unsafe { &mut (*self.ptr).value }
}
}
impl<T: Trace> Deref for Gc<T> {
type Target = T;
fn deref(&self) -> &T {
self.value()
}
}
impl<T: Trace> DerefMut for Gc<T> {
fn deref_mut(&mut self) -> &mut T {
self.value_mut()
}
}
impl<T: Trace> Clone for Gc<T> {
fn clone(&self) -> Self {
Gc {
ptr: self.ptr,
}
}
}
impl<T: Trace> Copy for Gc<T> {}
// GcAtomic implementation
impl<T: Trace> GcAtomic<T> {
/// Instantiate a new null pointer
pub fn null() -> GcAtomic<T> {
GcAtomic {
ptr: AtomicPtr::new(null_mut())
}
}
/// Instantiate a new pointer, moving `value` to the heap. Writes to the journal.
pub fn new(value: T) -> GcAtomic<T> {
let boxed = Box::new(GcBox::new(value));
write(&*boxed, true, NEW_BIT);
GcAtomic {
ptr: AtomicPtr::new(Box::into_raw(boxed)),
}
}
/// Root the pointer by loading it into a `GcRoot<T>`
///
/// Panics if `order` is `Release` or `AcqRel`.
pub fn load_into_root(&self, order: Ordering) -> GcRoot<T> {
let root = GcRoot {
ptr: self.ptr.load(order),
};
write(&*root, false, INC_BIT);
root
}
/// Copy the pointer into a new `Gc` instance.
///
/// Panics if `order` is `Release` or `AcqRel`.
pub fn load_into_gc(&self, order: Ordering) -> Gc<T> {
Gc::from_raw(self.ptr.load(order))
}
/// Fetch the current raw pointer value
///
/// Panics if `order` is `Release` or `AcqRel`.
pub fn load_raw(&self, order: Ordering) -> *mut GcBox<T> {
self.ptr.load(order)
}
/// Replace the current pointer value with the pointer from the given `GcRoot`.
///
/// Panics if `order` is `Acquire` or `AcqRel`.
pub fn store_from_root(&self, root: GcRoot<T>, order: Ordering) {
self.ptr.store(root.ptr(), order);
}
/// Replace the current pointer value with the pointer from the given `Gc`.
///
/// Panics of `order` is `Acquire` or `AcqRel`.
pub fn store_from_gc(&self, gc: Gc<T>, order: Ordering) {
self.ptr.store(gc.ptr(), order);
}
/// Replace the current pointer value with the given raw pointer
///
/// Panics if `order` is `Acquire` or `AcqRel`.
pub fn store_raw(&self, ptr: *mut GcBox<T>, order: Ordering) {
self.ptr.store(ptr, order);
}
}
| {
GC_JOURNAL.with(|j| {
let tx = unsafe { &*j.get() };
let tobj = as_traitobject(object);
// set the refcount-increment bit
let ptr = (tobj.data as usize) | flags;
// set the traversible bit
let mut vtable = tobj.vtable as usize;
if is_new && object.traversible() {
vtable |= TRAVERSE_BIT;
}
tx.send(Object {
ptr: ptr,
vtable: vtable,
});
}); | identifier_body |
appthread.rs | //! Types for the mutator to use to build data structures
use std::cell::Cell;
use std::mem::transmute;
use std::ops::{Deref, DerefMut};
use std::ptr::{null, null_mut};
use std::raw::TraitObject;
use std::sync::atomic::{AtomicPtr, Ordering};
use std::thread;
use constants::{INC_BIT, JOURNAL_BUFFER_SIZE, NEW_BIT, TRAVERSE_BIT};
use gcthread::{JournalSender, EntrySender};
use heap::{Object, TraceStack};
use journal;
use trace::Trace;
/// Each thread gets it's own EntrySender
thread_local!(
static GC_JOURNAL: Cell<*const EntrySender> = Cell::new(null())
);
/// GcBox struct and traits: a boxed object that is GC managed
pub struct GcBox<T: Trace> {
value: T,
}
/// Root smart pointer, sends reference count changes to the journal.
///
/// Whenever a reference to an object on the heap must be retained on the stack, this type must be
/// used. It's use will ensure that the object will be seen as a root.
pub struct GcRoot<T: Trace> {
ptr: *mut GcBox<T>,
}
/// Non-atomic pointer type. This type is `!Sync` and thus is useful for presenting a Rust-ish
/// API to a data structure where aliasing and mutability must follow the standard rules: there
/// can be only one mutator.
///
/// *Important note:* even though this type is `!Sync`, any data structures that are composed of
/// `Gc` pointers must still be designed with the awareness that the GC thread will call `trace()`
/// at any point and so, must still be thread safe!
///
/// This is not a root pointer type. It should be used inside data structures to reference other
/// GC-managed objects.
pub struct Gc<T: Trace> {
ptr: *mut GcBox<T>,
}
/// Atomic pointer type that points at a traceable object. This type is `Sync` and can be used to
/// build concurrent data structures.
///
/// This type should be used inside data structures to reference other GC-managed objects, but
/// provides interior mutability and atomic methods.
///
/// TODO: cas, swap etc for GcRoot and Gc
pub struct GcAtomic<T: Trace> {
ptr: AtomicPtr<GcBox<T>>,
}
/// An Application Thread, manages a thread-local reference to a tx channel
///
/// TODO: a version of `spawn()` is required that can be called from an existing mutator thread.
pub struct AppThread;
impl AppThread {
/// As thread::spawn but takes a journal Sender to initialize the thread_local instance with.
pub fn spawn_from_gc<F, T>(tx: JournalSender, f: F) -> thread::JoinHandle<T>
where F: FnOnce() -> T,
F: Send +'static,
T: Send +'static
{
thread::spawn(move || {
let (jtx, jrx) = journal::make_journal(JOURNAL_BUFFER_SIZE);
tx.send(jrx).expect("Failed to send a new Journal to the GC thread!");
GC_JOURNAL.with(|j| {
j.set(&jtx);
});
f()
})
}
}
// Reference count functions. Only new-objects need to specify the traverse bit.
#[inline]
fn as_traitobject<T: Trace>(object: &T) -> TraitObject {
let trace: &Trace = object;
unsafe { transmute(trace) }
}
/// Write a reference count increment to the journal for a newly allocated object
#[inline]
fn write<T: Trace>(object: &T, is_new: bool, flags: usize) {
GC_JOURNAL.with(|j| {
let tx = unsafe { &*j.get() };
let tobj = as_traitobject(object);
// set the refcount-increment bit
let ptr = (tobj.data as usize) | flags;
// set the traversible bit
let mut vtable = tobj.vtable as usize;
if is_new && object.traversible() {
vtable |= TRAVERSE_BIT;
}
tx.send(Object {
ptr: ptr,
vtable: vtable,
});
});
}
// GcBox implementation
impl<T: Trace> GcBox<T> {
fn new(value: T) -> GcBox<T> {
GcBox {
value: value,
}
}
}
unsafe impl<T: Trace> Trace for GcBox<T> {
#[inline]
fn traversible(&self) -> bool {
self.value.traversible()
}
#[inline]
unsafe fn trace(&self, heap: &mut TraceStack) {
self.value.trace(heap);
}
}
// GcRoot implementation
impl<T: Trace> GcRoot<T> {
/// Put a new object on the heap and hand ownership to the GC, writing a reference count
/// increment to the journal.
pub fn new(value: T) -> GcRoot<T> {
let boxed = Box::new(GcBox::new(value));
write(&*boxed, true, NEW_BIT | INC_BIT);
GcRoot {
ptr: Box::into_raw(boxed)
}
}
fn from_raw(ptr: *mut GcBox<T>) -> GcRoot<T> {
let root = GcRoot { ptr: ptr };
write(&*root, false, INC_BIT);
root
}
fn ptr(&self) -> *mut GcBox<T> {
self.ptr
}
fn value(&self) -> &T {
unsafe { &(*self.ptr).value }
}
fn value_mut(&mut self) -> &mut T {
unsafe { &mut (*self.ptr).value }
}
}
impl<T: Trace> Drop for GcRoot<T> {
fn drop(&mut self) {
write(&**self, false, 0);
}
}
impl<T: Trace> Deref for GcRoot<T> {
type Target = T;
fn deref(&self) -> &T {
self.value()
}
}
impl<T: Trace> DerefMut for GcRoot<T> {
fn deref_mut(&mut self) -> &mut T {
self.value_mut()
}
}
impl<T: Trace> Clone for GcRoot<T> {
fn clone(&self) -> Self {
GcRoot::from_raw(self.ptr())
}
}
// Gc implementation
impl<T: Trace> Gc<T> {
/// Creates a new null pointer.
pub fn null() -> Gc<T> {
Gc {
ptr: null_mut(),
}
}
/// Move a value to the heap and create a pointer to it.
pub fn new(value: T) -> Gc<T> {
let boxed = Box::new(GcBox::new(value));
write(&*boxed, true, NEW_BIT);
Gc {
ptr: Box::into_raw(boxed)
}
}
/// Return the raw pointer value, or None if it is a null pointer.
pub fn as_raw(&self) -> Option<*mut GcBox<T>> {
if self.ptr.is_null() {
None
} else {
Some(self.ptr)
}
}
/// Pointer equality comparison.
pub fn is(&self, other: Gc<T>) -> bool {
self.ptr == other.ptr
}
fn from_raw(ptr: *mut GcBox<T>) -> Gc<T> {
Gc { | }
}
fn ptr(&self) -> *mut GcBox<T> {
self.ptr
}
fn value(&self) -> &T {
unsafe { &(*self.ptr).value }
}
fn value_mut(&mut self) -> &mut T {
unsafe { &mut (*self.ptr).value }
}
}
impl<T: Trace> Deref for Gc<T> {
type Target = T;
fn deref(&self) -> &T {
self.value()
}
}
impl<T: Trace> DerefMut for Gc<T> {
fn deref_mut(&mut self) -> &mut T {
self.value_mut()
}
}
impl<T: Trace> Clone for Gc<T> {
fn clone(&self) -> Self {
Gc {
ptr: self.ptr,
}
}
}
impl<T: Trace> Copy for Gc<T> {}
// GcAtomic implementation
impl<T: Trace> GcAtomic<T> {
/// Instantiate a new null pointer
pub fn null() -> GcAtomic<T> {
GcAtomic {
ptr: AtomicPtr::new(null_mut())
}
}
/// Instantiate a new pointer, moving `value` to the heap. Writes to the journal.
pub fn new(value: T) -> GcAtomic<T> {
let boxed = Box::new(GcBox::new(value));
write(&*boxed, true, NEW_BIT);
GcAtomic {
ptr: AtomicPtr::new(Box::into_raw(boxed)),
}
}
/// Root the pointer by loading it into a `GcRoot<T>`
///
/// Panics if `order` is `Release` or `AcqRel`.
pub fn load_into_root(&self, order: Ordering) -> GcRoot<T> {
let root = GcRoot {
ptr: self.ptr.load(order),
};
write(&*root, false, INC_BIT);
root
}
/// Copy the pointer into a new `Gc` instance.
///
/// Panics if `order` is `Release` or `AcqRel`.
pub fn load_into_gc(&self, order: Ordering) -> Gc<T> {
Gc::from_raw(self.ptr.load(order))
}
/// Fetch the current raw pointer value
///
/// Panics if `order` is `Release` or `AcqRel`.
pub fn load_raw(&self, order: Ordering) -> *mut GcBox<T> {
self.ptr.load(order)
}
/// Replace the current pointer value with the pointer from the given `GcRoot`.
///
/// Panics if `order` is `Acquire` or `AcqRel`.
pub fn store_from_root(&self, root: GcRoot<T>, order: Ordering) {
self.ptr.store(root.ptr(), order);
}
/// Replace the current pointer value with the pointer from the given `Gc`.
///
/// Panics of `order` is `Acquire` or `AcqRel`.
pub fn store_from_gc(&self, gc: Gc<T>, order: Ordering) {
self.ptr.store(gc.ptr(), order);
}
/// Replace the current pointer value with the given raw pointer
///
/// Panics if `order` is `Acquire` or `AcqRel`.
pub fn store_raw(&self, ptr: *mut GcBox<T>, order: Ordering) {
self.ptr.store(ptr, order);
}
} | ptr: ptr, | random_line_split |
appthread.rs | //! Types for the mutator to use to build data structures
use std::cell::Cell;
use std::mem::transmute;
use std::ops::{Deref, DerefMut};
use std::ptr::{null, null_mut};
use std::raw::TraitObject;
use std::sync::atomic::{AtomicPtr, Ordering};
use std::thread;
use constants::{INC_BIT, JOURNAL_BUFFER_SIZE, NEW_BIT, TRAVERSE_BIT};
use gcthread::{JournalSender, EntrySender};
use heap::{Object, TraceStack};
use journal;
use trace::Trace;
/// Each thread gets it's own EntrySender
thread_local!(
static GC_JOURNAL: Cell<*const EntrySender> = Cell::new(null())
);
/// GcBox struct and traits: a boxed object that is GC managed
pub struct GcBox<T: Trace> {
value: T,
}
/// Root smart pointer, sends reference count changes to the journal.
///
/// Whenever a reference to an object on the heap must be retained on the stack, this type must be
/// used. It's use will ensure that the object will be seen as a root.
pub struct GcRoot<T: Trace> {
ptr: *mut GcBox<T>,
}
/// Non-atomic pointer type. This type is `!Sync` and thus is useful for presenting a Rust-ish
/// API to a data structure where aliasing and mutability must follow the standard rules: there
/// can be only one mutator.
///
/// *Important note:* even though this type is `!Sync`, any data structures that are composed of
/// `Gc` pointers must still be designed with the awareness that the GC thread will call `trace()`
/// at any point and so, must still be thread safe!
///
/// This is not a root pointer type. It should be used inside data structures to reference other
/// GC-managed objects.
pub struct Gc<T: Trace> {
ptr: *mut GcBox<T>,
}
/// Atomic pointer type that points at a traceable object. This type is `Sync` and can be used to
/// build concurrent data structures.
///
/// This type should be used inside data structures to reference other GC-managed objects, but
/// provides interior mutability and atomic methods.
///
/// TODO: cas, swap etc for GcRoot and Gc
pub struct GcAtomic<T: Trace> {
ptr: AtomicPtr<GcBox<T>>,
}
/// An Application Thread, manages a thread-local reference to a tx channel
///
/// TODO: a version of `spawn()` is required that can be called from an existing mutator thread.
pub struct AppThread;
impl AppThread {
/// As thread::spawn but takes a journal Sender to initialize the thread_local instance with.
pub fn spawn_from_gc<F, T>(tx: JournalSender, f: F) -> thread::JoinHandle<T>
where F: FnOnce() -> T,
F: Send +'static,
T: Send +'static
{
thread::spawn(move || {
let (jtx, jrx) = journal::make_journal(JOURNAL_BUFFER_SIZE);
tx.send(jrx).expect("Failed to send a new Journal to the GC thread!");
GC_JOURNAL.with(|j| {
j.set(&jtx);
});
f()
})
}
}
// Reference count functions. Only new-objects need to specify the traverse bit.
#[inline]
fn as_traitobject<T: Trace>(object: &T) -> TraitObject {
let trace: &Trace = object;
unsafe { transmute(trace) }
}
/// Write a reference count increment to the journal for a newly allocated object
#[inline]
fn write<T: Trace>(object: &T, is_new: bool, flags: usize) {
GC_JOURNAL.with(|j| {
let tx = unsafe { &*j.get() };
let tobj = as_traitobject(object);
// set the refcount-increment bit
let ptr = (tobj.data as usize) | flags;
// set the traversible bit
let mut vtable = tobj.vtable as usize;
if is_new && object.traversible() {
vtable |= TRAVERSE_BIT;
}
tx.send(Object {
ptr: ptr,
vtable: vtable,
});
});
}
// GcBox implementation
impl<T: Trace> GcBox<T> {
fn new(value: T) -> GcBox<T> {
GcBox {
value: value,
}
}
}
unsafe impl<T: Trace> Trace for GcBox<T> {
#[inline]
fn traversible(&self) -> bool {
self.value.traversible()
}
#[inline]
unsafe fn trace(&self, heap: &mut TraceStack) {
self.value.trace(heap);
}
}
// GcRoot implementation
impl<T: Trace> GcRoot<T> {
/// Put a new object on the heap and hand ownership to the GC, writing a reference count
/// increment to the journal.
pub fn new(value: T) -> GcRoot<T> {
let boxed = Box::new(GcBox::new(value));
write(&*boxed, true, NEW_BIT | INC_BIT);
GcRoot {
ptr: Box::into_raw(boxed)
}
}
fn from_raw(ptr: *mut GcBox<T>) -> GcRoot<T> {
let root = GcRoot { ptr: ptr };
write(&*root, false, INC_BIT);
root
}
fn ptr(&self) -> *mut GcBox<T> {
self.ptr
}
fn value(&self) -> &T {
unsafe { &(*self.ptr).value }
}
fn value_mut(&mut self) -> &mut T {
unsafe { &mut (*self.ptr).value }
}
}
impl<T: Trace> Drop for GcRoot<T> {
fn drop(&mut self) {
write(&**self, false, 0);
}
}
impl<T: Trace> Deref for GcRoot<T> {
type Target = T;
fn deref(&self) -> &T {
self.value()
}
}
impl<T: Trace> DerefMut for GcRoot<T> {
fn deref_mut(&mut self) -> &mut T {
self.value_mut()
}
}
impl<T: Trace> Clone for GcRoot<T> {
fn clone(&self) -> Self {
GcRoot::from_raw(self.ptr())
}
}
// Gc implementation
impl<T: Trace> Gc<T> {
/// Creates a new null pointer.
pub fn null() -> Gc<T> {
Gc {
ptr: null_mut(),
}
}
/// Move a value to the heap and create a pointer to it.
pub fn new(value: T) -> Gc<T> {
let boxed = Box::new(GcBox::new(value));
write(&*boxed, true, NEW_BIT);
Gc {
ptr: Box::into_raw(boxed)
}
}
/// Return the raw pointer value, or None if it is a null pointer.
pub fn as_raw(&self) -> Option<*mut GcBox<T>> {
if self.ptr.is_null() {
None
} else |
}
/// Pointer equality comparison.
pub fn is(&self, other: Gc<T>) -> bool {
self.ptr == other.ptr
}
fn from_raw(ptr: *mut GcBox<T>) -> Gc<T> {
Gc {
ptr: ptr,
}
}
fn ptr(&self) -> *mut GcBox<T> {
self.ptr
}
fn value(&self) -> &T {
unsafe { &(*self.ptr).value }
}
fn value_mut(&mut self) -> &mut T {
unsafe { &mut (*self.ptr).value }
}
}
impl<T: Trace> Deref for Gc<T> {
type Target = T;
fn deref(&self) -> &T {
self.value()
}
}
impl<T: Trace> DerefMut for Gc<T> {
fn deref_mut(&mut self) -> &mut T {
self.value_mut()
}
}
impl<T: Trace> Clone for Gc<T> {
fn clone(&self) -> Self {
Gc {
ptr: self.ptr,
}
}
}
impl<T: Trace> Copy for Gc<T> {}
// GcAtomic implementation
impl<T: Trace> GcAtomic<T> {
/// Instantiate a new null pointer
pub fn null() -> GcAtomic<T> {
GcAtomic {
ptr: AtomicPtr::new(null_mut())
}
}
/// Instantiate a new pointer, moving `value` to the heap. Writes to the journal.
pub fn new(value: T) -> GcAtomic<T> {
let boxed = Box::new(GcBox::new(value));
write(&*boxed, true, NEW_BIT);
GcAtomic {
ptr: AtomicPtr::new(Box::into_raw(boxed)),
}
}
/// Root the pointer by loading it into a `GcRoot<T>`
///
/// Panics if `order` is `Release` or `AcqRel`.
pub fn load_into_root(&self, order: Ordering) -> GcRoot<T> {
let root = GcRoot {
ptr: self.ptr.load(order),
};
write(&*root, false, INC_BIT);
root
}
/// Copy the pointer into a new `Gc` instance.
///
/// Panics if `order` is `Release` or `AcqRel`.
pub fn load_into_gc(&self, order: Ordering) -> Gc<T> {
Gc::from_raw(self.ptr.load(order))
}
/// Fetch the current raw pointer value
///
/// Panics if `order` is `Release` or `AcqRel`.
pub fn load_raw(&self, order: Ordering) -> *mut GcBox<T> {
self.ptr.load(order)
}
/// Replace the current pointer value with the pointer from the given `GcRoot`.
///
/// Panics if `order` is `Acquire` or `AcqRel`.
pub fn store_from_root(&self, root: GcRoot<T>, order: Ordering) {
self.ptr.store(root.ptr(), order);
}
/// Replace the current pointer value with the pointer from the given `Gc`.
///
/// Panics of `order` is `Acquire` or `AcqRel`.
pub fn store_from_gc(&self, gc: Gc<T>, order: Ordering) {
self.ptr.store(gc.ptr(), order);
}
/// Replace the current pointer value with the given raw pointer
///
/// Panics if `order` is `Acquire` or `AcqRel`.
pub fn store_raw(&self, ptr: *mut GcBox<T>, order: Ordering) {
self.ptr.store(ptr, order);
}
}
| {
Some(self.ptr)
} | conditional_block |
self-impl-2.rs | // run-pass
#![allow(dead_code)]
#![allow(unused_variables)]
// Test that we can use `Self` types in impls in the expected way.
// pretty-expanded FIXME #23616
struct Foo;
// Test uses on inherent impl.
impl Foo {
fn | (_x: Self, _y: &Self, _z: Box<Self>) -> Self {
Foo
}
fn baz() {
// Test that Self cannot be shadowed.
type Foo = i32;
// There is no empty method on i32.
Self::empty();
let _: Self = Foo;
}
fn empty() {}
}
// Test uses when implementing a trait and with a type parameter.
pub struct Baz<X> {
pub f: X,
}
trait SuperBar {
type SuperQux;
}
trait Bar<X>: SuperBar {
type Qux;
fn bar(x: Self, y: &Self, z: Box<Self>, _: Self::SuperQux) -> Self;
fn dummy(&self, x: X) { }
}
impl SuperBar for Box<Baz<isize>> {
type SuperQux = bool;
}
impl Bar<isize> for Box<Baz<isize>> {
type Qux = i32;
fn bar(_x: Self, _y: &Self, _z: Box<Self>, _: Self::SuperQux) -> Self {
let _: Self::Qux = 42;
let _: <Self as Bar<isize>>::Qux = 42;
let _: Self::SuperQux = true;
let _: <Self as SuperBar>::SuperQux = true;
Box::new(Baz { f: 42 })
}
}
fn main() {
let _: Foo = Foo::foo(Foo, &Foo, Box::new(Foo));
let _: Box<Baz<isize>> = Bar::bar(Box::new(Baz { f: 42 }),
&Box::new(Baz { f: 42 }),
Box::new(Box::new(Baz { f: 42 })),
true);
}
| foo | identifier_name |
self-impl-2.rs | // run-pass
#![allow(dead_code)]
#![allow(unused_variables)]
// Test that we can use `Self` types in impls in the expected way.
// pretty-expanded FIXME #23616
struct Foo;
// Test uses on inherent impl.
impl Foo {
fn foo(_x: Self, _y: &Self, _z: Box<Self>) -> Self {
Foo
}
fn baz() {
// Test that Self cannot be shadowed.
type Foo = i32;
// There is no empty method on i32.
Self::empty();
let _: Self = Foo;
}
fn empty() {}
}
// Test uses when implementing a trait and with a type parameter.
pub struct Baz<X> {
pub f: X,
}
trait SuperBar {
type SuperQux;
}
trait Bar<X>: SuperBar {
type Qux;
| impl SuperBar for Box<Baz<isize>> {
type SuperQux = bool;
}
impl Bar<isize> for Box<Baz<isize>> {
type Qux = i32;
fn bar(_x: Self, _y: &Self, _z: Box<Self>, _: Self::SuperQux) -> Self {
let _: Self::Qux = 42;
let _: <Self as Bar<isize>>::Qux = 42;
let _: Self::SuperQux = true;
let _: <Self as SuperBar>::SuperQux = true;
Box::new(Baz { f: 42 })
}
}
fn main() {
let _: Foo = Foo::foo(Foo, &Foo, Box::new(Foo));
let _: Box<Baz<isize>> = Bar::bar(Box::new(Baz { f: 42 }),
&Box::new(Baz { f: 42 }),
Box::new(Box::new(Baz { f: 42 })),
true);
} | fn bar(x: Self, y: &Self, z: Box<Self>, _: Self::SuperQux) -> Self;
fn dummy(&self, x: X) { }
}
| random_line_split |
self-impl-2.rs | // run-pass
#![allow(dead_code)]
#![allow(unused_variables)]
// Test that we can use `Self` types in impls in the expected way.
// pretty-expanded FIXME #23616
struct Foo;
// Test uses on inherent impl.
impl Foo {
fn foo(_x: Self, _y: &Self, _z: Box<Self>) -> Self {
Foo
}
fn baz() |
fn empty() {}
}
// Test uses when implementing a trait and with a type parameter.
pub struct Baz<X> {
pub f: X,
}
trait SuperBar {
type SuperQux;
}
trait Bar<X>: SuperBar {
type Qux;
fn bar(x: Self, y: &Self, z: Box<Self>, _: Self::SuperQux) -> Self;
fn dummy(&self, x: X) { }
}
impl SuperBar for Box<Baz<isize>> {
type SuperQux = bool;
}
impl Bar<isize> for Box<Baz<isize>> {
type Qux = i32;
fn bar(_x: Self, _y: &Self, _z: Box<Self>, _: Self::SuperQux) -> Self {
let _: Self::Qux = 42;
let _: <Self as Bar<isize>>::Qux = 42;
let _: Self::SuperQux = true;
let _: <Self as SuperBar>::SuperQux = true;
Box::new(Baz { f: 42 })
}
}
fn main() {
let _: Foo = Foo::foo(Foo, &Foo, Box::new(Foo));
let _: Box<Baz<isize>> = Bar::bar(Box::new(Baz { f: 42 }),
&Box::new(Baz { f: 42 }),
Box::new(Box::new(Baz { f: 42 })),
true);
}
| {
// Test that Self cannot be shadowed.
type Foo = i32;
// There is no empty method on i32.
Self::empty();
let _: Self = Foo;
} | identifier_body |
areas.rs | use spin::Once;
static AREAS: Once<Areas> = Once::new();
#[derive(Debug)]
struct Areas {
areas: [Area; 16],
count: usize,
}
//TODO Do we align the base and length here,
// provide methods for it,
// or leave it up to the user
#[derive(Copy, Clone, Debug)]
pub struct Area {
pub base: u64,
pub length: u64,
}
#[derive(Debug, Clone)]
pub struct AreaIter {
areas: &'static Areas,
i: usize,
}
impl Areas {
const fn new() -> Areas {
Areas {
areas: [Area{base:0, length: 0};16],
count: 0,
} | fn add_area(&mut self, base: u64, length: u64) {
self.areas[self.count] = Area{base:base, length: length};
self.count += 1;
}
fn iter(&'static self) -> AreaIter {
AreaIter {i: 0, areas: self}
}
}
impl Iterator for AreaIter {
type Item = &'static Area;
fn next(&mut self) -> Option<Self::Item> {
if self.i < self.areas.count {
let item = &self.areas.areas[self.i];
self.i += 1;
Some(item)
} else {
None
}
}
}
fn areas() -> &'static Areas {
AREAS.try().expect("AREAS not initialized yet")
}
pub fn memory_areas() -> AreaIter {
areas().iter()
}
pub fn init(boot_info: &::bootinfo::BootInfo) {
AREAS.call_once(|| {
let mut areas = Areas::new();
for area in boot_info.memory_areas() {
// Only copy free areas
if area.usage == ::bootinfo::Usage::Free {
areas.add_area(area.start, area.length);
}
}
areas
});
//TODO I'm not sure if I want this here
// We should probably move this to a print function, or something.
println!("Areas");
for area in memory_areas() {
println!(" start: 0x{:x}, length: 0x{:x}", area.base, area.length);
}
} | }
| random_line_split |
areas.rs | use spin::Once;
static AREAS: Once<Areas> = Once::new();
#[derive(Debug)]
struct Areas {
areas: [Area; 16],
count: usize,
}
//TODO Do we align the base and length here,
// provide methods for it,
// or leave it up to the user
#[derive(Copy, Clone, Debug)]
pub struct Area {
pub base: u64,
pub length: u64,
}
#[derive(Debug, Clone)]
pub struct AreaIter {
areas: &'static Areas,
i: usize,
}
impl Areas {
const fn new() -> Areas {
Areas {
areas: [Area{base:0, length: 0};16],
count: 0,
}
}
fn add_area(&mut self, base: u64, length: u64) {
self.areas[self.count] = Area{base:base, length: length};
self.count += 1;
}
fn iter(&'static self) -> AreaIter {
AreaIter {i: 0, areas: self}
}
}
impl Iterator for AreaIter {
type Item = &'static Area;
fn next(&mut self) -> Option<Self::Item> {
if self.i < self.areas.count {
let item = &self.areas.areas[self.i];
self.i += 1;
Some(item)
} else {
None
}
}
}
fn areas() -> &'static Areas {
AREAS.try().expect("AREAS not initialized yet")
}
pub fn memory_areas() -> AreaIter {
areas().iter()
}
pub fn init(boot_info: &::bootinfo::BootInfo) {
AREAS.call_once(|| {
let mut areas = Areas::new();
for area in boot_info.memory_areas() {
// Only copy free areas
if area.usage == ::bootinfo::Usage::Free |
}
areas
});
//TODO I'm not sure if I want this here
// We should probably move this to a print function, or something.
println!("Areas");
for area in memory_areas() {
println!(" start: 0x{:x}, length: 0x{:x}", area.base, area.length);
}
} | {
areas.add_area(area.start, area.length);
} | conditional_block |
areas.rs | use spin::Once;
static AREAS: Once<Areas> = Once::new();
#[derive(Debug)]
struct Areas {
areas: [Area; 16],
count: usize,
}
//TODO Do we align the base and length here,
// provide methods for it,
// or leave it up to the user
#[derive(Copy, Clone, Debug)]
pub struct Area {
pub base: u64,
pub length: u64,
}
#[derive(Debug, Clone)]
pub struct AreaIter {
areas: &'static Areas,
i: usize,
}
impl Areas {
const fn new() -> Areas {
Areas {
areas: [Area{base:0, length: 0};16],
count: 0,
}
}
fn add_area(&mut self, base: u64, length: u64) {
self.areas[self.count] = Area{base:base, length: length};
self.count += 1;
}
fn iter(&'static self) -> AreaIter |
}
impl Iterator for AreaIter {
type Item = &'static Area;
fn next(&mut self) -> Option<Self::Item> {
if self.i < self.areas.count {
let item = &self.areas.areas[self.i];
self.i += 1;
Some(item)
} else {
None
}
}
}
fn areas() -> &'static Areas {
AREAS.try().expect("AREAS not initialized yet")
}
pub fn memory_areas() -> AreaIter {
areas().iter()
}
pub fn init(boot_info: &::bootinfo::BootInfo) {
AREAS.call_once(|| {
let mut areas = Areas::new();
for area in boot_info.memory_areas() {
// Only copy free areas
if area.usage == ::bootinfo::Usage::Free {
areas.add_area(area.start, area.length);
}
}
areas
});
//TODO I'm not sure if I want this here
// We should probably move this to a print function, or something.
println!("Areas");
for area in memory_areas() {
println!(" start: 0x{:x}, length: 0x{:x}", area.base, area.length);
}
} | {
AreaIter {i: 0, areas: self}
} | identifier_body |
areas.rs | use spin::Once;
static AREAS: Once<Areas> = Once::new();
#[derive(Debug)]
struct Areas {
areas: [Area; 16],
count: usize,
}
//TODO Do we align the base and length here,
// provide methods for it,
// or leave it up to the user
#[derive(Copy, Clone, Debug)]
pub struct Area {
pub base: u64,
pub length: u64,
}
#[derive(Debug, Clone)]
pub struct AreaIter {
areas: &'static Areas,
i: usize,
}
impl Areas {
const fn | () -> Areas {
Areas {
areas: [Area{base:0, length: 0};16],
count: 0,
}
}
fn add_area(&mut self, base: u64, length: u64) {
self.areas[self.count] = Area{base:base, length: length};
self.count += 1;
}
fn iter(&'static self) -> AreaIter {
AreaIter {i: 0, areas: self}
}
}
impl Iterator for AreaIter {
type Item = &'static Area;
fn next(&mut self) -> Option<Self::Item> {
if self.i < self.areas.count {
let item = &self.areas.areas[self.i];
self.i += 1;
Some(item)
} else {
None
}
}
}
fn areas() -> &'static Areas {
AREAS.try().expect("AREAS not initialized yet")
}
pub fn memory_areas() -> AreaIter {
areas().iter()
}
pub fn init(boot_info: &::bootinfo::BootInfo) {
AREAS.call_once(|| {
let mut areas = Areas::new();
for area in boot_info.memory_areas() {
// Only copy free areas
if area.usage == ::bootinfo::Usage::Free {
areas.add_area(area.start, area.length);
}
}
areas
});
//TODO I'm not sure if I want this here
// We should probably move this to a print function, or something.
println!("Areas");
for area in memory_areas() {
println!(" start: 0x{:x}, length: 0x{:x}", area.base, area.length);
}
} | new | identifier_name |
dominators.rs | //! Compute dominators of a control-flow graph.
//!
//! # The Dominance Relation
//!
//! In a directed graph with a root node **R**, a node **A** is said to *dominate* a
//! node **B** iff every path from **R** to **B** contains **A**.
//!
//! The node **A** is said to *strictly dominate* the node **B** iff **A** dominates
//! **B** and **A ≠ B**.
//!
//! The node **A** is said to be the *immediate dominator* of a node **B** iff it
//! strictly dominates **B** and there does not exist any node **C** where **A**
//! dominates **C** and **C** dominates **B**.
use std::cmp::Ordering;
use std::collections::{HashMap, HashSet, hash_map::Iter};
use std::hash::Hash;
use crate::visit::{DfsPostOrder, GraphBase, IntoNeighbors, Visitable, Walker};
/// The dominance relation for some graph and root.
#[derive(Debug, Clone)]
pub struct Dominators<N>
where
N: Copy + Eq + Hash,
{
root: N,
dominators: HashMap<N, N>,
}
impl<N> Dominators<N>
where
N: Copy + Eq + Hash,
{
/// Get the root node used to construct these dominance relations.
pub fn root(&self) -> N {
self.root
}
/// Get the immediate dominator of the given node.
///
/// Returns `None` for any node that is not reachable from the root, and for
/// the root itself.
pub fn immediate_dominator(&self, node: N) -> Option<N> {
| /// Iterate over the given node's strict dominators.
///
/// If the given node is not reachable from the root, then `None` is
/// returned.
pub fn strict_dominators(&self, node: N) -> Option<DominatorsIter<N>> {
if self.dominators.contains_key(&node) {
Some(DominatorsIter {
dominators: self,
node: self.immediate_dominator(node),
})
} else {
None
}
}
/// Iterate over all of the given node's dominators (including the given
/// node itself).
///
/// If the given node is not reachable from the root, then `None` is
/// returned.
pub fn dominators(&self, node: N) -> Option<DominatorsIter<N>> {
if self.dominators.contains_key(&node) {
Some(DominatorsIter {
dominators: self,
node: Some(node),
})
} else {
None
}
}
/// Iterate over all nodes immediately dominated by the given node (not
/// including the given node itself).
pub fn immediately_dominated_by(&self, node: N) -> DominatedByIter<N> {
DominatedByIter {
iter: self.dominators.iter(),
node: node
}
}
}
/// Iterator for a node's dominators.
pub struct DominatorsIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
dominators: &'a Dominators<N>,
node: Option<N>,
}
impl<'a, N> Iterator for DominatorsIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
type Item = N;
fn next(&mut self) -> Option<Self::Item> {
let next = self.node.take();
if let Some(next) = next {
self.node = self.dominators.immediate_dominator(next);
}
next
}
}
/// Iterator for nodes dominated by a given node.
pub struct DominatedByIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
iter: Iter<'a, N, N>,
node: N,
}
impl<'a, N> Iterator for DominatedByIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
type Item = N;
fn next(&mut self) -> Option<Self::Item> {
while let Some(next) = self.iter.next() {
if next.1 == &self.node {
return Some(*next.0);
}
}
None
}
}
/// The undefined dominator sentinel, for when we have not yet discovered a
/// node's dominator.
const UNDEFINED: usize = ::std::usize::MAX;
/// This is an implementation of the engineered ["Simple, Fast Dominance
/// Algorithm"][0] discovered by Cooper et al.
///
/// This algorithm is **O(|V|²)**, and therefore has slower theoretical running time
/// than the Lengauer-Tarjan algorithm (which is **O(|E| log |V|)**. However,
/// Cooper et al found it to be faster in practice on control flow graphs of up
/// to ~30,000 vertices.
///
/// [0]: http://www.cs.rice.edu/~keith/EMBED/dom.pdf
pub fn simple_fast<G>(graph: G, root: G::NodeId) -> Dominators<G::NodeId>
where
G: IntoNeighbors + Visitable,
<G as GraphBase>::NodeId: Eq + Hash,
{
let (post_order, predecessor_sets) = simple_fast_post_order(graph, root);
let length = post_order.len();
debug_assert!(length > 0);
debug_assert!(post_order.last() == Some(&root));
// From here on out we use indices into `post_order` instead of actual
// `NodeId`s wherever possible. This greatly improves the performance of
// this implementation, but we have to pay a little bit of upfront cost to
// convert our data structures to play along first.
// Maps a node to its index into `post_order`.
let node_to_post_order_idx: HashMap<_, _> = post_order
.iter()
.enumerate()
.map(|(idx, &node)| (node, idx))
.collect();
// Maps a node's `post_order` index to its set of predecessors's indices
// into `post_order` (as a vec).
let idx_to_predecessor_vec =
predecessor_sets_to_idx_vecs(&post_order, &node_to_post_order_idx, predecessor_sets);
let mut dominators = vec![UNDEFINED; length];
dominators[length - 1] = length - 1;
let mut changed = true;
while changed {
changed = false;
// Iterate in reverse post order, skipping the root.
for idx in (0..length - 1).rev() {
debug_assert!(post_order[idx]!= root);
// Take the intersection of every predecessor's dominator set; that
// is the current best guess at the immediate dominator for this
// node.
let new_idom_idx = {
let mut predecessors = idx_to_predecessor_vec[idx]
.iter()
.filter(|&&p| dominators[p]!= UNDEFINED);
let new_idom_idx = predecessors.next().expect(
"Because the root is initialized to dominate itself, and is the \
first node in every path, there must exist a predecessor to this \
node that also has a dominator",
);
predecessors.fold(*new_idom_idx, |new_idom_idx, &predecessor_idx| {
intersect(&dominators, new_idom_idx, predecessor_idx)
})
};
debug_assert!(new_idom_idx < length);
if new_idom_idx!= dominators[idx] {
dominators[idx] = new_idom_idx;
changed = true;
}
}
}
// All done! Translate the indices back into proper `G::NodeId`s.
debug_assert!(!dominators.iter().any(|&dom| dom == UNDEFINED));
Dominators {
root,
dominators: dominators
.into_iter()
.enumerate()
.map(|(idx, dom_idx)| (post_order[idx], post_order[dom_idx]))
.collect(),
}
}
fn intersect(dominators: &[usize], mut finger1: usize, mut finger2: usize) -> usize {
loop {
match finger1.cmp(&finger2) {
Ordering::Less => finger1 = dominators[finger1],
Ordering::Greater => finger2 = dominators[finger2],
Ordering::Equal => return finger1,
}
}
}
fn predecessor_sets_to_idx_vecs<N>(
post_order: &[N],
node_to_post_order_idx: &HashMap<N, usize>,
mut predecessor_sets: HashMap<N, HashSet<N>>,
) -> Vec<Vec<usize>>
where
N: Copy + Eq + Hash,
{
post_order
.iter()
.map(|node| {
predecessor_sets
.remove(node)
.map(|predecessors| {
predecessors
.into_iter()
.map(|p| *node_to_post_order_idx.get(&p).unwrap())
.collect()
})
.unwrap_or_else(Vec::new)
})
.collect()
}
fn simple_fast_post_order<G>(
graph: G,
root: G::NodeId,
) -> (Vec<G::NodeId>, HashMap<G::NodeId, HashSet<G::NodeId>>)
where
G: IntoNeighbors + Visitable,
<G as GraphBase>::NodeId: Eq + Hash,
{
let mut post_order = vec![];
let mut predecessor_sets = HashMap::new();
for node in DfsPostOrder::new(graph, root).iter(graph) {
post_order.push(node);
for successor in graph.neighbors(node) {
predecessor_sets
.entry(successor)
.or_insert_with(HashSet::new)
.insert(node);
}
}
(post_order, predecessor_sets)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_iter_dominators() {
let doms: Dominators<u32> = Dominators {
root: 0,
dominators: [(2, 1), (1, 0), (0, 0)].iter().cloned().collect(),
};
let all_doms: Vec<_> = doms.dominators(2).unwrap().collect();
assert_eq!(vec![2, 1, 0], all_doms);
assert_eq!(None::<()>, doms.dominators(99).map(|_| unreachable!()));
let strict_doms: Vec<_> = doms.strict_dominators(2).unwrap().collect();
assert_eq!(vec![1, 0], strict_doms);
assert_eq!(
None::<()>,
doms.strict_dominators(99).map(|_| unreachable!())
);
let dom_by: Vec<_> = doms.immediately_dominated_by(1).collect();
assert_eq!(vec![2], dom_by);
assert_eq!(None, doms.immediately_dominated_by(99).next());
}
}
| if node == self.root {
None
} else {
self.dominators.get(&node).cloned()
}
}
| identifier_body |
dominators.rs | //! Compute dominators of a control-flow graph.
//!
//! # The Dominance Relation
//!
//! In a directed graph with a root node **R**, a node **A** is said to *dominate* a
//! node **B** iff every path from **R** to **B** contains **A**.
//!
//! The node **A** is said to *strictly dominate* the node **B** iff **A** dominates
//! **B** and **A ≠ B**.
//!
//! The node **A** is said to be the *immediate dominator* of a node **B** iff it
//! strictly dominates **B** and there does not exist any node **C** where **A**
//! dominates **C** and **C** dominates **B**.
use std::cmp::Ordering;
use std::collections::{HashMap, HashSet, hash_map::Iter};
use std::hash::Hash;
use crate::visit::{DfsPostOrder, GraphBase, IntoNeighbors, Visitable, Walker};
/// The dominance relation for some graph and root.
#[derive(Debug, Clone)]
pub struct Dominators<N>
where
N: Copy + Eq + Hash,
{
root: N,
dominators: HashMap<N, N>,
}
impl<N> Dominators<N>
where
N: Copy + Eq + Hash,
{
/// Get the root node used to construct these dominance relations.
pub fn root(&self) -> N {
self.root
}
/// Get the immediate dominator of the given node.
///
/// Returns `None` for any node that is not reachable from the root, and for
/// the root itself.
pub fn immediate_dominator(&self, node: N) -> Option<N> {
if node == self.root {
None
} else {
self.dominators.get(&node).cloned()
}
}
/// Iterate over the given node's strict dominators.
///
/// If the given node is not reachable from the root, then `None` is
/// returned.
pub fn strict_dominators(&self, node: N) -> Option<DominatorsIter<N>> {
if self.dominators.contains_key(&node) {
Some(DominatorsIter {
dominators: self,
node: self.immediate_dominator(node),
})
} else {
None
}
}
/// Iterate over all of the given node's dominators (including the given
/// node itself).
///
/// If the given node is not reachable from the root, then `None` is
/// returned.
pub fn dominators(&self, node: N) -> Option<DominatorsIter<N>> {
if self.dominators.contains_key(&node) {
Some(DominatorsIter {
dominators: self,
node: Some(node),
})
} else {
None
}
}
/// Iterate over all nodes immediately dominated by the given node (not
/// including the given node itself).
pub fn immediately_dominated_by(&self, node: N) -> DominatedByIter<N> {
DominatedByIter {
iter: self.dominators.iter(),
node: node
}
}
}
/// Iterator for a node's dominators.
pub struct DominatorsIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
dominators: &'a Dominators<N>,
node: Option<N>,
}
impl<'a, N> Iterator for DominatorsIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
type Item = N;
fn next(&mut self) -> Option<Self::Item> {
let next = self.node.take();
if let Some(next) = next {
self.node = self.dominators.immediate_dominator(next);
}
next
}
}
/// Iterator for nodes dominated by a given node.
pub struct DominatedByIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
iter: Iter<'a, N, N>,
node: N,
}
impl<'a, N> Iterator for DominatedByIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
type Item = N;
fn next(&mut self) -> Option<Self::Item> {
while let Some(next) = self.iter.next() {
if next.1 == &self.node {
return Some(*next.0);
}
}
None
}
}
/// The undefined dominator sentinel, for when we have not yet discovered a
/// node's dominator.
const UNDEFINED: usize = ::std::usize::MAX;
/// This is an implementation of the engineered ["Simple, Fast Dominance
/// Algorithm"][0] discovered by Cooper et al.
///
/// This algorithm is **O(|V|²)**, and therefore has slower theoretical running time
/// than the Lengauer-Tarjan algorithm (which is **O(|E| log |V|)**. However,
/// Cooper et al found it to be faster in practice on control flow graphs of up
/// to ~30,000 vertices.
///
/// [0]: http://www.cs.rice.edu/~keith/EMBED/dom.pdf
pub fn simple_fast<G>(graph: G, root: G::NodeId) -> Dominators<G::NodeId>
where
G: IntoNeighbors + Visitable,
<G as GraphBase>::NodeId: Eq + Hash,
{
let (post_order, predecessor_sets) = simple_fast_post_order(graph, root);
let length = post_order.len();
debug_assert!(length > 0);
debug_assert!(post_order.last() == Some(&root));
// From here on out we use indices into `post_order` instead of actual
// `NodeId`s wherever possible. This greatly improves the performance of
// this implementation, but we have to pay a little bit of upfront cost to
// convert our data structures to play along first.
// Maps a node to its index into `post_order`.
let node_to_post_order_idx: HashMap<_, _> = post_order
.iter()
.enumerate()
.map(|(idx, &node)| (node, idx))
.collect();
// Maps a node's `post_order` index to its set of predecessors's indices
// into `post_order` (as a vec).
let idx_to_predecessor_vec =
predecessor_sets_to_idx_vecs(&post_order, &node_to_post_order_idx, predecessor_sets);
let mut dominators = vec![UNDEFINED; length];
dominators[length - 1] = length - 1;
let mut changed = true;
while changed {
changed = false;
// Iterate in reverse post order, skipping the root.
for idx in (0..length - 1).rev() {
debug_assert!(post_order[idx]!= root);
// Take the intersection of every predecessor's dominator set; that
// is the current best guess at the immediate dominator for this
// node.
let new_idom_idx = {
let mut predecessors = idx_to_predecessor_vec[idx]
.iter()
.filter(|&&p| dominators[p]!= UNDEFINED);
let new_idom_idx = predecessors.next().expect(
"Because the root is initialized to dominate itself, and is the \
first node in every path, there must exist a predecessor to this \
node that also has a dominator",
);
predecessors.fold(*new_idom_idx, |new_idom_idx, &predecessor_idx| {
intersect(&dominators, new_idom_idx, predecessor_idx)
})
};
debug_assert!(new_idom_idx < length);
if new_idom_idx!= dominators[idx] {
dominators[idx] = new_idom_idx;
changed = true;
}
}
}
// All done! Translate the indices back into proper `G::NodeId`s.
debug_assert!(!dominators.iter().any(|&dom| dom == UNDEFINED));
Dominators {
root,
dominators: dominators
.into_iter()
.enumerate()
.map(|(idx, dom_idx)| (post_order[idx], post_order[dom_idx]))
.collect(),
}
}
fn intersect(dominators: &[usize], mut finger1: usize, mut finger2: usize) -> usize {
loop {
match finger1.cmp(&finger2) {
Ordering::Less => finger1 = dominators[finger1],
Ordering::Greater => finger2 = dominators[finger2],
Ordering::Equal => return finger1,
}
}
}
fn predecessor_sets_to_idx_vecs<N>(
post_order: &[N],
node_to_post_order_idx: &HashMap<N, usize>,
mut predecessor_sets: HashMap<N, HashSet<N>>,
) -> Vec<Vec<usize>>
where
N: Copy + Eq + Hash,
{
post_order
.iter()
.map(|node| {
predecessor_sets
.remove(node)
.map(|predecessors| {
predecessors
.into_iter()
.map(|p| *node_to_post_order_idx.get(&p).unwrap())
.collect()
})
.unwrap_or_else(Vec::new)
})
.collect()
}
fn simple_fast_post_order<G>(
graph: G,
root: G::NodeId,
) -> (Vec<G::NodeId>, HashMap<G::NodeId, HashSet<G::NodeId>>)
where
G: IntoNeighbors + Visitable,
<G as GraphBase>::NodeId: Eq + Hash,
{
let mut post_order = vec![];
let mut predecessor_sets = HashMap::new();
for node in DfsPostOrder::new(graph, root).iter(graph) {
post_order.push(node);
for successor in graph.neighbors(node) {
predecessor_sets
.entry(successor)
.or_insert_with(HashSet::new)
.insert(node);
}
}
(post_order, predecessor_sets)
}
#[cfg(test)]
mod tests {
use super::*;
#[test] | };
let all_doms: Vec<_> = doms.dominators(2).unwrap().collect();
assert_eq!(vec![2, 1, 0], all_doms);
assert_eq!(None::<()>, doms.dominators(99).map(|_| unreachable!()));
let strict_doms: Vec<_> = doms.strict_dominators(2).unwrap().collect();
assert_eq!(vec![1, 0], strict_doms);
assert_eq!(
None::<()>,
doms.strict_dominators(99).map(|_| unreachable!())
);
let dom_by: Vec<_> = doms.immediately_dominated_by(1).collect();
assert_eq!(vec![2], dom_by);
assert_eq!(None, doms.immediately_dominated_by(99).next());
}
} | fn test_iter_dominators() {
let doms: Dominators<u32> = Dominators {
root: 0,
dominators: [(2, 1), (1, 0), (0, 0)].iter().cloned().collect(), | random_line_split |
dominators.rs | //! Compute dominators of a control-flow graph.
//!
//! # The Dominance Relation
//!
//! In a directed graph with a root node **R**, a node **A** is said to *dominate* a
//! node **B** iff every path from **R** to **B** contains **A**.
//!
//! The node **A** is said to *strictly dominate* the node **B** iff **A** dominates
//! **B** and **A ≠ B**.
//!
//! The node **A** is said to be the *immediate dominator* of a node **B** iff it
//! strictly dominates **B** and there does not exist any node **C** where **A**
//! dominates **C** and **C** dominates **B**.
use std::cmp::Ordering;
use std::collections::{HashMap, HashSet, hash_map::Iter};
use std::hash::Hash;
use crate::visit::{DfsPostOrder, GraphBase, IntoNeighbors, Visitable, Walker};
/// The dominance relation for some graph and root.
#[derive(Debug, Clone)]
pub struct Dominators<N>
where
N: Copy + Eq + Hash,
{
root: N,
dominators: HashMap<N, N>,
}
impl<N> Dominators<N>
where
N: Copy + Eq + Hash,
{
/// Get the root node used to construct these dominance relations.
pub fn root(&self) -> N {
self.root
}
/// Get the immediate dominator of the given node.
///
/// Returns `None` for any node that is not reachable from the root, and for
/// the root itself.
pub fn immediate_dominator(&self, node: N) -> Option<N> {
if node == self.root {
None
} else {
self.dominators.get(&node).cloned()
}
}
/// Iterate over the given node's strict dominators.
///
/// If the given node is not reachable from the root, then `None` is
/// returned.
pub fn strict_dominators(&self, node: N) -> Option<DominatorsIter<N>> {
if self.dominators.contains_key(&node) {
Some(DominatorsIter {
dominators: self,
node: self.immediate_dominator(node),
})
} else {
| }
/// Iterate over all of the given node's dominators (including the given
/// node itself).
///
/// If the given node is not reachable from the root, then `None` is
/// returned.
pub fn dominators(&self, node: N) -> Option<DominatorsIter<N>> {
if self.dominators.contains_key(&node) {
Some(DominatorsIter {
dominators: self,
node: Some(node),
})
} else {
None
}
}
/// Iterate over all nodes immediately dominated by the given node (not
/// including the given node itself).
pub fn immediately_dominated_by(&self, node: N) -> DominatedByIter<N> {
DominatedByIter {
iter: self.dominators.iter(),
node: node
}
}
}
/// Iterator for a node's dominators.
pub struct DominatorsIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
dominators: &'a Dominators<N>,
node: Option<N>,
}
impl<'a, N> Iterator for DominatorsIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
type Item = N;
fn next(&mut self) -> Option<Self::Item> {
let next = self.node.take();
if let Some(next) = next {
self.node = self.dominators.immediate_dominator(next);
}
next
}
}
/// Iterator for nodes dominated by a given node.
pub struct DominatedByIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
iter: Iter<'a, N, N>,
node: N,
}
impl<'a, N> Iterator for DominatedByIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
type Item = N;
fn next(&mut self) -> Option<Self::Item> {
while let Some(next) = self.iter.next() {
if next.1 == &self.node {
return Some(*next.0);
}
}
None
}
}
/// The undefined dominator sentinel, for when we have not yet discovered a
/// node's dominator.
const UNDEFINED: usize = ::std::usize::MAX;
/// This is an implementation of the engineered ["Simple, Fast Dominance
/// Algorithm"][0] discovered by Cooper et al.
///
/// This algorithm is **O(|V|²)**, and therefore has slower theoretical running time
/// than the Lengauer-Tarjan algorithm (which is **O(|E| log |V|)**. However,
/// Cooper et al found it to be faster in practice on control flow graphs of up
/// to ~30,000 vertices.
///
/// [0]: http://www.cs.rice.edu/~keith/EMBED/dom.pdf
pub fn simple_fast<G>(graph: G, root: G::NodeId) -> Dominators<G::NodeId>
where
G: IntoNeighbors + Visitable,
<G as GraphBase>::NodeId: Eq + Hash,
{
let (post_order, predecessor_sets) = simple_fast_post_order(graph, root);
let length = post_order.len();
debug_assert!(length > 0);
debug_assert!(post_order.last() == Some(&root));
// From here on out we use indices into `post_order` instead of actual
// `NodeId`s wherever possible. This greatly improves the performance of
// this implementation, but we have to pay a little bit of upfront cost to
// convert our data structures to play along first.
// Maps a node to its index into `post_order`.
let node_to_post_order_idx: HashMap<_, _> = post_order
.iter()
.enumerate()
.map(|(idx, &node)| (node, idx))
.collect();
// Maps a node's `post_order` index to its set of predecessors's indices
// into `post_order` (as a vec).
let idx_to_predecessor_vec =
predecessor_sets_to_idx_vecs(&post_order, &node_to_post_order_idx, predecessor_sets);
let mut dominators = vec![UNDEFINED; length];
dominators[length - 1] = length - 1;
let mut changed = true;
while changed {
changed = false;
// Iterate in reverse post order, skipping the root.
for idx in (0..length - 1).rev() {
debug_assert!(post_order[idx]!= root);
// Take the intersection of every predecessor's dominator set; that
// is the current best guess at the immediate dominator for this
// node.
let new_idom_idx = {
let mut predecessors = idx_to_predecessor_vec[idx]
.iter()
.filter(|&&p| dominators[p]!= UNDEFINED);
let new_idom_idx = predecessors.next().expect(
"Because the root is initialized to dominate itself, and is the \
first node in every path, there must exist a predecessor to this \
node that also has a dominator",
);
predecessors.fold(*new_idom_idx, |new_idom_idx, &predecessor_idx| {
intersect(&dominators, new_idom_idx, predecessor_idx)
})
};
debug_assert!(new_idom_idx < length);
if new_idom_idx!= dominators[idx] {
dominators[idx] = new_idom_idx;
changed = true;
}
}
}
// All done! Translate the indices back into proper `G::NodeId`s.
debug_assert!(!dominators.iter().any(|&dom| dom == UNDEFINED));
Dominators {
root,
dominators: dominators
.into_iter()
.enumerate()
.map(|(idx, dom_idx)| (post_order[idx], post_order[dom_idx]))
.collect(),
}
}
fn intersect(dominators: &[usize], mut finger1: usize, mut finger2: usize) -> usize {
loop {
match finger1.cmp(&finger2) {
Ordering::Less => finger1 = dominators[finger1],
Ordering::Greater => finger2 = dominators[finger2],
Ordering::Equal => return finger1,
}
}
}
fn predecessor_sets_to_idx_vecs<N>(
post_order: &[N],
node_to_post_order_idx: &HashMap<N, usize>,
mut predecessor_sets: HashMap<N, HashSet<N>>,
) -> Vec<Vec<usize>>
where
N: Copy + Eq + Hash,
{
post_order
.iter()
.map(|node| {
predecessor_sets
.remove(node)
.map(|predecessors| {
predecessors
.into_iter()
.map(|p| *node_to_post_order_idx.get(&p).unwrap())
.collect()
})
.unwrap_or_else(Vec::new)
})
.collect()
}
fn simple_fast_post_order<G>(
graph: G,
root: G::NodeId,
) -> (Vec<G::NodeId>, HashMap<G::NodeId, HashSet<G::NodeId>>)
where
G: IntoNeighbors + Visitable,
<G as GraphBase>::NodeId: Eq + Hash,
{
let mut post_order = vec![];
let mut predecessor_sets = HashMap::new();
for node in DfsPostOrder::new(graph, root).iter(graph) {
post_order.push(node);
for successor in graph.neighbors(node) {
predecessor_sets
.entry(successor)
.or_insert_with(HashSet::new)
.insert(node);
}
}
(post_order, predecessor_sets)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_iter_dominators() {
let doms: Dominators<u32> = Dominators {
root: 0,
dominators: [(2, 1), (1, 0), (0, 0)].iter().cloned().collect(),
};
let all_doms: Vec<_> = doms.dominators(2).unwrap().collect();
assert_eq!(vec![2, 1, 0], all_doms);
assert_eq!(None::<()>, doms.dominators(99).map(|_| unreachable!()));
let strict_doms: Vec<_> = doms.strict_dominators(2).unwrap().collect();
assert_eq!(vec![1, 0], strict_doms);
assert_eq!(
None::<()>,
doms.strict_dominators(99).map(|_| unreachable!())
);
let dom_by: Vec<_> = doms.immediately_dominated_by(1).collect();
assert_eq!(vec![2], dom_by);
assert_eq!(None, doms.immediately_dominated_by(99).next());
}
}
| None
}
| conditional_block |
dominators.rs | //! Compute dominators of a control-flow graph.
//!
//! # The Dominance Relation
//!
//! In a directed graph with a root node **R**, a node **A** is said to *dominate* a
//! node **B** iff every path from **R** to **B** contains **A**.
//!
//! The node **A** is said to *strictly dominate* the node **B** iff **A** dominates
//! **B** and **A ≠ B**.
//!
//! The node **A** is said to be the *immediate dominator* of a node **B** iff it
//! strictly dominates **B** and there does not exist any node **C** where **A**
//! dominates **C** and **C** dominates **B**.
use std::cmp::Ordering;
use std::collections::{HashMap, HashSet, hash_map::Iter};
use std::hash::Hash;
use crate::visit::{DfsPostOrder, GraphBase, IntoNeighbors, Visitable, Walker};
/// The dominance relation for some graph and root.
#[derive(Debug, Clone)]
pub struct Dominators<N>
where
N: Copy + Eq + Hash,
{
root: N,
dominators: HashMap<N, N>,
}
impl<N> Dominators<N>
where
N: Copy + Eq + Hash,
{
/// Get the root node used to construct these dominance relations.
pub fn root(&self) -> N {
self.root
}
/// Get the immediate dominator of the given node.
///
/// Returns `None` for any node that is not reachable from the root, and for
/// the root itself.
pub fn immediate_dominator(&self, node: N) -> Option<N> {
if node == self.root {
None
} else {
self.dominators.get(&node).cloned()
}
}
/// Iterate over the given node's strict dominators.
///
/// If the given node is not reachable from the root, then `None` is
/// returned.
pub fn strict_dominators(&self, node: N) -> Option<DominatorsIter<N>> {
if self.dominators.contains_key(&node) {
Some(DominatorsIter {
dominators: self,
node: self.immediate_dominator(node),
})
} else {
None
}
}
/// Iterate over all of the given node's dominators (including the given
/// node itself).
///
/// If the given node is not reachable from the root, then `None` is
/// returned.
pub fn dominators(&self, node: N) -> Option<DominatorsIter<N>> {
if self.dominators.contains_key(&node) {
Some(DominatorsIter {
dominators: self,
node: Some(node),
})
} else {
None
}
}
/// Iterate over all nodes immediately dominated by the given node (not
/// including the given node itself).
pub fn immediately_dominated_by(&self, node: N) -> DominatedByIter<N> {
DominatedByIter {
iter: self.dominators.iter(),
node: node
}
}
}
/// Iterator for a node's dominators.
pub struct DominatorsIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
dominators: &'a Dominators<N>,
node: Option<N>,
}
impl<'a, N> Iterator for DominatorsIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
type Item = N;
fn next(&mut self) -> Option<Self::Item> {
let next = self.node.take();
if let Some(next) = next {
self.node = self.dominators.immediate_dominator(next);
}
next
}
}
/// Iterator for nodes dominated by a given node.
pub struct DominatedByIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
iter: Iter<'a, N, N>,
node: N,
}
impl<'a, N> Iterator for DominatedByIter<'a, N>
where
N: 'a + Copy + Eq + Hash,
{
type Item = N;
fn next(&mut self) -> Option<Self::Item> {
while let Some(next) = self.iter.next() {
if next.1 == &self.node {
return Some(*next.0);
}
}
None
}
}
/// The undefined dominator sentinel, for when we have not yet discovered a
/// node's dominator.
const UNDEFINED: usize = ::std::usize::MAX;
/// This is an implementation of the engineered ["Simple, Fast Dominance
/// Algorithm"][0] discovered by Cooper et al.
///
/// This algorithm is **O(|V|²)**, and therefore has slower theoretical running time
/// than the Lengauer-Tarjan algorithm (which is **O(|E| log |V|)**. However,
/// Cooper et al found it to be faster in practice on control flow graphs of up
/// to ~30,000 vertices.
///
/// [0]: http://www.cs.rice.edu/~keith/EMBED/dom.pdf
pub fn simple_fast<G>(graph: G, root: G::NodeId) -> Dominators<G::NodeId>
where
G: IntoNeighbors + Visitable,
<G as GraphBase>::NodeId: Eq + Hash,
{
let (post_order, predecessor_sets) = simple_fast_post_order(graph, root);
let length = post_order.len();
debug_assert!(length > 0);
debug_assert!(post_order.last() == Some(&root));
// From here on out we use indices into `post_order` instead of actual
// `NodeId`s wherever possible. This greatly improves the performance of
// this implementation, but we have to pay a little bit of upfront cost to
// convert our data structures to play along first.
// Maps a node to its index into `post_order`.
let node_to_post_order_idx: HashMap<_, _> = post_order
.iter()
.enumerate()
.map(|(idx, &node)| (node, idx))
.collect();
// Maps a node's `post_order` index to its set of predecessors's indices
// into `post_order` (as a vec).
let idx_to_predecessor_vec =
predecessor_sets_to_idx_vecs(&post_order, &node_to_post_order_idx, predecessor_sets);
let mut dominators = vec![UNDEFINED; length];
dominators[length - 1] = length - 1;
let mut changed = true;
while changed {
changed = false;
// Iterate in reverse post order, skipping the root.
for idx in (0..length - 1).rev() {
debug_assert!(post_order[idx]!= root);
// Take the intersection of every predecessor's dominator set; that
// is the current best guess at the immediate dominator for this
// node.
let new_idom_idx = {
let mut predecessors = idx_to_predecessor_vec[idx]
.iter()
.filter(|&&p| dominators[p]!= UNDEFINED);
let new_idom_idx = predecessors.next().expect(
"Because the root is initialized to dominate itself, and is the \
first node in every path, there must exist a predecessor to this \
node that also has a dominator",
);
predecessors.fold(*new_idom_idx, |new_idom_idx, &predecessor_idx| {
intersect(&dominators, new_idom_idx, predecessor_idx)
})
};
debug_assert!(new_idom_idx < length);
if new_idom_idx!= dominators[idx] {
dominators[idx] = new_idom_idx;
changed = true;
}
}
}
// All done! Translate the indices back into proper `G::NodeId`s.
debug_assert!(!dominators.iter().any(|&dom| dom == UNDEFINED));
Dominators {
root,
dominators: dominators
.into_iter()
.enumerate()
.map(|(idx, dom_idx)| (post_order[idx], post_order[dom_idx]))
.collect(),
}
}
fn intersect(dominators: &[usize], mut finger1: usize, mut finger2: usize) -> usize {
loop {
match finger1.cmp(&finger2) {
Ordering::Less => finger1 = dominators[finger1],
Ordering::Greater => finger2 = dominators[finger2],
Ordering::Equal => return finger1,
}
}
}
fn pre | (
post_order: &[N],
node_to_post_order_idx: &HashMap<N, usize>,
mut predecessor_sets: HashMap<N, HashSet<N>>,
) -> Vec<Vec<usize>>
where
N: Copy + Eq + Hash,
{
post_order
.iter()
.map(|node| {
predecessor_sets
.remove(node)
.map(|predecessors| {
predecessors
.into_iter()
.map(|p| *node_to_post_order_idx.get(&p).unwrap())
.collect()
})
.unwrap_or_else(Vec::new)
})
.collect()
}
fn simple_fast_post_order<G>(
graph: G,
root: G::NodeId,
) -> (Vec<G::NodeId>, HashMap<G::NodeId, HashSet<G::NodeId>>)
where
G: IntoNeighbors + Visitable,
<G as GraphBase>::NodeId: Eq + Hash,
{
let mut post_order = vec![];
let mut predecessor_sets = HashMap::new();
for node in DfsPostOrder::new(graph, root).iter(graph) {
post_order.push(node);
for successor in graph.neighbors(node) {
predecessor_sets
.entry(successor)
.or_insert_with(HashSet::new)
.insert(node);
}
}
(post_order, predecessor_sets)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_iter_dominators() {
let doms: Dominators<u32> = Dominators {
root: 0,
dominators: [(2, 1), (1, 0), (0, 0)].iter().cloned().collect(),
};
let all_doms: Vec<_> = doms.dominators(2).unwrap().collect();
assert_eq!(vec![2, 1, 0], all_doms);
assert_eq!(None::<()>, doms.dominators(99).map(|_| unreachable!()));
let strict_doms: Vec<_> = doms.strict_dominators(2).unwrap().collect();
assert_eq!(vec![1, 0], strict_doms);
assert_eq!(
None::<()>,
doms.strict_dominators(99).map(|_| unreachable!())
);
let dom_by: Vec<_> = doms.immediately_dominated_by(1).collect();
assert_eq!(vec![2], dom_by);
assert_eq!(None, doms.immediately_dominated_by(99).next());
}
}
| decessor_sets_to_idx_vecs<N> | identifier_name |
boolean.rs | // Copyright (C) 2013-2020 Blockstack PBC, a public benefit corporation
// Copyright (C) 2020 Stacks Open Internet Foundation
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
use vm::costs::cost_functions::ClarityCostFunction;
use vm::costs::{cost_functions, runtime_cost};
use vm::errors::{
check_argument_count, check_arguments_at_least, CheckErrors, InterpreterResult as Result,
};
use vm::representations::SymbolicExpression;
use vm::types::{TypeSignature, Value};
use vm::{eval, Environment, LocalContext};
fn type_force_bool(value: &Value) -> Result<bool> {
match *value {
Value::Bool(boolean) => Ok(boolean),
_ => Err(CheckErrors::TypeValueError(TypeSignature::BoolType, value.clone()).into()),
}
}
pub fn special_or(
args: &[SymbolicExpression],
env: &mut Environment,
context: &LocalContext,
) -> Result<Value> {
check_arguments_at_least(1, args)?;
runtime_cost(ClarityCostFunction::Or, env, args.len())?;
for arg in args.iter() {
let evaluated = eval(&arg, env, context)?;
let result = type_force_bool(&evaluated)?; | Ok(Value::Bool(false))
}
pub fn special_and(
args: &[SymbolicExpression],
env: &mut Environment,
context: &LocalContext,
) -> Result<Value> {
check_arguments_at_least(1, args)?;
runtime_cost(ClarityCostFunction::And, env, args.len())?;
for arg in args.iter() {
let evaluated = eval(&arg, env, context)?;
let result = type_force_bool(&evaluated)?;
if!result {
return Ok(Value::Bool(false));
}
}
Ok(Value::Bool(true))
}
pub fn native_not(input: Value) -> Result<Value> {
let value = type_force_bool(&input)?;
Ok(Value::Bool(!value))
} | if result {
return Ok(Value::Bool(true));
}
}
| random_line_split |
boolean.rs | // Copyright (C) 2013-2020 Blockstack PBC, a public benefit corporation
// Copyright (C) 2020 Stacks Open Internet Foundation
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
use vm::costs::cost_functions::ClarityCostFunction;
use vm::costs::{cost_functions, runtime_cost};
use vm::errors::{
check_argument_count, check_arguments_at_least, CheckErrors, InterpreterResult as Result,
};
use vm::representations::SymbolicExpression;
use vm::types::{TypeSignature, Value};
use vm::{eval, Environment, LocalContext};
fn type_force_bool(value: &Value) -> Result<bool> {
match *value {
Value::Bool(boolean) => Ok(boolean),
_ => Err(CheckErrors::TypeValueError(TypeSignature::BoolType, value.clone()).into()),
}
}
pub fn | (
args: &[SymbolicExpression],
env: &mut Environment,
context: &LocalContext,
) -> Result<Value> {
check_arguments_at_least(1, args)?;
runtime_cost(ClarityCostFunction::Or, env, args.len())?;
for arg in args.iter() {
let evaluated = eval(&arg, env, context)?;
let result = type_force_bool(&evaluated)?;
if result {
return Ok(Value::Bool(true));
}
}
Ok(Value::Bool(false))
}
pub fn special_and(
args: &[SymbolicExpression],
env: &mut Environment,
context: &LocalContext,
) -> Result<Value> {
check_arguments_at_least(1, args)?;
runtime_cost(ClarityCostFunction::And, env, args.len())?;
for arg in args.iter() {
let evaluated = eval(&arg, env, context)?;
let result = type_force_bool(&evaluated)?;
if!result {
return Ok(Value::Bool(false));
}
}
Ok(Value::Bool(true))
}
pub fn native_not(input: Value) -> Result<Value> {
let value = type_force_bool(&input)?;
Ok(Value::Bool(!value))
}
| special_or | identifier_name |
boolean.rs | // Copyright (C) 2013-2020 Blockstack PBC, a public benefit corporation
// Copyright (C) 2020 Stacks Open Internet Foundation
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
use vm::costs::cost_functions::ClarityCostFunction;
use vm::costs::{cost_functions, runtime_cost};
use vm::errors::{
check_argument_count, check_arguments_at_least, CheckErrors, InterpreterResult as Result,
};
use vm::representations::SymbolicExpression;
use vm::types::{TypeSignature, Value};
use vm::{eval, Environment, LocalContext};
fn type_force_bool(value: &Value) -> Result<bool> {
match *value {
Value::Bool(boolean) => Ok(boolean),
_ => Err(CheckErrors::TypeValueError(TypeSignature::BoolType, value.clone()).into()),
}
}
pub fn special_or(
args: &[SymbolicExpression],
env: &mut Environment,
context: &LocalContext,
) -> Result<Value> {
check_arguments_at_least(1, args)?;
runtime_cost(ClarityCostFunction::Or, env, args.len())?;
for arg in args.iter() {
let evaluated = eval(&arg, env, context)?;
let result = type_force_bool(&evaluated)?;
if result {
return Ok(Value::Bool(true));
}
}
Ok(Value::Bool(false))
}
pub fn special_and(
args: &[SymbolicExpression],
env: &mut Environment,
context: &LocalContext,
) -> Result<Value> {
check_arguments_at_least(1, args)?;
runtime_cost(ClarityCostFunction::And, env, args.len())?;
for arg in args.iter() {
let evaluated = eval(&arg, env, context)?;
let result = type_force_bool(&evaluated)?;
if!result {
return Ok(Value::Bool(false));
}
}
Ok(Value::Bool(true))
}
pub fn native_not(input: Value) -> Result<Value> | {
let value = type_force_bool(&input)?;
Ok(Value::Bool(!value))
} | identifier_body |
|
touch.rs | use crate::sys;
pub type Finger = sys::SDL_Finger;
pub type TouchDevice = sys::SDL_TouchID;
#[doc(alias = "SDL_GetNumTouchDevices")]
pub fn num_touch_devices() -> i32 {
unsafe { sys::SDL_GetNumTouchDevices() }
}
#[doc(alias = "SDL_GetTouchDevice")]
pub fn touch_device(index: i32) -> TouchDevice {
unsafe { sys::SDL_GetTouchDevice(index) }
}
#[doc(alias = "SDL_GetNumTouchFingers")]
pub fn num_touch_fingers(touch: TouchDevice) -> i32 {
unsafe { sys::SDL_GetNumTouchFingers(touch) }
} | pub fn touch_finger(touch: TouchDevice, index: i32) -> Option<Finger> {
let raw = unsafe { sys::SDL_GetTouchFinger(touch, index) };
if raw.is_null() {
None
} else {
unsafe { Some(*raw) }
}
} |
#[doc(alias = "SDL_GetTouchFinger")] | random_line_split |
touch.rs | use crate::sys;
pub type Finger = sys::SDL_Finger;
pub type TouchDevice = sys::SDL_TouchID;
#[doc(alias = "SDL_GetNumTouchDevices")]
pub fn num_touch_devices() -> i32 {
unsafe { sys::SDL_GetNumTouchDevices() }
}
#[doc(alias = "SDL_GetTouchDevice")]
pub fn touch_device(index: i32) -> TouchDevice {
unsafe { sys::SDL_GetTouchDevice(index) }
}
#[doc(alias = "SDL_GetNumTouchFingers")]
pub fn num_touch_fingers(touch: TouchDevice) -> i32 {
unsafe { sys::SDL_GetNumTouchFingers(touch) }
}
#[doc(alias = "SDL_GetTouchFinger")]
pub fn touch_finger(touch: TouchDevice, index: i32) -> Option<Finger> {
let raw = unsafe { sys::SDL_GetTouchFinger(touch, index) };
if raw.is_null() {
None
} else |
}
| {
unsafe { Some(*raw) }
} | conditional_block |
touch.rs | use crate::sys;
pub type Finger = sys::SDL_Finger;
pub type TouchDevice = sys::SDL_TouchID;
#[doc(alias = "SDL_GetNumTouchDevices")]
pub fn num_touch_devices() -> i32 {
unsafe { sys::SDL_GetNumTouchDevices() }
}
#[doc(alias = "SDL_GetTouchDevice")]
pub fn touch_device(index: i32) -> TouchDevice {
unsafe { sys::SDL_GetTouchDevice(index) }
}
#[doc(alias = "SDL_GetNumTouchFingers")]
pub fn num_touch_fingers(touch: TouchDevice) -> i32 {
unsafe { sys::SDL_GetNumTouchFingers(touch) }
}
#[doc(alias = "SDL_GetTouchFinger")]
pub fn | (touch: TouchDevice, index: i32) -> Option<Finger> {
let raw = unsafe { sys::SDL_GetTouchFinger(touch, index) };
if raw.is_null() {
None
} else {
unsafe { Some(*raw) }
}
}
| touch_finger | identifier_name |
touch.rs | use crate::sys;
pub type Finger = sys::SDL_Finger;
pub type TouchDevice = sys::SDL_TouchID;
#[doc(alias = "SDL_GetNumTouchDevices")]
pub fn num_touch_devices() -> i32 |
#[doc(alias = "SDL_GetTouchDevice")]
pub fn touch_device(index: i32) -> TouchDevice {
unsafe { sys::SDL_GetTouchDevice(index) }
}
#[doc(alias = "SDL_GetNumTouchFingers")]
pub fn num_touch_fingers(touch: TouchDevice) -> i32 {
unsafe { sys::SDL_GetNumTouchFingers(touch) }
}
#[doc(alias = "SDL_GetTouchFinger")]
pub fn touch_finger(touch: TouchDevice, index: i32) -> Option<Finger> {
let raw = unsafe { sys::SDL_GetTouchFinger(touch, index) };
if raw.is_null() {
None
} else {
unsafe { Some(*raw) }
}
}
| {
unsafe { sys::SDL_GetNumTouchDevices() }
} | identifier_body |
mod.rs | af_get_features_score(out: *mut af_array, feat: af_features) -> c_int;
fn af_get_features_orientation(out: *mut af_array, feat: af_features) -> c_int;
fn af_get_features_size(out: *mut af_array, feat: af_features) -> c_int; | thr: c_float,
arc_len: c_uint,
non_max: bool,
feature_ratio: c_float,
edge: c_uint,
) -> c_int;
fn af_harris(
out: *mut af_features,
input: af_array,
m: c_uint,
r: c_float,
s: c_float,
bs: c_uint,
k: c_float,
) -> c_int;
fn af_orb(
out: *mut af_features,
desc: *mut af_array,
arr: af_array,
fast_thr: c_float,
max_feat: c_uint,
scl_fctr: c_float,
levels: c_uint,
blur_img: bool,
) -> c_int;
fn af_hamming_matcher(
idx: *mut af_array,
dist: *mut af_array,
query: af_array,
train: af_array,
dist_dim: dim_t,
n_dist: c_uint,
) -> c_int;
fn af_nearest_neighbour(
idx: *mut af_array,
dist: *mut af_array,
q: af_array,
t: af_array,
dist_dim: dim_t,
n_dist: c_uint,
dist_type: c_int,
) -> c_int;
fn af_match_template(
out: *mut af_array,
search_img: af_array,
template_img: af_array,
mtype: c_uint,
) -> c_int;
fn af_susan(
feat: *mut af_features,
i: af_array,
r: c_uint,
d: c_float,
g: c_float,
f: c_float,
e: c_uint,
) -> c_int;
fn af_dog(out: *mut af_array, i: af_array, r1: c_int, r2: c_int) -> c_int;
fn af_homography(
H: *mut af_array,
inliers: *mut c_int,
x_src: af_array,
y_src: af_array,
x_dst: af_array,
y_dst: af_array,
htype: c_uint,
inlier_thr: c_float,
iterations: c_uint,
otype: c_uint,
) -> c_int;
}
/// A set of Array objects (usually, used in Computer vision context)
///
/// `Features` struct is used by computer vision functions
/// to return the outcome of their operation. Typically, such output
/// has the following Arrays:
///
/// - X positions of the features
/// - Y positions of the features
/// - Scores of the features
/// - Orientations of the features
/// - Sizes of the features
///
/// ## Sharing Across Threads
///
/// While sharing this object with other threads, there is no need to wrap
/// this in an Arc object unless only one such object is required to exist.
/// The reason being that ArrayFire's internal details that are pointed to
/// by the features handle are appropriately reference counted in thread safe
/// manner. However, if these features are to be edited, then please do wrap
/// the object using a Mutex or Read-Write lock.
pub struct Features {
feat: af_features,
}
unsafe impl Send for Features {}
unsafe impl Sync for Features {}
macro_rules! feat_func_def {
($doc_str: expr, $fn_name: ident, $ffi_name: ident) => {
#[doc=$doc_str]
pub fn $fn_name(&self) -> Array<f32> {
unsafe {
let mut temp: af_array = std::ptr::null_mut();
let err_val = $ffi_name(&mut temp as *mut af_array, self.feat);
HANDLE_ERROR(AfError::from(err_val));
let temp_array: Array<f32> = temp.into();
let retained = temp_array.clone();
mem::forget(temp_array);
retained
}
}
};
}
impl Features {
/// Create and return an object of type Features
///
/// This object is basically a bunch of Arrays.
pub fn new(n: u64) -> Self {
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let err_val = af_create_features(&mut temp as *mut af_features, n as dim_t);
HANDLE_ERROR(AfError::from(err_val));
Self { feat: temp }
}
}
/// Get total number of features found
pub fn num_features(&self) -> i64 {
let mut temp: i64 = 0;
unsafe {
let err_val = af_get_features_num(
&mut temp as *mut dim_t,
self.feat as *const dim_t as af_features,
);
HANDLE_ERROR(AfError::from(err_val));
}
temp
}
feat_func_def!("Get x coordinates Array", xpos, af_get_features_xpos);
feat_func_def!("Get y coordinates Array", ypos, af_get_features_ypos);
feat_func_def!("Get score Array", score, af_get_features_score);
feat_func_def!(
"Get orientation Array",
orientation,
af_get_features_orientation
);
feat_func_def!("Get features size Array", size, af_get_features_size);
}
impl Clone for Features {
fn clone(&self) -> Self {
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let ret_val = af_retain_features(&mut temp as *mut af_features, self.feat);
HANDLE_ERROR(AfError::from(ret_val));
Self { feat: temp }
}
}
}
impl Drop for Features {
fn drop(&mut self) {
unsafe {
let ret_val = af_release_features(self.feat);
HANDLE_ERROR(AfError::from(ret_val));
}
}
}
/// Fast feature detector
///
/// A circle of radius 3 pixels, translating into a total of 16 pixels, is checked for sequential
/// segments of pixels much brighter or much darker than the central one. For a pixel p to be
/// considered a feature, there must exist a sequential segment of arc_length pixels in the circle
/// around it such that all are greather than (p + thr) or smaller than (p - thr). After all
/// features in the image are detected, if nonmax is true, the non-maximal suppression is applied,
/// checking all detected features and the features detected in its 8-neighborhood and discard it
/// if its score is non maximal.
///
/// # Parameters
///
/// - `input` - the input image Array
/// - `thr` - FAST threshold for which pixel of the circle around the center pixel is considered to
/// be greater or smaller
/// - `arc_len` - length of arc (or sequential segment) to be tested, must be within range [9-16]
/// - `non_max` - performs non-maximal supression if true
/// - `feat_ratio` - maximum ratio of features to detect, the maximum number of features is
/// calculated by `feature_ratio * num of elements`. The maximum number of features is not based on
/// the score, instead, features detected after the limit is reached are discarded.
/// - `edge` - is the length of the edges in the image to be discarded by FAST(minimum is 3, as the
/// radius of the circle)
///
/// # Return Values
///
/// This function returns an object of struct [Features](./struct.Features.html) containing Arrays
/// for x and y coordinates and score, while array oreientation is set to 0 as FAST does not
/// compute orientation. Size is set to 1 as FAST does not compute multiple scales.
pub fn fast<T>(
input: &Array<T>,
thr: f32,
arc_len: u32,
non_max: bool,
feat_ratio: f32,
edge: u32,
) -> Features
where
T: HasAfEnum + ImageFilterType,
{
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let err_val = af_fast(
&mut temp as *mut af_features,
input.get(),
thr,
arc_len,
non_max,
feat_ratio,
edge,
);
HANDLE_ERROR(AfError::from(err_val));
Features { feat: temp }
}
}
/// Harris corner detector.
///
/// Compute corners using the Harris corner detector approach. For each pixel, a small window is
/// used to calculate the determinant and trace of such a window, from which a response is
/// calculated. Pixels are considered corners if they are local maximas and have a high positive
/// response.
///
/// # Parameters
///
/// - `input` is the array containing a grayscale image (color images are not supported)
/// - `max_corners` is the maximum number of corners to keep, only retains those with highest Harris responses
/// - `min_response` is the minimum response in order for a corner to be retained, only used if max_corners = 0
/// - `sigma` is the standard deviation of a circular window (its dimensions will be calculated according to the standard deviation), the covariation matrix will be calculated to a circular neighborhood of this standard deviation (only used when block_size == 0, must be >= 0.5f and <= 5.0f)
/// - `block_size` is square window size, the covariation matrix will be calculated to a square neighborhood of this size (must be >= 3 and <= 31)
/// - `k_thr` is the Harris constant, usually set empirically to 0.04f (must be >= 0.01f)
///
/// # Return Values
///
/// This function returns an object of struct [Features](./struct.Features.html) containing Arrays
/// for x and y coordinates and score, while array oreientation & size are set to 0 & 1,
/// respectively, since harris doesn't compute that information
pub fn harris<T>(
input: &Array<T>,
max_corners: u32,
min_response: f32,
sigma: f32,
block_size: u32,
k_thr: f32,
) -> Features
where
T: HasAfEnum + RealFloating,
{
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let err_val = af_harris(
&mut temp as *mut af_features,
input.get(),
max_corners,
min_response,
sigma,
block_size,
k_thr,
);
HANDLE_ERROR(AfError::from(err_val));
Features { feat: temp }
}
}
/// ORB feature descriptor
///
/// Extract ORB descriptors from FAST features that hold higher Harris responses. FAST does not
/// compute orientation, thus, orientation of features is calculated using the intensity centroid.
/// As FAST is also not multi-scale enabled, a multi-scale pyramid is calculated by downsampling
/// the input image multiple times followed by FAST feature detection on each scale.
///
/// # Parameters
///
/// - `input` - the input image Array
/// - `fast_thr` - FAST threshold for which a pixel of the circle around the central pixel is
/// considered to be brighter or darker
/// - `max_feat` - maximum number of features to hold
/// - `scl_fctr` - factor to downsample the input image, meaning that each level with hold prior
/// level dimensions divided by `scl_fctr`
/// - `levels` - number of levels to be computed for the image pyramid
/// - `blur_img` - blur image with a Gaussian filter with sigma=2 before computing descriptors to
/// increase robustness against noise if true
///
/// # Return Values
///
/// This function returns a tuple of [`Features`](./struct.Features.html) and [`Array`](./struct.Array.html). The features objects composed of Arrays for x and y coordinates, score, orientation and size of selected features. The Array object is a two dimensional Array of size Nx8 where N is number of selected features.
pub fn orb<T>(
input: &Array<T>,
fast_thr: f32,
max_feat: u32,
scl_fctr: f32,
levels: u32,
blur_img: bool,
) -> (Features, Array<T>)
where
T: HasAfEnum + RealFloating,
{
unsafe {
let mut f: af_features = std::ptr::null_mut();
let mut d: af_array = std::ptr::null_mut();
let err_val = af_orb(
&mut f as *mut af_features,
&mut d as *mut af_array,
input.get(),
fast_thr,
max_feat,
scl_fctr,
levels,
blur_img,
);
HANDLE_ERROR(AfError::from(err_val));
(Features { feat: f }, d.into())
}
}
/// Hamming feature matcher
///
/// Calculates Hamming distances between two 2-dimensional arrays containing features, one of the
/// arrays containing the training data and the other the query data. One of the dimensions of the
/// both arrays must be equal among them, identifying the length of each feature. The other
/// dimension indicates the total number of features in each of the training and query arrays. Two
/// 1-dimensional arrays are created as results, one containg the smallest N distances of the query
/// array and another containing the indices of these distances in the training array. The
/// resulting 1-dimensional arrays have length equal to the number of features contained in the
/// query array.
///
/// # Parameters
///
/// - `query` - Array containing the data to be queried
/// - `train` - Array containing the data to be used as training data
/// - `dist_dims` - indicates the dimension to analyze for distance (the dimension indicated here
/// must be of equal length for both query and train arrays)
/// - `n_dist` - is the number of smallest distances to return (currently, only values <= 256 are supported)
///
///
/// # Return Values
///
/// This function returns a tuple of [Array](./struct.Array.html)'s.
///
/// First Array is an array of MxN size, where M is equal to the number of query features and N is
/// equal to n_dist. The value at position IxJ indicates the index of the Jth smallest distance to
/// the Ith query value in the train data array. the index of the Ith smallest distance of the Mth
/// query.
///
/// Second Array is an array of MxN size, where M is equal to the number of query features and N is
/// equal to n_dist. The value at position IxJ indicates the Hamming distance of the Jth smallest
/// distance to the Ith query value in the train data array.
pub fn hamming_matcher<T>(
query: &Array<T>,
train: &Array<T>,
dist_dims: i64,
n_dist: u32,
) -> (Array<u32>, Array<T::AggregateOutType>)
where
T: HasAfEnum + ImageFilterType,
T::AggregateOutType: HasAfEnum,
{
unsafe {
let mut idx: af_array = std::ptr::null_mut();
let mut dist: af_array = std::ptr::null_mut();
let err_val = af_hamming_matcher(
&mut idx as *mut af_array,
&mut dist as *mut af_array,
query.get(),
train.get(),
dist_dims,
n_dist,
);
HANDLE_ERROR(AfError::from(err_val));
(idx.into(), dist.into())
}
}
/// Nearest Neighbour.
///
/// Calculates nearest distances between two 2-dimensional arrays containing features based on the
/// type of distance computation chosen. Currently, AF_SAD (sum of absolute differences), AF_SSD
/// (sum of squared differences) and AF_SHD (hamming distance) are supported. One of the arrays
/// containing the training data and the other the query data. One of the dimensions of the both
/// arrays must be equal among them, identifying the length of each feature. The other dimension
/// indicates the total number of features in each of the training and query arrays. Two
/// 1-dimensional arrays are created as results, one containg the smallest N distances of the query
/// array and another containing the indices of these distances in the training array. The resulting
/// 1-dimensional arrays have length equal to the number of features contained in the query array.
///
/// # Parameters
///
/// - `query` is the array containing the data to be queried
/// - `train` is the array containing the data used as training data
/// - `dist_dim` indicates the dimension to analyze for distance (the dimension indicated here must be of equal length for both query and train arrays)
/// - `n_dist` is the number of smallest distances to return (currently, only values <= 256 are supported)
/// - `dist_type` is the distance computation type. Currently [`MatchType::SAD`](./enum.MatchType.html), [`MatchType::SSD`](./enum.MatchType.html), and [`MatchType::SHD`](./enum.MatchType.html) are supported.
///
/// # Return Values
///
/// A tuple of Arrays.
///
/// The first Array is is an array of MxN size, where M is equal to the number of query features
/// and N is equal to `n_dist`. The value at position IxJ indicates the index of the Jth smallest
/// distance to the Ith query value in the train data array. the index of the Ith smallest distance
/// of the Mth query.
///
/// The second Array is is an array of MxN size, where M is equal to the number of query features
/// and N is equal to `n_dist`. The value at position IxJ indicates the distance of the Jth smallest
/// distance to the Ith query value in the train data array based on the `dist_type` chosen.
pub fn nearest_neighbour<T>(
query: &Array<T>,
train: &Array<T>,
dist_dim: i64,
n_dist: u32,
dist_type: MatchType,
) -> (Array<u32>, Array<T::AggregateOutType>)
where
T: HasAfEnum + ImageFilterType,
T::AggregateOutType: HasAfEnum,
{
unsafe {
let mut idx: af_array = std::ptr::null_mut();
let mut dist: af_array = std::ptr::null_mut();
let err_val = af_nearest_neighbour(
&mut idx as *mut af_array,
&mut dist as *mut af_array,
query.get(),
train.get(),
dist_dim,
n_dist,
dist_type as c_int,
);
HANDLE_ERROR(AfError::from(err_val));
(idx.into(), dist.into())
}
}
/// Image matching
///
/// Template matching is an image processing technique to find small patches of an image which
/// match a given template image. A more in depth discussion on the topic can be found
/// [here](https://en.wikipedia.org/wiki/Template_matching).
///
/// # Parameters
///
/// - `search_img` is an array with image data
/// - `template_img` is the template we are looking for in the image
/// - `mtype` is metric that should be used to calculate the disparity between window in the image and the template image. It can be one of the values defined by the enum [MatchType](./enum.MatchType.html).
/// # Return Values
///
/// This function returns an Array with disparity values for the window starting at corresponding pixel position.
pub fn match_template<T>(
search_img: &Array<T>,
template_img: &Array<T>,
mtype: MatchType,
) -> Array<T::AbsOutType>
where
T: HasAfEnum + ImageFilterType,
T::AbsOutType: HasAfEnum,
{
unsafe {
let mut temp: af_array = std::ptr::null_mut();
let err_val = af_match_template(
&mut temp as *mut af_array,
search_img.get(),
template_img.get(),
mtype as c_uint,
);
HANDLE_ERROR(AfError::from(err_val));
temp.into()
}
}
/// SUSAN corner detector.
///
/// SUSAN is an acronym standing for Smallest Univalue Segment Assimilating Nucleus. This method
/// places a circular disc over the pixel to be tested (a.k.a nucleus) to compute the corner
/// measure of that corresponding pixel. The region covered by the circular disc is M, and a pixel
/// in this region is represented by m⃗ ∈M where m⃗ 0 is the nucleus. Every pixel in the region is
/// compared to the nucleus using the following comparison function:
///
/// c(m⃗ )=e^−((I(m⃗)−I(m⃗_0))/t)^6
///
/// where t is radius of the region, I is the brightness of the pixel.
///
/// Response of SUSAN operator is given by the following equation:
///
/// R(M) = g−n(M) if n(M) < g
///
/// R(M) = 0 otherwise,
///
/// where n(M)=∑c(m⃗) m⃗∈M, g is named the geometric threshold and n is the number of pixels in the
/// mask which are within t of the nucleus.
///
/// Importance of the parameters, t and g is explained below:
///
/// - t determines how similar points have to be to the nucleusbefore they are considered to be a
/// part of the univalue segment
/// - g determines the minimum size of the univalue segment. For a large enough g, SUSAN operator
/// becomes an edge dectector.
///
/// # Parameters
///
/// - `input` is input grayscale/intensity image
/// - `radius` is the nucleus radius for each pixel neighborhood
/// - `diff_thr` is intensity difference threshold a.k.a **t** from equations in description
/// - `geom_thr` is the geometric threshold
/// - `feature_ratio` is maximum number of features that will be returned by the function
/// - `edge` indicates how many pixels width area should be skipped for corner detection
///
/// # Return Values
/// An object of type [Features](./struct.Features.html) composed of arrays for x and y coordinates, score, orientation and size of selected features.
pub fn susan<T>(
input: &Array<T>,
radius: u32,
diff_thr: f32,
geom_thr: f32,
feature_ratio: f32,
edge: u32,
) -> Features
where
T: HasAfEnum + ImageFilterType,
{
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let err_val = af_susan(
&mut temp as *mut af_features,
input.get(),
radius,
diff_thr,
geom_thr,
feature_ratio,
edge,
);
HANDLE_ERROR(AfError::from(err_val));
Features { feat: temp }
}
}
/// Difference of Gaussians.
///
/// Given an image, this function computes two different versions of smoothed input image using the
/// difference smoothing parameters and subtracts one from the other and returns the result.
///
/// # Parameters
///
/// - `input` is the input image
/// - `radius1` is the radius of the first gaussian kernel
/// - `radius2` is the radius of the second gaussian kernel
///
/// # Return Values
///
/// Difference of smoothed inputs - An Array.
pub fn dog<T>(input: &Array<T>, radius1: i32, radius2: i32) -> Array<T::AbsOutType>
where
T: HasAfEnum + ImageFilterType,
T::AbsOutType: HasAfEnum,
{
unsafe {
let mut temp: af_array = std::ptr::null_mut();
let err_val = af_dog(&mut temp as *mut af_array, input.get(), radius1, radius2);
HANDLE_ERROR(AfError::from(err_val));
temp.into()
}
}
/// Homography estimation
///
/// Homography estimation find a perspective transform between two sets of 2D points.
/// Currently, two methods are supported for the estimation, RANSAC (RANdom SAmple Consensus)
/// and LMedS (Least Median of Squares). Both methods work by randomly selecting a subset
/// of 4 points of the set of source points, computing the eigenvectors of that set and
/// finding the perspective transform. The process is repeated several times, a maximum of
/// times given by the value passed to the iterations arguments for RANSAC (for the CPU
/// backend, usually less than that, depending on the quality of the dataset, but for CUDA
/// and OpenCL backends the transformation will be computed exactly the amount of times
/// passed via the iterations parameter), the returned value is the one that matches the
/// best number of inliers, which are all of the points that fall within a maximum L2
/// distance from the value passed to the inlier_thr argument.
///
/// # Parameters
///
/// - `x_src` is the x coordinates of the source points.
/// - `y_src` is the y coordinates of the source points.
/// - `x_dst` is the x coordinates of the destination points.
/// - `y_dst` is the y coordinates of the destination points.
/// - `htype` can be AF_HOMOGRAPHY_RANSAC, for which a RANdom SAmple Consensus will be used to evaluate the homography quality (e.g., number of inliers), or AF_HOMOGRAPHY_LMEDS, which will use Least Median of Squares method to evaluate homography quality
/// - `inlier_thr` - if htype is AF_HOMOGRAPHY_RANSAC | fn af_release_features(feat: af_features) -> c_int;
fn af_fast(
out: *mut af_features,
input: af_array, | random_line_split |
mod.rs | _get_features_score(out: *mut af_array, feat: af_features) -> c_int;
fn af_get_features_orientation(out: *mut af_array, feat: af_features) -> c_int;
fn af_get_features_size(out: *mut af_array, feat: af_features) -> c_int;
fn af_release_features(feat: af_features) -> c_int;
fn af_fast(
out: *mut af_features,
input: af_array,
thr: c_float,
arc_len: c_uint,
non_max: bool,
feature_ratio: c_float,
edge: c_uint,
) -> c_int;
fn af_harris(
out: *mut af_features,
input: af_array,
m: c_uint,
r: c_float,
s: c_float,
bs: c_uint,
k: c_float,
) -> c_int;
fn af_orb(
out: *mut af_features,
desc: *mut af_array,
arr: af_array,
fast_thr: c_float,
max_feat: c_uint,
scl_fctr: c_float,
levels: c_uint,
blur_img: bool,
) -> c_int;
fn af_hamming_matcher(
idx: *mut af_array,
dist: *mut af_array,
query: af_array,
train: af_array,
dist_dim: dim_t,
n_dist: c_uint,
) -> c_int;
fn af_nearest_neighbour(
idx: *mut af_array,
dist: *mut af_array,
q: af_array,
t: af_array,
dist_dim: dim_t,
n_dist: c_uint,
dist_type: c_int,
) -> c_int;
fn af_match_template(
out: *mut af_array,
search_img: af_array,
template_img: af_array,
mtype: c_uint,
) -> c_int;
fn af_susan(
feat: *mut af_features,
i: af_array,
r: c_uint,
d: c_float,
g: c_float,
f: c_float,
e: c_uint,
) -> c_int;
fn af_dog(out: *mut af_array, i: af_array, r1: c_int, r2: c_int) -> c_int;
fn af_homography(
H: *mut af_array,
inliers: *mut c_int,
x_src: af_array,
y_src: af_array,
x_dst: af_array,
y_dst: af_array,
htype: c_uint,
inlier_thr: c_float,
iterations: c_uint,
otype: c_uint,
) -> c_int;
}
/// A set of Array objects (usually, used in Computer vision context)
///
/// `Features` struct is used by computer vision functions
/// to return the outcome of their operation. Typically, such output
/// has the following Arrays:
///
/// - X positions of the features
/// - Y positions of the features
/// - Scores of the features
/// - Orientations of the features
/// - Sizes of the features
///
/// ## Sharing Across Threads
///
/// While sharing this object with other threads, there is no need to wrap
/// this in an Arc object unless only one such object is required to exist.
/// The reason being that ArrayFire's internal details that are pointed to
/// by the features handle are appropriately reference counted in thread safe
/// manner. However, if these features are to be edited, then please do wrap
/// the object using a Mutex or Read-Write lock.
pub struct Features {
feat: af_features,
}
unsafe impl Send for Features {}
unsafe impl Sync for Features {}
macro_rules! feat_func_def {
($doc_str: expr, $fn_name: ident, $ffi_name: ident) => {
#[doc=$doc_str]
pub fn $fn_name(&self) -> Array<f32> {
unsafe {
let mut temp: af_array = std::ptr::null_mut();
let err_val = $ffi_name(&mut temp as *mut af_array, self.feat);
HANDLE_ERROR(AfError::from(err_val));
let temp_array: Array<f32> = temp.into();
let retained = temp_array.clone();
mem::forget(temp_array);
retained
}
}
};
}
impl Features {
/// Create and return an object of type Features
///
/// This object is basically a bunch of Arrays.
pub fn new(n: u64) -> Self {
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let err_val = af_create_features(&mut temp as *mut af_features, n as dim_t);
HANDLE_ERROR(AfError::from(err_val));
Self { feat: temp }
}
}
/// Get total number of features found
pub fn num_features(&self) -> i64 {
let mut temp: i64 = 0;
unsafe {
let err_val = af_get_features_num(
&mut temp as *mut dim_t,
self.feat as *const dim_t as af_features,
);
HANDLE_ERROR(AfError::from(err_val));
}
temp
}
feat_func_def!("Get x coordinates Array", xpos, af_get_features_xpos);
feat_func_def!("Get y coordinates Array", ypos, af_get_features_ypos);
feat_func_def!("Get score Array", score, af_get_features_score);
feat_func_def!(
"Get orientation Array",
orientation,
af_get_features_orientation
);
feat_func_def!("Get features size Array", size, af_get_features_size);
}
impl Clone for Features {
fn clone(&self) -> Self {
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let ret_val = af_retain_features(&mut temp as *mut af_features, self.feat);
HANDLE_ERROR(AfError::from(ret_val));
Self { feat: temp }
}
}
}
impl Drop for Features {
fn drop(&mut self) {
unsafe {
let ret_val = af_release_features(self.feat);
HANDLE_ERROR(AfError::from(ret_val));
}
}
}
/// Fast feature detector
///
/// A circle of radius 3 pixels, translating into a total of 16 pixels, is checked for sequential
/// segments of pixels much brighter or much darker than the central one. For a pixel p to be
/// considered a feature, there must exist a sequential segment of arc_length pixels in the circle
/// around it such that all are greather than (p + thr) or smaller than (p - thr). After all
/// features in the image are detected, if nonmax is true, the non-maximal suppression is applied,
/// checking all detected features and the features detected in its 8-neighborhood and discard it
/// if its score is non maximal.
///
/// # Parameters
///
/// - `input` - the input image Array
/// - `thr` - FAST threshold for which pixel of the circle around the center pixel is considered to
/// be greater or smaller
/// - `arc_len` - length of arc (or sequential segment) to be tested, must be within range [9-16]
/// - `non_max` - performs non-maximal supression if true
/// - `feat_ratio` - maximum ratio of features to detect, the maximum number of features is
/// calculated by `feature_ratio * num of elements`. The maximum number of features is not based on
/// the score, instead, features detected after the limit is reached are discarded.
/// - `edge` - is the length of the edges in the image to be discarded by FAST(minimum is 3, as the
/// radius of the circle)
///
/// # Return Values
///
/// This function returns an object of struct [Features](./struct.Features.html) containing Arrays
/// for x and y coordinates and score, while array oreientation is set to 0 as FAST does not
/// compute orientation. Size is set to 1 as FAST does not compute multiple scales.
pub fn fast<T>(
input: &Array<T>,
thr: f32,
arc_len: u32,
non_max: bool,
feat_ratio: f32,
edge: u32,
) -> Features
where
T: HasAfEnum + ImageFilterType,
{
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let err_val = af_fast(
&mut temp as *mut af_features,
input.get(),
thr,
arc_len,
non_max,
feat_ratio,
edge,
);
HANDLE_ERROR(AfError::from(err_val));
Features { feat: temp }
}
}
/// Harris corner detector.
///
/// Compute corners using the Harris corner detector approach. For each pixel, a small window is
/// used to calculate the determinant and trace of such a window, from which a response is
/// calculated. Pixels are considered corners if they are local maximas and have a high positive
/// response.
///
/// # Parameters
///
/// - `input` is the array containing a grayscale image (color images are not supported)
/// - `max_corners` is the maximum number of corners to keep, only retains those with highest Harris responses
/// - `min_response` is the minimum response in order for a corner to be retained, only used if max_corners = 0
/// - `sigma` is the standard deviation of a circular window (its dimensions will be calculated according to the standard deviation), the covariation matrix will be calculated to a circular neighborhood of this standard deviation (only used when block_size == 0, must be >= 0.5f and <= 5.0f)
/// - `block_size` is square window size, the covariation matrix will be calculated to a square neighborhood of this size (must be >= 3 and <= 31)
/// - `k_thr` is the Harris constant, usually set empirically to 0.04f (must be >= 0.01f)
///
/// # Return Values
///
/// This function returns an object of struct [Features](./struct.Features.html) containing Arrays
/// for x and y coordinates and score, while array oreientation & size are set to 0 & 1,
/// respectively, since harris doesn't compute that information
pub fn harris<T>(
input: &Array<T>,
max_corners: u32,
min_response: f32,
sigma: f32,
block_size: u32,
k_thr: f32,
) -> Features
where
T: HasAfEnum + RealFloating,
{
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let err_val = af_harris(
&mut temp as *mut af_features,
input.get(),
max_corners,
min_response,
sigma,
block_size,
k_thr,
);
HANDLE_ERROR(AfError::from(err_val));
Features { feat: temp }
}
}
/// ORB feature descriptor
///
/// Extract ORB descriptors from FAST features that hold higher Harris responses. FAST does not
/// compute orientation, thus, orientation of features is calculated using the intensity centroid.
/// As FAST is also not multi-scale enabled, a multi-scale pyramid is calculated by downsampling
/// the input image multiple times followed by FAST feature detection on each scale.
///
/// # Parameters
///
/// - `input` - the input image Array
/// - `fast_thr` - FAST threshold for which a pixel of the circle around the central pixel is
/// considered to be brighter or darker
/// - `max_feat` - maximum number of features to hold
/// - `scl_fctr` - factor to downsample the input image, meaning that each level with hold prior
/// level dimensions divided by `scl_fctr`
/// - `levels` - number of levels to be computed for the image pyramid
/// - `blur_img` - blur image with a Gaussian filter with sigma=2 before computing descriptors to
/// increase robustness against noise if true
///
/// # Return Values
///
/// This function returns a tuple of [`Features`](./struct.Features.html) and [`Array`](./struct.Array.html). The features objects composed of Arrays for x and y coordinates, score, orientation and size of selected features. The Array object is a two dimensional Array of size Nx8 where N is number of selected features.
pub fn orb<T>(
input: &Array<T>,
fast_thr: f32,
max_feat: u32,
scl_fctr: f32,
levels: u32,
blur_img: bool,
) -> (Features, Array<T>)
where
T: HasAfEnum + RealFloating,
{
unsafe {
let mut f: af_features = std::ptr::null_mut();
let mut d: af_array = std::ptr::null_mut();
let err_val = af_orb(
&mut f as *mut af_features,
&mut d as *mut af_array,
input.get(),
fast_thr,
max_feat,
scl_fctr,
levels,
blur_img,
);
HANDLE_ERROR(AfError::from(err_val));
(Features { feat: f }, d.into())
}
}
/// Hamming feature matcher
///
/// Calculates Hamming distances between two 2-dimensional arrays containing features, one of the
/// arrays containing the training data and the other the query data. One of the dimensions of the
/// both arrays must be equal among them, identifying the length of each feature. The other
/// dimension indicates the total number of features in each of the training and query arrays. Two
/// 1-dimensional arrays are created as results, one containg the smallest N distances of the query
/// array and another containing the indices of these distances in the training array. The
/// resulting 1-dimensional arrays have length equal to the number of features contained in the
/// query array.
///
/// # Parameters
///
/// - `query` - Array containing the data to be queried
/// - `train` - Array containing the data to be used as training data
/// - `dist_dims` - indicates the dimension to analyze for distance (the dimension indicated here
/// must be of equal length for both query and train arrays)
/// - `n_dist` - is the number of smallest distances to return (currently, only values <= 256 are supported)
///
///
/// # Return Values
///
/// This function returns a tuple of [Array](./struct.Array.html)'s.
///
/// First Array is an array of MxN size, where M is equal to the number of query features and N is
/// equal to n_dist. The value at position IxJ indicates the index of the Jth smallest distance to
/// the Ith query value in the train data array. the index of the Ith smallest distance of the Mth
/// query.
///
/// Second Array is an array of MxN size, where M is equal to the number of query features and N is
/// equal to n_dist. The value at position IxJ indicates the Hamming distance of the Jth smallest
/// distance to the Ith query value in the train data array.
pub fn hamming_matcher<T>(
query: &Array<T>,
train: &Array<T>,
dist_dims: i64,
n_dist: u32,
) -> (Array<u32>, Array<T::AggregateOutType>)
where
T: HasAfEnum + ImageFilterType,
T::AggregateOutType: HasAfEnum,
{
unsafe {
let mut idx: af_array = std::ptr::null_mut();
let mut dist: af_array = std::ptr::null_mut();
let err_val = af_hamming_matcher(
&mut idx as *mut af_array,
&mut dist as *mut af_array,
query.get(),
train.get(),
dist_dims,
n_dist,
);
HANDLE_ERROR(AfError::from(err_val));
(idx.into(), dist.into())
}
}
/// Nearest Neighbour.
///
/// Calculates nearest distances between two 2-dimensional arrays containing features based on the
/// type of distance computation chosen. Currently, AF_SAD (sum of absolute differences), AF_SSD
/// (sum of squared differences) and AF_SHD (hamming distance) are supported. One of the arrays
/// containing the training data and the other the query data. One of the dimensions of the both
/// arrays must be equal among them, identifying the length of each feature. The other dimension
/// indicates the total number of features in each of the training and query arrays. Two
/// 1-dimensional arrays are created as results, one containg the smallest N distances of the query
/// array and another containing the indices of these distances in the training array. The resulting
/// 1-dimensional arrays have length equal to the number of features contained in the query array.
///
/// # Parameters
///
/// - `query` is the array containing the data to be queried
/// - `train` is the array containing the data used as training data
/// - `dist_dim` indicates the dimension to analyze for distance (the dimension indicated here must be of equal length for both query and train arrays)
/// - `n_dist` is the number of smallest distances to return (currently, only values <= 256 are supported)
/// - `dist_type` is the distance computation type. Currently [`MatchType::SAD`](./enum.MatchType.html), [`MatchType::SSD`](./enum.MatchType.html), and [`MatchType::SHD`](./enum.MatchType.html) are supported.
///
/// # Return Values
///
/// A tuple of Arrays.
///
/// The first Array is is an array of MxN size, where M is equal to the number of query features
/// and N is equal to `n_dist`. The value at position IxJ indicates the index of the Jth smallest
/// distance to the Ith query value in the train data array. the index of the Ith smallest distance
/// of the Mth query.
///
/// The second Array is is an array of MxN size, where M is equal to the number of query features
/// and N is equal to `n_dist`. The value at position IxJ indicates the distance of the Jth smallest
/// distance to the Ith query value in the train data array based on the `dist_type` chosen.
pub fn nearest_neighbour<T>(
query: &Array<T>,
train: &Array<T>,
dist_dim: i64,
n_dist: u32,
dist_type: MatchType,
) -> (Array<u32>, Array<T::AggregateOutType>)
where
T: HasAfEnum + ImageFilterType,
T::AggregateOutType: HasAfEnum,
{
unsafe {
let mut idx: af_array = std::ptr::null_mut();
let mut dist: af_array = std::ptr::null_mut();
let err_val = af_nearest_neighbour(
&mut idx as *mut af_array,
&mut dist as *mut af_array,
query.get(),
train.get(),
dist_dim,
n_dist,
dist_type as c_int,
);
HANDLE_ERROR(AfError::from(err_val));
(idx.into(), dist.into())
}
}
/// Image matching
///
/// Template matching is an image processing technique to find small patches of an image which
/// match a given template image. A more in depth discussion on the topic can be found
/// [here](https://en.wikipedia.org/wiki/Template_matching).
///
/// # Parameters
///
/// - `search_img` is an array with image data
/// - `template_img` is the template we are looking for in the image
/// - `mtype` is metric that should be used to calculate the disparity between window in the image and the template image. It can be one of the values defined by the enum [MatchType](./enum.MatchType.html).
/// # Return Values
///
/// This function returns an Array with disparity values for the window starting at corresponding pixel position.
pub fn match_template<T>(
search_img: &Array<T>,
template_img: &Array<T>,
mtype: MatchType,
) -> Array<T::AbsOutType>
where
T: HasAfEnum + ImageFilterType,
T::AbsOutType: HasAfEnum,
|
/// SUSAN corner detector.
///
/// SUSAN is an acronym standing for Smallest Univalue Segment Assimilating Nucleus. This method
/// places a circular disc over the pixel to be tested (a.k.a nucleus) to compute the corner
/// measure of that corresponding pixel. The region covered by the circular disc is M, and a pixel
/// in this region is represented by m⃗ ∈M where m⃗ 0 is the nucleus. Every pixel in the region is
/// compared to the nucleus using the following comparison function:
///
/// c(m⃗ )=e^−((I(m⃗)−I(m⃗_0))/t)^6
///
/// where t is radius of the region, I is the brightness of the pixel.
///
/// Response of SUSAN operator is given by the following equation:
///
/// R(M) = g−n(M) if n(M) < g
///
/// R(M) = 0 otherwise,
///
/// where n(M)=∑c(m⃗) m⃗∈M, g is named the geometric threshold and n is the number of pixels in the
/// mask which are within t of the nucleus.
///
/// Importance of the parameters, t and g is explained below:
///
/// - t determines how similar points have to be to the nucleusbefore they are considered to be a
/// part of the univalue segment
/// - g determines the minimum size of the univalue segment. For a large enough g, SUSAN operator
/// becomes an edge dectector.
///
/// # Parameters
///
/// - `input` is input grayscale/intensity image
/// - `radius` is the nucleus radius for each pixel neighborhood
/// - `diff_thr` is intensity difference threshold a.k.a **t** from equations in description
/// - `geom_thr` is the geometric threshold
/// - `feature_ratio` is maximum number of features that will be returned by the function
/// - `edge` indicates how many pixels width area should be skipped for corner detection
///
/// # Return Values
/// An object of type [Features](./struct.Features.html) composed of arrays for x and y coordinates, score, orientation and size of selected features.
pub fn susan<T>(
input: &Array<T>,
radius: u32,
diff_thr: f32,
geom_thr: f32,
feature_ratio: f32,
edge: u32,
) -> Features
where
T: HasAfEnum + ImageFilterType,
{
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let err_val = af_susan(
&mut temp as *mut af_features,
input.get(),
radius,
diff_thr,
geom_thr,
feature_ratio,
edge,
);
HANDLE_ERROR(AfError::from(err_val));
Features { feat: temp }
}
}
/// Difference of Gaussians.
///
/// Given an image, this function computes two different versions of smoothed input image using the
/// difference smoothing parameters and subtracts one from the other and returns the result.
///
/// # Parameters
///
/// - `input` is the input image
/// - `radius1` is the radius of the first gaussian kernel
/// - `radius2` is the radius of the second gaussian kernel
///
/// # Return Values
///
/// Difference of smoothed inputs - An Array.
pub fn dog<T>(input: &Array<T>, radius1: i32, radius2: i32) -> Array<T::AbsOutType>
where
T: HasAfEnum + ImageFilterType,
T::AbsOutType: HasAfEnum,
{
unsafe {
let mut temp: af_array = std::ptr::null_mut();
let err_val = af_dog(&mut temp as *mut af_array, input.get(), radius1, radius2);
HANDLE_ERROR(AfError::from(err_val));
temp.into()
}
}
/// Homography estimation
///
/// Homography estimation find a perspective transform between two sets of 2D points.
/// Currently, two methods are supported for the estimation, RANSAC (RANdom SAmple Consensus)
/// and LMedS (Least Median of Squares). Both methods work by randomly selecting a subset
/// of 4 points of the set of source points, computing the eigenvectors of that set and
/// finding the perspective transform. The process is repeated several times, a maximum of
/// times given by the value passed to the iterations arguments for RANSAC (for the CPU
/// backend, usually less than that, depending on the quality of the dataset, but for CUDA
/// and OpenCL backends the transformation will be computed exactly the amount of times
/// passed via the iterations parameter), the returned value is the one that matches the
/// best number of inliers, which are all of the points that fall within a maximum L2
/// distance from the value passed to the inlier_thr argument.
///
/// # Parameters
///
/// - `x_src` is the x coordinates of the source points.
/// - `y_src` is the y coordinates of the source points.
/// - `x_dst` is the x coordinates of the destination points.
/// - `y_dst` is the y coordinates of the destination points.
/// - `htype` can be AF_HOMOGRAPHY_RANSAC, for which a RANdom SAmple Consensus will be used to evaluate the homography quality (e.g., number of inliers), or AF_HOMOGRAPHY_LMEDS, which will use Least Median of Squares method to evaluate homography quality
/// - `inlier_thr` - if htype is AF_HOMOGRAPHY_RAN | {
unsafe {
let mut temp: af_array = std::ptr::null_mut();
let err_val = af_match_template(
&mut temp as *mut af_array,
search_img.get(),
template_img.get(),
mtype as c_uint,
);
HANDLE_ERROR(AfError::from(err_val));
temp.into()
}
} | identifier_body |
mod.rs | get_features_score(out: *mut af_array, feat: af_features) -> c_int;
fn af_get_features_orientation(out: *mut af_array, feat: af_features) -> c_int;
fn af_get_features_size(out: *mut af_array, feat: af_features) -> c_int;
fn af_release_features(feat: af_features) -> c_int;
fn af_fast(
out: *mut af_features,
input: af_array,
thr: c_float,
arc_len: c_uint,
non_max: bool,
feature_ratio: c_float,
edge: c_uint,
) -> c_int;
fn af_harris(
out: *mut af_features,
input: af_array,
m: c_uint,
r: c_float,
s: c_float,
bs: c_uint,
k: c_float,
) -> c_int;
fn af_orb(
out: *mut af_features,
desc: *mut af_array,
arr: af_array,
fast_thr: c_float,
max_feat: c_uint,
scl_fctr: c_float,
levels: c_uint,
blur_img: bool,
) -> c_int;
fn af_hamming_matcher(
idx: *mut af_array,
dist: *mut af_array,
query: af_array,
train: af_array,
dist_dim: dim_t,
n_dist: c_uint,
) -> c_int;
fn af_nearest_neighbour(
idx: *mut af_array,
dist: *mut af_array,
q: af_array,
t: af_array,
dist_dim: dim_t,
n_dist: c_uint,
dist_type: c_int,
) -> c_int;
fn af_match_template(
out: *mut af_array,
search_img: af_array,
template_img: af_array,
mtype: c_uint,
) -> c_int;
fn af_susan(
feat: *mut af_features,
i: af_array,
r: c_uint,
d: c_float,
g: c_float,
f: c_float,
e: c_uint,
) -> c_int;
fn af_dog(out: *mut af_array, i: af_array, r1: c_int, r2: c_int) -> c_int;
fn af_homography(
H: *mut af_array,
inliers: *mut c_int,
x_src: af_array,
y_src: af_array,
x_dst: af_array,
y_dst: af_array,
htype: c_uint,
inlier_thr: c_float,
iterations: c_uint,
otype: c_uint,
) -> c_int;
}
/// A set of Array objects (usually, used in Computer vision context)
///
/// `Features` struct is used by computer vision functions
/// to return the outcome of their operation. Typically, such output
/// has the following Arrays:
///
/// - X positions of the features
/// - Y positions of the features
/// - Scores of the features
/// - Orientations of the features
/// - Sizes of the features
///
/// ## Sharing Across Threads
///
/// While sharing this object with other threads, there is no need to wrap
/// this in an Arc object unless only one such object is required to exist.
/// The reason being that ArrayFire's internal details that are pointed to
/// by the features handle are appropriately reference counted in thread safe
/// manner. However, if these features are to be edited, then please do wrap
/// the object using a Mutex or Read-Write lock.
pub struct Features {
feat: af_features,
}
unsafe impl Send for Features {}
unsafe impl Sync for Features {}
macro_rules! feat_func_def {
($doc_str: expr, $fn_name: ident, $ffi_name: ident) => {
#[doc=$doc_str]
pub fn $fn_name(&self) -> Array<f32> {
unsafe {
let mut temp: af_array = std::ptr::null_mut();
let err_val = $ffi_name(&mut temp as *mut af_array, self.feat);
HANDLE_ERROR(AfError::from(err_val));
let temp_array: Array<f32> = temp.into();
let retained = temp_array.clone();
mem::forget(temp_array);
retained
}
}
};
}
impl Features {
/// Create and return an object of type Features
///
/// This object is basically a bunch of Arrays.
pub fn new(n: u64) -> Self {
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let err_val = af_create_features(&mut temp as *mut af_features, n as dim_t);
HANDLE_ERROR(AfError::from(err_val));
Self { feat: temp }
}
}
/// Get total number of features found
pub fn num_features(&self) -> i64 {
let mut temp: i64 = 0;
unsafe {
let err_val = af_get_features_num(
&mut temp as *mut dim_t,
self.feat as *const dim_t as af_features,
);
HANDLE_ERROR(AfError::from(err_val));
}
temp
}
feat_func_def!("Get x coordinates Array", xpos, af_get_features_xpos);
feat_func_def!("Get y coordinates Array", ypos, af_get_features_ypos);
feat_func_def!("Get score Array", score, af_get_features_score);
feat_func_def!(
"Get orientation Array",
orientation,
af_get_features_orientation
);
feat_func_def!("Get features size Array", size, af_get_features_size);
}
impl Clone for Features {
fn clone(&self) -> Self {
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let ret_val = af_retain_features(&mut temp as *mut af_features, self.feat);
HANDLE_ERROR(AfError::from(ret_val));
Self { feat: temp }
}
}
}
impl Drop for Features {
fn drop(&mut self) {
unsafe {
let ret_val = af_release_features(self.feat);
HANDLE_ERROR(AfError::from(ret_val));
}
}
}
/// Fast feature detector
///
/// A circle of radius 3 pixels, translating into a total of 16 pixels, is checked for sequential
/// segments of pixels much brighter or much darker than the central one. For a pixel p to be
/// considered a feature, there must exist a sequential segment of arc_length pixels in the circle
/// around it such that all are greather than (p + thr) or smaller than (p - thr). After all
/// features in the image are detected, if nonmax is true, the non-maximal suppression is applied,
/// checking all detected features and the features detected in its 8-neighborhood and discard it
/// if its score is non maximal.
///
/// # Parameters
///
/// - `input` - the input image Array
/// - `thr` - FAST threshold for which pixel of the circle around the center pixel is considered to
/// be greater or smaller
/// - `arc_len` - length of arc (or sequential segment) to be tested, must be within range [9-16]
/// - `non_max` - performs non-maximal supression if true
/// - `feat_ratio` - maximum ratio of features to detect, the maximum number of features is
/// calculated by `feature_ratio * num of elements`. The maximum number of features is not based on
/// the score, instead, features detected after the limit is reached are discarded.
/// - `edge` - is the length of the edges in the image to be discarded by FAST(minimum is 3, as the
/// radius of the circle)
///
/// # Return Values
///
/// This function returns an object of struct [Features](./struct.Features.html) containing Arrays
/// for x and y coordinates and score, while array oreientation is set to 0 as FAST does not
/// compute orientation. Size is set to 1 as FAST does not compute multiple scales.
pub fn | <T>(
input: &Array<T>,
thr: f32,
arc_len: u32,
non_max: bool,
feat_ratio: f32,
edge: u32,
) -> Features
where
T: HasAfEnum + ImageFilterType,
{
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let err_val = af_fast(
&mut temp as *mut af_features,
input.get(),
thr,
arc_len,
non_max,
feat_ratio,
edge,
);
HANDLE_ERROR(AfError::from(err_val));
Features { feat: temp }
}
}
/// Harris corner detector.
///
/// Compute corners using the Harris corner detector approach. For each pixel, a small window is
/// used to calculate the determinant and trace of such a window, from which a response is
/// calculated. Pixels are considered corners if they are local maximas and have a high positive
/// response.
///
/// # Parameters
///
/// - `input` is the array containing a grayscale image (color images are not supported)
/// - `max_corners` is the maximum number of corners to keep, only retains those with highest Harris responses
/// - `min_response` is the minimum response in order for a corner to be retained, only used if max_corners = 0
/// - `sigma` is the standard deviation of a circular window (its dimensions will be calculated according to the standard deviation), the covariation matrix will be calculated to a circular neighborhood of this standard deviation (only used when block_size == 0, must be >= 0.5f and <= 5.0f)
/// - `block_size` is square window size, the covariation matrix will be calculated to a square neighborhood of this size (must be >= 3 and <= 31)
/// - `k_thr` is the Harris constant, usually set empirically to 0.04f (must be >= 0.01f)
///
/// # Return Values
///
/// This function returns an object of struct [Features](./struct.Features.html) containing Arrays
/// for x and y coordinates and score, while array oreientation & size are set to 0 & 1,
/// respectively, since harris doesn't compute that information
pub fn harris<T>(
input: &Array<T>,
max_corners: u32,
min_response: f32,
sigma: f32,
block_size: u32,
k_thr: f32,
) -> Features
where
T: HasAfEnum + RealFloating,
{
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let err_val = af_harris(
&mut temp as *mut af_features,
input.get(),
max_corners,
min_response,
sigma,
block_size,
k_thr,
);
HANDLE_ERROR(AfError::from(err_val));
Features { feat: temp }
}
}
/// ORB feature descriptor
///
/// Extract ORB descriptors from FAST features that hold higher Harris responses. FAST does not
/// compute orientation, thus, orientation of features is calculated using the intensity centroid.
/// As FAST is also not multi-scale enabled, a multi-scale pyramid is calculated by downsampling
/// the input image multiple times followed by FAST feature detection on each scale.
///
/// # Parameters
///
/// - `input` - the input image Array
/// - `fast_thr` - FAST threshold for which a pixel of the circle around the central pixel is
/// considered to be brighter or darker
/// - `max_feat` - maximum number of features to hold
/// - `scl_fctr` - factor to downsample the input image, meaning that each level with hold prior
/// level dimensions divided by `scl_fctr`
/// - `levels` - number of levels to be computed for the image pyramid
/// - `blur_img` - blur image with a Gaussian filter with sigma=2 before computing descriptors to
/// increase robustness against noise if true
///
/// # Return Values
///
/// This function returns a tuple of [`Features`](./struct.Features.html) and [`Array`](./struct.Array.html). The features objects composed of Arrays for x and y coordinates, score, orientation and size of selected features. The Array object is a two dimensional Array of size Nx8 where N is number of selected features.
pub fn orb<T>(
input: &Array<T>,
fast_thr: f32,
max_feat: u32,
scl_fctr: f32,
levels: u32,
blur_img: bool,
) -> (Features, Array<T>)
where
T: HasAfEnum + RealFloating,
{
unsafe {
let mut f: af_features = std::ptr::null_mut();
let mut d: af_array = std::ptr::null_mut();
let err_val = af_orb(
&mut f as *mut af_features,
&mut d as *mut af_array,
input.get(),
fast_thr,
max_feat,
scl_fctr,
levels,
blur_img,
);
HANDLE_ERROR(AfError::from(err_val));
(Features { feat: f }, d.into())
}
}
/// Hamming feature matcher
///
/// Calculates Hamming distances between two 2-dimensional arrays containing features, one of the
/// arrays containing the training data and the other the query data. One of the dimensions of the
/// both arrays must be equal among them, identifying the length of each feature. The other
/// dimension indicates the total number of features in each of the training and query arrays. Two
/// 1-dimensional arrays are created as results, one containg the smallest N distances of the query
/// array and another containing the indices of these distances in the training array. The
/// resulting 1-dimensional arrays have length equal to the number of features contained in the
/// query array.
///
/// # Parameters
///
/// - `query` - Array containing the data to be queried
/// - `train` - Array containing the data to be used as training data
/// - `dist_dims` - indicates the dimension to analyze for distance (the dimension indicated here
/// must be of equal length for both query and train arrays)
/// - `n_dist` - is the number of smallest distances to return (currently, only values <= 256 are supported)
///
///
/// # Return Values
///
/// This function returns a tuple of [Array](./struct.Array.html)'s.
///
/// First Array is an array of MxN size, where M is equal to the number of query features and N is
/// equal to n_dist. The value at position IxJ indicates the index of the Jth smallest distance to
/// the Ith query value in the train data array. the index of the Ith smallest distance of the Mth
/// query.
///
/// Second Array is an array of MxN size, where M is equal to the number of query features and N is
/// equal to n_dist. The value at position IxJ indicates the Hamming distance of the Jth smallest
/// distance to the Ith query value in the train data array.
pub fn hamming_matcher<T>(
query: &Array<T>,
train: &Array<T>,
dist_dims: i64,
n_dist: u32,
) -> (Array<u32>, Array<T::AggregateOutType>)
where
T: HasAfEnum + ImageFilterType,
T::AggregateOutType: HasAfEnum,
{
unsafe {
let mut idx: af_array = std::ptr::null_mut();
let mut dist: af_array = std::ptr::null_mut();
let err_val = af_hamming_matcher(
&mut idx as *mut af_array,
&mut dist as *mut af_array,
query.get(),
train.get(),
dist_dims,
n_dist,
);
HANDLE_ERROR(AfError::from(err_val));
(idx.into(), dist.into())
}
}
/// Nearest Neighbour.
///
/// Calculates nearest distances between two 2-dimensional arrays containing features based on the
/// type of distance computation chosen. Currently, AF_SAD (sum of absolute differences), AF_SSD
/// (sum of squared differences) and AF_SHD (hamming distance) are supported. One of the arrays
/// containing the training data and the other the query data. One of the dimensions of the both
/// arrays must be equal among them, identifying the length of each feature. The other dimension
/// indicates the total number of features in each of the training and query arrays. Two
/// 1-dimensional arrays are created as results, one containg the smallest N distances of the query
/// array and another containing the indices of these distances in the training array. The resulting
/// 1-dimensional arrays have length equal to the number of features contained in the query array.
///
/// # Parameters
///
/// - `query` is the array containing the data to be queried
/// - `train` is the array containing the data used as training data
/// - `dist_dim` indicates the dimension to analyze for distance (the dimension indicated here must be of equal length for both query and train arrays)
/// - `n_dist` is the number of smallest distances to return (currently, only values <= 256 are supported)
/// - `dist_type` is the distance computation type. Currently [`MatchType::SAD`](./enum.MatchType.html), [`MatchType::SSD`](./enum.MatchType.html), and [`MatchType::SHD`](./enum.MatchType.html) are supported.
///
/// # Return Values
///
/// A tuple of Arrays.
///
/// The first Array is is an array of MxN size, where M is equal to the number of query features
/// and N is equal to `n_dist`. The value at position IxJ indicates the index of the Jth smallest
/// distance to the Ith query value in the train data array. the index of the Ith smallest distance
/// of the Mth query.
///
/// The second Array is is an array of MxN size, where M is equal to the number of query features
/// and N is equal to `n_dist`. The value at position IxJ indicates the distance of the Jth smallest
/// distance to the Ith query value in the train data array based on the `dist_type` chosen.
pub fn nearest_neighbour<T>(
query: &Array<T>,
train: &Array<T>,
dist_dim: i64,
n_dist: u32,
dist_type: MatchType,
) -> (Array<u32>, Array<T::AggregateOutType>)
where
T: HasAfEnum + ImageFilterType,
T::AggregateOutType: HasAfEnum,
{
unsafe {
let mut idx: af_array = std::ptr::null_mut();
let mut dist: af_array = std::ptr::null_mut();
let err_val = af_nearest_neighbour(
&mut idx as *mut af_array,
&mut dist as *mut af_array,
query.get(),
train.get(),
dist_dim,
n_dist,
dist_type as c_int,
);
HANDLE_ERROR(AfError::from(err_val));
(idx.into(), dist.into())
}
}
/// Image matching
///
/// Template matching is an image processing technique to find small patches of an image which
/// match a given template image. A more in depth discussion on the topic can be found
/// [here](https://en.wikipedia.org/wiki/Template_matching).
///
/// # Parameters
///
/// - `search_img` is an array with image data
/// - `template_img` is the template we are looking for in the image
/// - `mtype` is metric that should be used to calculate the disparity between window in the image and the template image. It can be one of the values defined by the enum [MatchType](./enum.MatchType.html).
/// # Return Values
///
/// This function returns an Array with disparity values for the window starting at corresponding pixel position.
pub fn match_template<T>(
search_img: &Array<T>,
template_img: &Array<T>,
mtype: MatchType,
) -> Array<T::AbsOutType>
where
T: HasAfEnum + ImageFilterType,
T::AbsOutType: HasAfEnum,
{
unsafe {
let mut temp: af_array = std::ptr::null_mut();
let err_val = af_match_template(
&mut temp as *mut af_array,
search_img.get(),
template_img.get(),
mtype as c_uint,
);
HANDLE_ERROR(AfError::from(err_val));
temp.into()
}
}
/// SUSAN corner detector.
///
/// SUSAN is an acronym standing for Smallest Univalue Segment Assimilating Nucleus. This method
/// places a circular disc over the pixel to be tested (a.k.a nucleus) to compute the corner
/// measure of that corresponding pixel. The region covered by the circular disc is M, and a pixel
/// in this region is represented by m⃗ ∈M where m⃗ 0 is the nucleus. Every pixel in the region is
/// compared to the nucleus using the following comparison function:
///
/// c(m⃗ )=e^−((I(m⃗)−I(m⃗_0))/t)^6
///
/// where t is radius of the region, I is the brightness of the pixel.
///
/// Response of SUSAN operator is given by the following equation:
///
/// R(M) = g−n(M) if n(M) < g
///
/// R(M) = 0 otherwise,
///
/// where n(M)=∑c(m⃗) m⃗∈M, g is named the geometric threshold and n is the number of pixels in the
/// mask which are within t of the nucleus.
///
/// Importance of the parameters, t and g is explained below:
///
/// - t determines how similar points have to be to the nucleusbefore they are considered to be a
/// part of the univalue segment
/// - g determines the minimum size of the univalue segment. For a large enough g, SUSAN operator
/// becomes an edge dectector.
///
/// # Parameters
///
/// - `input` is input grayscale/intensity image
/// - `radius` is the nucleus radius for each pixel neighborhood
/// - `diff_thr` is intensity difference threshold a.k.a **t** from equations in description
/// - `geom_thr` is the geometric threshold
/// - `feature_ratio` is maximum number of features that will be returned by the function
/// - `edge` indicates how many pixels width area should be skipped for corner detection
///
/// # Return Values
/// An object of type [Features](./struct.Features.html) composed of arrays for x and y coordinates, score, orientation and size of selected features.
pub fn susan<T>(
input: &Array<T>,
radius: u32,
diff_thr: f32,
geom_thr: f32,
feature_ratio: f32,
edge: u32,
) -> Features
where
T: HasAfEnum + ImageFilterType,
{
unsafe {
let mut temp: af_features = std::ptr::null_mut();
let err_val = af_susan(
&mut temp as *mut af_features,
input.get(),
radius,
diff_thr,
geom_thr,
feature_ratio,
edge,
);
HANDLE_ERROR(AfError::from(err_val));
Features { feat: temp }
}
}
/// Difference of Gaussians.
///
/// Given an image, this function computes two different versions of smoothed input image using the
/// difference smoothing parameters and subtracts one from the other and returns the result.
///
/// # Parameters
///
/// - `input` is the input image
/// - `radius1` is the radius of the first gaussian kernel
/// - `radius2` is the radius of the second gaussian kernel
///
/// # Return Values
///
/// Difference of smoothed inputs - An Array.
pub fn dog<T>(input: &Array<T>, radius1: i32, radius2: i32) -> Array<T::AbsOutType>
where
T: HasAfEnum + ImageFilterType,
T::AbsOutType: HasAfEnum,
{
unsafe {
let mut temp: af_array = std::ptr::null_mut();
let err_val = af_dog(&mut temp as *mut af_array, input.get(), radius1, radius2);
HANDLE_ERROR(AfError::from(err_val));
temp.into()
}
}
/// Homography estimation
///
/// Homography estimation find a perspective transform between two sets of 2D points.
/// Currently, two methods are supported for the estimation, RANSAC (RANdom SAmple Consensus)
/// and LMedS (Least Median of Squares). Both methods work by randomly selecting a subset
/// of 4 points of the set of source points, computing the eigenvectors of that set and
/// finding the perspective transform. The process is repeated several times, a maximum of
/// times given by the value passed to the iterations arguments for RANSAC (for the CPU
/// backend, usually less than that, depending on the quality of the dataset, but for CUDA
/// and OpenCL backends the transformation will be computed exactly the amount of times
/// passed via the iterations parameter), the returned value is the one that matches the
/// best number of inliers, which are all of the points that fall within a maximum L2
/// distance from the value passed to the inlier_thr argument.
///
/// # Parameters
///
/// - `x_src` is the x coordinates of the source points.
/// - `y_src` is the y coordinates of the source points.
/// - `x_dst` is the x coordinates of the destination points.
/// - `y_dst` is the y coordinates of the destination points.
/// - `htype` can be AF_HOMOGRAPHY_RANSAC, for which a RANdom SAmple Consensus will be used to evaluate the homography quality (e.g., number of inliers), or AF_HOMOGRAPHY_LMEDS, which will use Least Median of Squares method to evaluate homography quality
/// - `inlier_thr` - if htype is AF_HOMOGRAPHY_RAN | fast | identifier_name |
decode_properties.rs | // Copyright 2020 TiKV Project Authors. Licensed under Apache-2.0.
//! Base types used by various rocks properties decoders
use std::collections::BTreeMap;
use std::io::Read;
use std::ops::{Deref, DerefMut};
use tikv_util::codec::number::{self, NumberEncoder};
use tikv_util::codec::Result;
#[derive(Clone, Debug, Default)]
pub struct IndexHandle {
pub size: u64, // The size of the stored block
pub offset: u64, // The offset of the block in the file
}
#[derive(Debug, Default)]
pub struct IndexHandles(BTreeMap<Vec<u8>, IndexHandle>);
impl Deref for IndexHandles {
type Target = BTreeMap<Vec<u8>, IndexHandle>;
fn deref(&self) -> &BTreeMap<Vec<u8>, IndexHandle> {
&self.0
}
}
impl DerefMut for IndexHandles {
fn deref_mut(&mut self) -> &mut BTreeMap<Vec<u8>, IndexHandle> {
&mut self.0
}
}
impl IndexHandles {
pub fn new() -> IndexHandles {
IndexHandles(BTreeMap::new())
}
pub fn into_map(self) -> BTreeMap<Vec<u8>, IndexHandle> {
self.0
}
pub fn add(&mut self, key: Vec<u8>, index_handle: IndexHandle) {
self.0.insert(key, index_handle);
}
// Format: | klen | k | v.size | v.offset |
pub fn encode(&self) -> Vec<u8> {
let mut buf = Vec::with_capacity(1024);
for (k, v) in &self.0 {
buf.encode_u64(k.len() as u64).unwrap();
buf.extend(k);
buf.encode_u64(v.size).unwrap();
buf.encode_u64(v.offset).unwrap();
}
buf
}
pub fn decode(mut buf: &[u8]) -> Result<IndexHandles> {
let mut res = BTreeMap::new();
while!buf.is_empty() {
let klen = number::decode_u64(&mut buf)?;
let mut k = vec![0; klen as usize];
buf.read_exact(&mut k)?;
let v = IndexHandle {
size: number::decode_u64(&mut buf)?,
offset: number::decode_u64(&mut buf)?,
};
res.insert(k, v);
}
Ok(IndexHandles(res))
}
}
pub trait DecodeProperties {
fn decode(&self, k: &str) -> Result<&[u8]>;
fn decode_u64(&self, k: &str) -> Result<u64> {
let mut buf = self.decode(k)?;
number::decode_u64(&mut buf)
} | fn decode_handles(&self, k: &str) -> Result<IndexHandles> {
let buf = self.decode(k)?;
IndexHandles::decode(buf)
}
}
impl DecodeProperties for rocksdb::UserCollectedProperties {
fn decode(&self, k: &str) -> tikv_util::codec::Result<&[u8]> {
self.get(k.as_bytes())
.ok_or(tikv_util::codec::Error::KeyNotFound)
}
} | random_line_split |
|
decode_properties.rs | // Copyright 2020 TiKV Project Authors. Licensed under Apache-2.0.
//! Base types used by various rocks properties decoders
use std::collections::BTreeMap;
use std::io::Read;
use std::ops::{Deref, DerefMut};
use tikv_util::codec::number::{self, NumberEncoder};
use tikv_util::codec::Result;
#[derive(Clone, Debug, Default)]
pub struct IndexHandle {
pub size: u64, // The size of the stored block
pub offset: u64, // The offset of the block in the file
}
#[derive(Debug, Default)]
pub struct IndexHandles(BTreeMap<Vec<u8>, IndexHandle>);
impl Deref for IndexHandles {
type Target = BTreeMap<Vec<u8>, IndexHandle>;
fn deref(&self) -> &BTreeMap<Vec<u8>, IndexHandle> {
&self.0
}
}
impl DerefMut for IndexHandles {
fn | (&mut self) -> &mut BTreeMap<Vec<u8>, IndexHandle> {
&mut self.0
}
}
impl IndexHandles {
pub fn new() -> IndexHandles {
IndexHandles(BTreeMap::new())
}
pub fn into_map(self) -> BTreeMap<Vec<u8>, IndexHandle> {
self.0
}
pub fn add(&mut self, key: Vec<u8>, index_handle: IndexHandle) {
self.0.insert(key, index_handle);
}
// Format: | klen | k | v.size | v.offset |
pub fn encode(&self) -> Vec<u8> {
let mut buf = Vec::with_capacity(1024);
for (k, v) in &self.0 {
buf.encode_u64(k.len() as u64).unwrap();
buf.extend(k);
buf.encode_u64(v.size).unwrap();
buf.encode_u64(v.offset).unwrap();
}
buf
}
pub fn decode(mut buf: &[u8]) -> Result<IndexHandles> {
let mut res = BTreeMap::new();
while!buf.is_empty() {
let klen = number::decode_u64(&mut buf)?;
let mut k = vec![0; klen as usize];
buf.read_exact(&mut k)?;
let v = IndexHandle {
size: number::decode_u64(&mut buf)?,
offset: number::decode_u64(&mut buf)?,
};
res.insert(k, v);
}
Ok(IndexHandles(res))
}
}
pub trait DecodeProperties {
fn decode(&self, k: &str) -> Result<&[u8]>;
fn decode_u64(&self, k: &str) -> Result<u64> {
let mut buf = self.decode(k)?;
number::decode_u64(&mut buf)
}
fn decode_handles(&self, k: &str) -> Result<IndexHandles> {
let buf = self.decode(k)?;
IndexHandles::decode(buf)
}
}
impl DecodeProperties for rocksdb::UserCollectedProperties {
fn decode(&self, k: &str) -> tikv_util::codec::Result<&[u8]> {
self.get(k.as_bytes())
.ok_or(tikv_util::codec::Error::KeyNotFound)
}
}
| deref_mut | identifier_name |
knowledge.rs | use std::collections::BTreeMap;
use game::*;
use spatial_hash::*;
use coord::Coord;
use util::TwoDimensionalCons;
/// Trait implemented by representations of knowledge about a level
pub trait LevelKnowledge {
/// Updates a cell of the knowledge representation, returnig true iff the
/// knowledge of the cell changed as a result of the update.
fn update_cell(&mut self, coord: Coord, world_cell: &SpatialHashCell, accuracy: f64, action_env: ActionEnv) -> bool;
}
#[derive(Serialize, Deserialize)]
pub struct GameKnowledge<K: LevelKnowledge> {
levels: BTreeMap<LevelId, K>,
}
impl<K: LevelKnowledge> GameKnowledge<K> {
pub fn new() -> Self {
GameKnowledge {
levels: BTreeMap::new(),
}
}
pub fn level(&self, level_id: LevelId) -> &K {
self.levels.get(&level_id).expect("No such level")
}
pub fn level_mut(&mut self, level_id: LevelId) -> &mut K {
self.levels.get_mut(&level_id).expect("No such level")
}
}
impl<K: LevelKnowledge + Default> Default for GameKnowledge<K> {
fn default() -> Self {
Self::new()
}
}
impl<K: LevelKnowledge + TwoDimensionalCons> GameKnowledge<K> {
pub fn | (&mut self, level_id: LevelId,
width: usize, height: usize) -> &mut K {
self.levels.entry(level_id).or_insert_with(|| K::new(width, height))
}
}
| level_mut_or_insert_size | identifier_name |
knowledge.rs | use std::collections::BTreeMap;
use game::*;
use spatial_hash::*;
use coord::Coord;
use util::TwoDimensionalCons;
/// Trait implemented by representations of knowledge about a level
pub trait LevelKnowledge {
/// Updates a cell of the knowledge representation, returnig true iff the
/// knowledge of the cell changed as a result of the update.
fn update_cell(&mut self, coord: Coord, world_cell: &SpatialHashCell, accuracy: f64, action_env: ActionEnv) -> bool;
}
#[derive(Serialize, Deserialize)]
pub struct GameKnowledge<K: LevelKnowledge> {
levels: BTreeMap<LevelId, K>,
}
impl<K: LevelKnowledge> GameKnowledge<K> {
pub fn new() -> Self {
GameKnowledge {
levels: BTreeMap::new(),
}
}
pub fn level(&self, level_id: LevelId) -> &K {
self.levels.get(&level_id).expect("No such level")
}
pub fn level_mut(&mut self, level_id: LevelId) -> &mut K {
self.levels.get_mut(&level_id).expect("No such level")
}
}
impl<K: LevelKnowledge + Default> Default for GameKnowledge<K> {
fn default() -> Self {
Self::new()
}
}
| self.levels.entry(level_id).or_insert_with(|| K::new(width, height))
}
} | impl<K: LevelKnowledge + TwoDimensionalCons> GameKnowledge<K> {
pub fn level_mut_or_insert_size(&mut self, level_id: LevelId,
width: usize, height: usize) -> &mut K { | random_line_split |
knowledge.rs | use std::collections::BTreeMap;
use game::*;
use spatial_hash::*;
use coord::Coord;
use util::TwoDimensionalCons;
/// Trait implemented by representations of knowledge about a level
pub trait LevelKnowledge {
/// Updates a cell of the knowledge representation, returnig true iff the
/// knowledge of the cell changed as a result of the update.
fn update_cell(&mut self, coord: Coord, world_cell: &SpatialHashCell, accuracy: f64, action_env: ActionEnv) -> bool;
}
#[derive(Serialize, Deserialize)]
pub struct GameKnowledge<K: LevelKnowledge> {
levels: BTreeMap<LevelId, K>,
}
impl<K: LevelKnowledge> GameKnowledge<K> {
pub fn new() -> Self {
GameKnowledge {
levels: BTreeMap::new(),
}
}
pub fn level(&self, level_id: LevelId) -> &K {
self.levels.get(&level_id).expect("No such level")
}
pub fn level_mut(&mut self, level_id: LevelId) -> &mut K |
}
impl<K: LevelKnowledge + Default> Default for GameKnowledge<K> {
fn default() -> Self {
Self::new()
}
}
impl<K: LevelKnowledge + TwoDimensionalCons> GameKnowledge<K> {
pub fn level_mut_or_insert_size(&mut self, level_id: LevelId,
width: usize, height: usize) -> &mut K {
self.levels.entry(level_id).or_insert_with(|| K::new(width, height))
}
}
| {
self.levels.get_mut(&level_id).expect("No such level")
} | identifier_body |
no_1109_corporate_flight_bookings.rs | pub fn corp_flight_bookings(bookings: Vec<Vec<i32>>, n: i32) -> Vec<i32> {
let mut answer = vec![0; n as usize];
for booking in bookings {
answer[booking[0] as usize - 1] += booking[2];
if booking[1] < n {
answer[booking[1] as usize] -= booking[2];
}
}
let mut sum = 0;
for a in answer.iter_mut() {
sum += *a;
*a = sum;
} |
answer
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_corp_flight_bookings() {
let answer = corp_flight_bookings(vec![vec![1, 2, 10], vec![2, 3, 20], vec![2, 5, 25]], 5);
assert_eq!(answer, vec![10, 55, 45, 25, 25]);
}
} | random_line_split |
|
no_1109_corporate_flight_bookings.rs | pub fn corp_flight_bookings(bookings: Vec<Vec<i32>>, n: i32) -> Vec<i32> {
let mut answer = vec![0; n as usize];
for booking in bookings {
answer[booking[0] as usize - 1] += booking[2];
if booking[1] < n {
answer[booking[1] as usize] -= booking[2];
}
}
let mut sum = 0;
for a in answer.iter_mut() {
sum += *a;
*a = sum;
}
answer
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn | () {
let answer = corp_flight_bookings(vec![vec![1, 2, 10], vec![2, 3, 20], vec![2, 5, 25]], 5);
assert_eq!(answer, vec![10, 55, 45, 25, 25]);
}
}
| test_corp_flight_bookings | identifier_name |
no_1109_corporate_flight_bookings.rs | pub fn corp_flight_bookings(bookings: Vec<Vec<i32>>, n: i32) -> Vec<i32> {
let mut answer = vec![0; n as usize];
for booking in bookings {
answer[booking[0] as usize - 1] += booking[2];
if booking[1] < n |
}
let mut sum = 0;
for a in answer.iter_mut() {
sum += *a;
*a = sum;
}
answer
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_corp_flight_bookings() {
let answer = corp_flight_bookings(vec![vec![1, 2, 10], vec![2, 3, 20], vec![2, 5, 25]], 5);
assert_eq!(answer, vec![10, 55, 45, 25, 25]);
}
}
| {
answer[booking[1] as usize] -= booking[2];
} | conditional_block |
no_1109_corporate_flight_bookings.rs | pub fn corp_flight_bookings(bookings: Vec<Vec<i32>>, n: i32) -> Vec<i32> |
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_corp_flight_bookings() {
let answer = corp_flight_bookings(vec![vec![1, 2, 10], vec![2, 3, 20], vec![2, 5, 25]], 5);
assert_eq!(answer, vec![10, 55, 45, 25, 25]);
}
}
| {
let mut answer = vec![0; n as usize];
for booking in bookings {
answer[booking[0] as usize - 1] += booking[2];
if booking[1] < n {
answer[booking[1] as usize] -= booking[2];
}
}
let mut sum = 0;
for a in answer.iter_mut() {
sum += *a;
*a = sum;
}
answer
} | identifier_body |
client_json.rs | #![deny(warnings)]
extern crate hyper;
#[macro_use]
extern crate serde_derive;
extern crate serde;
extern crate serde_json;
use hyper::Client;
use hyper::rt::{self, Future, Stream};
fn main() {
let url = "http://jsonplaceholder.typicode.com/users".parse().unwrap();
let fut = fetch_json(url)
// use the parsed vector
.map(|users| {
// print users
println!("users: {:#?}", users);
// print the sum of ids
let sum = users.iter().fold(0, |acc, user| acc + user.id);
println!("sum of ids: {}", sum);
})
// if there was an error print it
.map_err(|e| {
match e {
FetchError::Http(e) => eprintln!("http error: {}", e),
FetchError::Json(e) => eprintln!("json parsing error: {}", e),
}
});
// Run the runtime with the future trying to fetch, parse and print json.
//
// Note that in more complicated use cases, the runtime should probably
// run on its own, and futures should just be spawned into it.
rt::run(fut);
}
fn fetch_json(url: hyper::Uri) -> impl Future<Item=Vec<User>, Error=FetchError> {
let client = Client::new();
client
// Fetch the url...
.get(url)
// And then, if we get a response back...
.and_then(|res| {
// asynchronously concatenate chunks of the body
res.into_body().concat2()
})
.from_err::<FetchError>()
// use the body after concatenation
.and_then(|body| {
// try to parse as json with serde_json
let users = serde_json::from_slice(&body)?;
Ok(users)
})
.from_err()
}
#[derive(Deserialize, Debug)]
struct User {
id: i32,
name: String,
}
// Define a type so we can return multiple types of errors | Json(serde_json::Error),
}
impl From<hyper::Error> for FetchError {
fn from(err: hyper::Error) -> FetchError {
FetchError::Http(err)
}
}
impl From<serde_json::Error> for FetchError {
fn from(err: serde_json::Error) -> FetchError {
FetchError::Json(err)
}
} | enum FetchError {
Http(hyper::Error), | random_line_split |
client_json.rs | #![deny(warnings)]
extern crate hyper;
#[macro_use]
extern crate serde_derive;
extern crate serde;
extern crate serde_json;
use hyper::Client;
use hyper::rt::{self, Future, Stream};
fn | () {
let url = "http://jsonplaceholder.typicode.com/users".parse().unwrap();
let fut = fetch_json(url)
// use the parsed vector
.map(|users| {
// print users
println!("users: {:#?}", users);
// print the sum of ids
let sum = users.iter().fold(0, |acc, user| acc + user.id);
println!("sum of ids: {}", sum);
})
// if there was an error print it
.map_err(|e| {
match e {
FetchError::Http(e) => eprintln!("http error: {}", e),
FetchError::Json(e) => eprintln!("json parsing error: {}", e),
}
});
// Run the runtime with the future trying to fetch, parse and print json.
//
// Note that in more complicated use cases, the runtime should probably
// run on its own, and futures should just be spawned into it.
rt::run(fut);
}
fn fetch_json(url: hyper::Uri) -> impl Future<Item=Vec<User>, Error=FetchError> {
let client = Client::new();
client
// Fetch the url...
.get(url)
// And then, if we get a response back...
.and_then(|res| {
// asynchronously concatenate chunks of the body
res.into_body().concat2()
})
.from_err::<FetchError>()
// use the body after concatenation
.and_then(|body| {
// try to parse as json with serde_json
let users = serde_json::from_slice(&body)?;
Ok(users)
})
.from_err()
}
#[derive(Deserialize, Debug)]
struct User {
id: i32,
name: String,
}
// Define a type so we can return multiple types of errors
enum FetchError {
Http(hyper::Error),
Json(serde_json::Error),
}
impl From<hyper::Error> for FetchError {
fn from(err: hyper::Error) -> FetchError {
FetchError::Http(err)
}
}
impl From<serde_json::Error> for FetchError {
fn from(err: serde_json::Error) -> FetchError {
FetchError::Json(err)
}
}
| main | identifier_name |
client_json.rs | #![deny(warnings)]
extern crate hyper;
#[macro_use]
extern crate serde_derive;
extern crate serde;
extern crate serde_json;
use hyper::Client;
use hyper::rt::{self, Future, Stream};
fn main() {
let url = "http://jsonplaceholder.typicode.com/users".parse().unwrap();
let fut = fetch_json(url)
// use the parsed vector
.map(|users| {
// print users
println!("users: {:#?}", users);
// print the sum of ids
let sum = users.iter().fold(0, |acc, user| acc + user.id);
println!("sum of ids: {}", sum);
})
// if there was an error print it
.map_err(|e| {
match e {
FetchError::Http(e) => eprintln!("http error: {}", e),
FetchError::Json(e) => eprintln!("json parsing error: {}", e),
}
});
// Run the runtime with the future trying to fetch, parse and print json.
//
// Note that in more complicated use cases, the runtime should probably
// run on its own, and futures should just be spawned into it.
rt::run(fut);
}
fn fetch_json(url: hyper::Uri) -> impl Future<Item=Vec<User>, Error=FetchError> | }
#[derive(Deserialize, Debug)]
struct User {
id: i32,
name: String,
}
// Define a type so we can return multiple types of errors
enum FetchError {
Http(hyper::Error),
Json(serde_json::Error),
}
impl From<hyper::Error> for FetchError {
fn from(err: hyper::Error) -> FetchError {
FetchError::Http(err)
}
}
impl From<serde_json::Error> for FetchError {
fn from(err: serde_json::Error) -> FetchError {
FetchError::Json(err)
}
}
| {
let client = Client::new();
client
// Fetch the url...
.get(url)
// And then, if we get a response back...
.and_then(|res| {
// asynchronously concatenate chunks of the body
res.into_body().concat2()
})
.from_err::<FetchError>()
// use the body after concatenation
.and_then(|body| {
// try to parse as json with serde_json
let users = serde_json::from_slice(&body)?;
Ok(users)
})
.from_err() | identifier_body |
TestNativeExp2.rs | /*
* Copyright (C) 2014 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License"); | * http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma version(1)
#pragma rs java_package_name(android.renderscript.cts)
// Don't edit this file! It is auto-generated by frameworks/rs/api/gen_runtime.
float __attribute__((kernel)) testNativeExp2FloatFloat(float inV) {
return native_exp2(inV);
}
float2 __attribute__((kernel)) testNativeExp2Float2Float2(float2 inV) {
return native_exp2(inV);
}
float3 __attribute__((kernel)) testNativeExp2Float3Float3(float3 inV) {
return native_exp2(inV);
}
float4 __attribute__((kernel)) testNativeExp2Float4Float4(float4 inV) {
return native_exp2(inV);
} | * you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
* | random_line_split |
lib.rs | //! A wrapper around the uchardet library. Detects character encodings.
//!
//! Note that the underlying implemention is written in C and C++, and I'm
//! not aware of any security audits which have been performed against it.
//!
//! ```
//! use uchardet::detect_encoding_name;
//!
//! assert_eq!("WINDOWS-1252",
//! detect_encoding_name(&[0x46, 0x93, 0x72, 0x61, 0x6e, 0xe7, 0x6f,
//! 0x69, 0x73, 0xe9, 0x94]).unwrap());
//! ```
//!
//! For more information, see [this project on
//! GitHub](https://github.com/emk/rust-uchardet).
// Increase the compiler's recursion limit for the `error_chain` crate.
#![recursion_limit = "1024"]
#![deny(missing_docs)]
#[macro_use]
extern crate error_chain;
extern crate libc;
extern crate uchardet_sys as ffi;
use libc::size_t;
use std::ffi::CStr;
use std::str::from_utf8;
use std::os::raw::c_char;
pub use errors::*; |
/// We declare our `error_chain!` in a private submodule so that we can
/// `allow(missing_docs)`.
#[allow(missing_docs)]
mod errors {
error_chain! {
errors {
UnrecognizableCharset {
description("unrecognizable charset")
display("uchardet was unable to recognize a charset")
}
OutOfMemory {
description("out of memory error")
display("uchardet ran out of memory")
}
Other(int: i32) {
description("unknown error")
display("uchardet returned unknown error {}", int)
}
}
}
}
impl ErrorKind {
/// Convert an `nsresult` into a Rust error. We panic if the error
/// value is zero.
fn from_nsresult(nsresult: ::ffi::nsresult) -> ErrorKind {
assert!(nsresult!= 0);
match nsresult {
1 => ErrorKind::OutOfMemory,
int => ErrorKind::Other(int),
}
}
}
/// Detects the encoding of text using the uchardet library.
///
/// EXPERIMENTAL: This may be replaced by a better API soon.
struct EncodingDetector {
ptr: ffi::uchardet_t
}
/// Return the name of the charset used in `data` or an error if uchardet
/// was unable to detect a charset.
///
/// ```
/// use uchardet::detect_encoding_name;
///
/// assert_eq!("ASCII",
/// detect_encoding_name("ascii".as_bytes()).unwrap());
/// assert_eq!("UTF-8",
/// detect_encoding_name("©français".as_bytes()).unwrap());
/// assert_eq!("WINDOWS-1252",
/// detect_encoding_name(&[0x46, 0x93, 0x72, 0x61, 0x6e, 0xe7, 0x6f,
/// 0x69, 0x73, 0xe9, 0x94]).unwrap());
/// ```
pub fn detect_encoding_name(data: &[u8]) -> Result<String> {
let mut detector = EncodingDetector::new();
try!(detector.handle_data(data));
detector.data_end();
detector.charset()
}
impl EncodingDetector {
/// Create a new EncodingDetector.
fn new() -> EncodingDetector {
let ptr = unsafe { ffi::uchardet_new() };
assert!(!ptr.is_null());
EncodingDetector{ptr: ptr}
}
/// Pass a chunk of raw bytes to the detector. This is a no-op if a
/// charset has been detected.
fn handle_data(&mut self, data: &[u8]) -> Result<()> {
let nsresult = unsafe {
ffi::uchardet_handle_data(self.ptr, data.as_ptr() as *const c_char,
data.len() as size_t)
};
match nsresult {
0 => Ok(()),
int => {
Err(ErrorKind::from_nsresult(int).into())
}
}
}
/// Notify the detector that we're done calling `handle_data`, and that
/// we want it to make a guess as to our encoding. This is a no-op if
/// no data has been passed yet, or if an encoding has been detected
/// for certain. From reading the code, it appears that you can safely
/// call `handle_data` after calling this, but I'm not certain.
fn data_end(&mut self) {
unsafe { ffi::uchardet_data_end(self.ptr); }
}
/// Get the decoder's current best guess as to the encoding. May return
/// an error if uchardet was unable to detect an encoding.
fn charset(&self) -> Result<String> {
unsafe {
let internal_str = ffi::uchardet_get_charset(self.ptr);
assert!(!internal_str.is_null());
let bytes = CStr::from_ptr(internal_str).to_bytes();
let charset = from_utf8(bytes);
match charset {
Err(_) =>
panic!("uchardet_get_charset returned a charset name \
containing invalid characters"),
Ok("") => Err(ErrorKind::UnrecognizableCharset.into()),
Ok(encoding) => Ok(encoding.to_string())
}
}
}
}
impl Drop for EncodingDetector {
fn drop(&mut self) {
unsafe { ffi::uchardet_delete(self.ptr) };
}
} | random_line_split |
|
lib.rs | //! A wrapper around the uchardet library. Detects character encodings.
//!
//! Note that the underlying implemention is written in C and C++, and I'm
//! not aware of any security audits which have been performed against it.
//!
//! ```
//! use uchardet::detect_encoding_name;
//!
//! assert_eq!("WINDOWS-1252",
//! detect_encoding_name(&[0x46, 0x93, 0x72, 0x61, 0x6e, 0xe7, 0x6f,
//! 0x69, 0x73, 0xe9, 0x94]).unwrap());
//! ```
//!
//! For more information, see [this project on
//! GitHub](https://github.com/emk/rust-uchardet).
// Increase the compiler's recursion limit for the `error_chain` crate.
#![recursion_limit = "1024"]
#![deny(missing_docs)]
#[macro_use]
extern crate error_chain;
extern crate libc;
extern crate uchardet_sys as ffi;
use libc::size_t;
use std::ffi::CStr;
use std::str::from_utf8;
use std::os::raw::c_char;
pub use errors::*;
/// We declare our `error_chain!` in a private submodule so that we can
/// `allow(missing_docs)`.
#[allow(missing_docs)]
mod errors {
error_chain! {
errors {
UnrecognizableCharset {
description("unrecognizable charset")
display("uchardet was unable to recognize a charset")
}
OutOfMemory {
description("out of memory error")
display("uchardet ran out of memory")
}
Other(int: i32) {
description("unknown error")
display("uchardet returned unknown error {}", int)
}
}
}
}
impl ErrorKind {
/// Convert an `nsresult` into a Rust error. We panic if the error
/// value is zero.
fn from_nsresult(nsresult: ::ffi::nsresult) -> ErrorKind {
assert!(nsresult!= 0);
match nsresult {
1 => ErrorKind::OutOfMemory,
int => ErrorKind::Other(int),
}
}
}
/// Detects the encoding of text using the uchardet library.
///
/// EXPERIMENTAL: This may be replaced by a better API soon.
struct EncodingDetector {
ptr: ffi::uchardet_t
}
/// Return the name of the charset used in `data` or an error if uchardet
/// was unable to detect a charset.
///
/// ```
/// use uchardet::detect_encoding_name;
///
/// assert_eq!("ASCII",
/// detect_encoding_name("ascii".as_bytes()).unwrap());
/// assert_eq!("UTF-8",
/// detect_encoding_name("©français".as_bytes()).unwrap());
/// assert_eq!("WINDOWS-1252",
/// detect_encoding_name(&[0x46, 0x93, 0x72, 0x61, 0x6e, 0xe7, 0x6f,
/// 0x69, 0x73, 0xe9, 0x94]).unwrap());
/// ```
pub fn detect_encoding_name(data: &[u8]) -> Result<String> {
let mut detector = EncodingDetector::new();
try!(detector.handle_data(data));
detector.data_end();
detector.charset()
}
impl EncodingDetector {
/// Create a new EncodingDetector.
fn new() -> EncodingDetector {
let ptr = unsafe { ffi::uchardet_new() };
assert!(!ptr.is_null());
EncodingDetector{ptr: ptr}
}
/// Pass a chunk of raw bytes to the detector. This is a no-op if a
/// charset has been detected.
fn handle_data(&mut self, data: &[u8]) -> Result<()> {
let nsresult = unsafe {
ffi::uchardet_handle_data(self.ptr, data.as_ptr() as *const c_char,
data.len() as size_t)
};
match nsresult {
0 => Ok(()),
int => {
Err(ErrorKind::from_nsresult(int).into())
}
}
}
/// Notify the detector that we're done calling `handle_data`, and that
/// we want it to make a guess as to our encoding. This is a no-op if
/// no data has been passed yet, or if an encoding has been detected
/// for certain. From reading the code, it appears that you can safely
/// call `handle_data` after calling this, but I'm not certain.
fn data_end(&mut self) {
unsafe { ffi::uchardet_data_end(self.ptr); }
}
/// Get the decoder's current best guess as to the encoding. May return
/// an error if uchardet was unable to detect an encoding.
fn charset(&self) -> Result<String> {
unsafe {
let internal_str = ffi::uchardet_get_charset(self.ptr);
assert!(!internal_str.is_null());
let bytes = CStr::from_ptr(internal_str).to_bytes();
let charset = from_utf8(bytes);
match charset {
Err(_) =>
panic!("uchardet_get_charset returned a charset name \
containing invalid characters"),
Ok("") => Err(ErrorKind::UnrecognizableCharset.into()),
Ok(encoding) => Ok(encoding.to_string())
}
}
}
}
impl Drop for EncodingDetector {
fn drop(&mut self) {
| unsafe { ffi::uchardet_delete(self.ptr) };
}
} | identifier_body |
|
lib.rs | //! A wrapper around the uchardet library. Detects character encodings.
//!
//! Note that the underlying implemention is written in C and C++, and I'm
//! not aware of any security audits which have been performed against it.
//!
//! ```
//! use uchardet::detect_encoding_name;
//!
//! assert_eq!("WINDOWS-1252",
//! detect_encoding_name(&[0x46, 0x93, 0x72, 0x61, 0x6e, 0xe7, 0x6f,
//! 0x69, 0x73, 0xe9, 0x94]).unwrap());
//! ```
//!
//! For more information, see [this project on
//! GitHub](https://github.com/emk/rust-uchardet).
// Increase the compiler's recursion limit for the `error_chain` crate.
#![recursion_limit = "1024"]
#![deny(missing_docs)]
#[macro_use]
extern crate error_chain;
extern crate libc;
extern crate uchardet_sys as ffi;
use libc::size_t;
use std::ffi::CStr;
use std::str::from_utf8;
use std::os::raw::c_char;
pub use errors::*;
/// We declare our `error_chain!` in a private submodule so that we can
/// `allow(missing_docs)`.
#[allow(missing_docs)]
mod errors {
error_chain! {
errors {
UnrecognizableCharset {
description("unrecognizable charset")
display("uchardet was unable to recognize a charset")
}
OutOfMemory {
description("out of memory error")
display("uchardet ran out of memory")
}
Other(int: i32) {
description("unknown error")
display("uchardet returned unknown error {}", int)
}
}
}
}
impl ErrorKind {
/// Convert an `nsresult` into a Rust error. We panic if the error
/// value is zero.
fn from_nsresult(nsresult: ::ffi::nsresult) -> ErrorKind {
assert!(nsresult!= 0);
match nsresult {
1 => ErrorKind::OutOfMemory,
int => ErrorKind::Other(int),
}
}
}
/// Detects the encoding of text using the uchardet library.
///
/// EXPERIMENTAL: This may be replaced by a better API soon.
struct EncodingDetector {
ptr: ffi::uchardet_t
}
/// Return the name of the charset used in `data` or an error if uchardet
/// was unable to detect a charset.
///
/// ```
/// use uchardet::detect_encoding_name;
///
/// assert_eq!("ASCII",
/// detect_encoding_name("ascii".as_bytes()).unwrap());
/// assert_eq!("UTF-8",
/// detect_encoding_name("©français".as_bytes()).unwrap());
/// assert_eq!("WINDOWS-1252",
/// detect_encoding_name(&[0x46, 0x93, 0x72, 0x61, 0x6e, 0xe7, 0x6f,
/// 0x69, 0x73, 0xe9, 0x94]).unwrap());
/// ```
pub fn detect_encoding_name(data: &[u8]) -> Result<String> {
let mut detector = EncodingDetector::new();
try!(detector.handle_data(data));
detector.data_end();
detector.charset()
}
impl EncodingDetector {
/// Create a new EncodingDetector.
fn new() -> EncodingDetector {
let ptr = unsafe { ffi::uchardet_new() };
assert!(!ptr.is_null());
EncodingDetector{ptr: ptr}
}
/// Pass a chunk of raw bytes to the detector. This is a no-op if a
/// charset has been detected.
fn handle_data(&mut self, data: &[u8]) -> Result<()> {
let nsresult = unsafe {
ffi::uchardet_handle_data(self.ptr, data.as_ptr() as *const c_char,
data.len() as size_t)
};
match nsresult {
0 => Ok(()),
int => {
Err(ErrorKind::from_nsresult(int).into())
}
}
}
/// Notify the detector that we're done calling `handle_data`, and that
/// we want it to make a guess as to our encoding. This is a no-op if
/// no data has been passed yet, or if an encoding has been detected
/// for certain. From reading the code, it appears that you can safely
/// call `handle_data` after calling this, but I'm not certain.
fn da | mut self) {
unsafe { ffi::uchardet_data_end(self.ptr); }
}
/// Get the decoder's current best guess as to the encoding. May return
/// an error if uchardet was unable to detect an encoding.
fn charset(&self) -> Result<String> {
unsafe {
let internal_str = ffi::uchardet_get_charset(self.ptr);
assert!(!internal_str.is_null());
let bytes = CStr::from_ptr(internal_str).to_bytes();
let charset = from_utf8(bytes);
match charset {
Err(_) =>
panic!("uchardet_get_charset returned a charset name \
containing invalid characters"),
Ok("") => Err(ErrorKind::UnrecognizableCharset.into()),
Ok(encoding) => Ok(encoding.to_string())
}
}
}
}
impl Drop for EncodingDetector {
fn drop(&mut self) {
unsafe { ffi::uchardet_delete(self.ptr) };
}
}
| ta_end(& | identifier_name |
synonym.rs | use std::collections::HashMap;
use std::collections::hash_map::{Iter, Keys};
use std::fmt::Debug;
use std::hash::Hash;
use std::iter::{FromIterator, IntoIterator};
use std::mem;
#[derive(Clone)]
pub struct SynonymMap<K, V> {
vals: HashMap<K, V>,
syns: HashMap<K, K>,
}
impl<K: Eq + Hash, V> SynonymMap<K, V> {
pub fn new() -> SynonymMap<K, V> {
SynonymMap {
vals: HashMap::new(),
syns: HashMap::new(),
}
}
pub fn insert_synonym(&mut self, from: K, to: K) -> bool {
assert!(self.vals.contains_key(&to));
self.syns.insert(from, to).is_none()
}
pub fn keys<'a>(&'a self) -> Keys<'a, K, V> {
self.vals.keys()
}
pub fn iter<'a>(&'a self) -> Iter<'a, K, V> {
self.vals.iter()
}
pub fn synonyms<'a>(&'a self) -> Iter<'a, K, K> {
self.syns.iter()
}
pub fn find<'a>(&'a self, k: &K) -> Option<&'a V> {
self.with_key(k, |k| self.vals.get(k))
}
pub fn contains_key(&self, k: &K) -> bool {
self.with_key(k, |k| self.vals.contains_key(k))
}
pub fn len(&self) -> usize {
self.vals.len()
}
fn with_key<T, F>(&self, k: &K, with: F) -> T where F: FnOnce(&K) -> T {
if self.syns.contains_key(k) {
with(&self.syns[k])
} else |
}
}
impl<K: Eq + Hash + Clone, V> SynonymMap<K, V> {
pub fn resolve(&self, k: &K) -> K {
self.with_key(k, |k| k.clone())
}
pub fn get<'a>(&'a self, k: &K) -> &'a V {
self.find(k).unwrap()
}
pub fn find_mut<'a>(&'a mut self, k: &K) -> Option<&'a mut V> {
if self.syns.contains_key(k) {
self.vals.get_mut(&self.syns[k])
} else {
self.vals.get_mut(k)
}
}
pub fn swap(&mut self, k: K, mut new: V) -> Option<V> {
if self.syns.contains_key(&k) {
let old = self.vals.get_mut(&k).unwrap();
mem::swap(old, &mut new);
Some(new)
} else {
self.vals.insert(k, new)
}
}
pub fn insert(&mut self, k: K, v: V) -> bool {
self.swap(k, v).is_none()
}
}
impl<K: Eq + Hash + Clone, V> FromIterator<(K, V)> for SynonymMap<K, V> {
fn from_iter<T: IntoIterator<Item=(K, V)>>(iter: T) -> SynonymMap<K, V> {
let mut map = SynonymMap::new();
for (k, v) in iter {
map.insert(k, v);
}
map
}
}
impl<K: Eq + Hash + Debug, V: Debug> Debug for SynonymMap<K, V> {
fn fmt(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result {
try!(self.vals.fmt(f));
write!(f, " (synomyns: {:?})", self.syns)
}
}
| {
with(k)
} | conditional_block |
synonym.rs | use std::collections::HashMap;
use std::collections::hash_map::{Iter, Keys};
use std::fmt::Debug;
use std::hash::Hash;
use std::iter::{FromIterator, IntoIterator};
use std::mem;
#[derive(Clone)]
pub struct SynonymMap<K, V> {
vals: HashMap<K, V>,
syns: HashMap<K, K>,
}
impl<K: Eq + Hash, V> SynonymMap<K, V> {
pub fn new() -> SynonymMap<K, V> {
SynonymMap {
vals: HashMap::new(),
syns: HashMap::new(),
}
}
pub fn insert_synonym(&mut self, from: K, to: K) -> bool {
assert!(self.vals.contains_key(&to));
self.syns.insert(from, to).is_none()
}
pub fn keys<'a>(&'a self) -> Keys<'a, K, V> {
self.vals.keys()
}
pub fn iter<'a>(&'a self) -> Iter<'a, K, V> {
self.vals.iter()
}
pub fn synonyms<'a>(&'a self) -> Iter<'a, K, K> {
self.syns.iter()
}
pub fn find<'a>(&'a self, k: &K) -> Option<&'a V> {
self.with_key(k, |k| self.vals.get(k))
}
pub fn contains_key(&self, k: &K) -> bool {
self.with_key(k, |k| self.vals.contains_key(k))
}
pub fn len(&self) -> usize {
self.vals.len()
}
fn with_key<T, F>(&self, k: &K, with: F) -> T where F: FnOnce(&K) -> T {
if self.syns.contains_key(k) {
with(&self.syns[k])
} else {
with(k)
}
}
}
impl<K: Eq + Hash + Clone, V> SynonymMap<K, V> {
pub fn resolve(&self, k: &K) -> K {
self.with_key(k, |k| k.clone())
}
pub fn get<'a>(&'a self, k: &K) -> &'a V {
self.find(k).unwrap()
}
pub fn find_mut<'a>(&'a mut self, k: &K) -> Option<&'a mut V> {
if self.syns.contains_key(k) {
self.vals.get_mut(&self.syns[k])
} else {
self.vals.get_mut(k)
}
}
pub fn swap(&mut self, k: K, mut new: V) -> Option<V> {
if self.syns.contains_key(&k) {
let old = self.vals.get_mut(&k).unwrap();
mem::swap(old, &mut new);
Some(new)
} else {
self.vals.insert(k, new)
}
}
pub fn insert(&mut self, k: K, v: V) -> bool {
self.swap(k, v).is_none()
}
}
impl<K: Eq + Hash + Clone, V> FromIterator<(K, V)> for SynonymMap<K, V> {
fn from_iter<T: IntoIterator<Item=(K, V)>>(iter: T) -> SynonymMap<K, V> {
let mut map = SynonymMap::new();
for (k, v) in iter {
map.insert(k, v);
}
map
} | write!(f, " (synomyns: {:?})", self.syns)
}
} | }
impl<K: Eq + Hash + Debug, V: Debug> Debug for SynonymMap<K, V> {
fn fmt(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result {
try!(self.vals.fmt(f)); | random_line_split |
synonym.rs | use std::collections::HashMap;
use std::collections::hash_map::{Iter, Keys};
use std::fmt::Debug;
use std::hash::Hash;
use std::iter::{FromIterator, IntoIterator};
use std::mem;
#[derive(Clone)]
pub struct SynonymMap<K, V> {
vals: HashMap<K, V>,
syns: HashMap<K, K>,
}
impl<K: Eq + Hash, V> SynonymMap<K, V> {
pub fn | () -> SynonymMap<K, V> {
SynonymMap {
vals: HashMap::new(),
syns: HashMap::new(),
}
}
pub fn insert_synonym(&mut self, from: K, to: K) -> bool {
assert!(self.vals.contains_key(&to));
self.syns.insert(from, to).is_none()
}
pub fn keys<'a>(&'a self) -> Keys<'a, K, V> {
self.vals.keys()
}
pub fn iter<'a>(&'a self) -> Iter<'a, K, V> {
self.vals.iter()
}
pub fn synonyms<'a>(&'a self) -> Iter<'a, K, K> {
self.syns.iter()
}
pub fn find<'a>(&'a self, k: &K) -> Option<&'a V> {
self.with_key(k, |k| self.vals.get(k))
}
pub fn contains_key(&self, k: &K) -> bool {
self.with_key(k, |k| self.vals.contains_key(k))
}
pub fn len(&self) -> usize {
self.vals.len()
}
fn with_key<T, F>(&self, k: &K, with: F) -> T where F: FnOnce(&K) -> T {
if self.syns.contains_key(k) {
with(&self.syns[k])
} else {
with(k)
}
}
}
impl<K: Eq + Hash + Clone, V> SynonymMap<K, V> {
pub fn resolve(&self, k: &K) -> K {
self.with_key(k, |k| k.clone())
}
pub fn get<'a>(&'a self, k: &K) -> &'a V {
self.find(k).unwrap()
}
pub fn find_mut<'a>(&'a mut self, k: &K) -> Option<&'a mut V> {
if self.syns.contains_key(k) {
self.vals.get_mut(&self.syns[k])
} else {
self.vals.get_mut(k)
}
}
pub fn swap(&mut self, k: K, mut new: V) -> Option<V> {
if self.syns.contains_key(&k) {
let old = self.vals.get_mut(&k).unwrap();
mem::swap(old, &mut new);
Some(new)
} else {
self.vals.insert(k, new)
}
}
pub fn insert(&mut self, k: K, v: V) -> bool {
self.swap(k, v).is_none()
}
}
impl<K: Eq + Hash + Clone, V> FromIterator<(K, V)> for SynonymMap<K, V> {
fn from_iter<T: IntoIterator<Item=(K, V)>>(iter: T) -> SynonymMap<K, V> {
let mut map = SynonymMap::new();
for (k, v) in iter {
map.insert(k, v);
}
map
}
}
impl<K: Eq + Hash + Debug, V: Debug> Debug for SynonymMap<K, V> {
fn fmt(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result {
try!(self.vals.fmt(f));
write!(f, " (synomyns: {:?})", self.syns)
}
}
| new | identifier_name |
synonym.rs | use std::collections::HashMap;
use std::collections::hash_map::{Iter, Keys};
use std::fmt::Debug;
use std::hash::Hash;
use std::iter::{FromIterator, IntoIterator};
use std::mem;
#[derive(Clone)]
pub struct SynonymMap<K, V> {
vals: HashMap<K, V>,
syns: HashMap<K, K>,
}
impl<K: Eq + Hash, V> SynonymMap<K, V> {
pub fn new() -> SynonymMap<K, V> {
SynonymMap {
vals: HashMap::new(),
syns: HashMap::new(),
}
}
pub fn insert_synonym(&mut self, from: K, to: K) -> bool {
assert!(self.vals.contains_key(&to));
self.syns.insert(from, to).is_none()
}
pub fn keys<'a>(&'a self) -> Keys<'a, K, V> {
self.vals.keys()
}
pub fn iter<'a>(&'a self) -> Iter<'a, K, V> {
self.vals.iter()
}
pub fn synonyms<'a>(&'a self) -> Iter<'a, K, K> {
self.syns.iter()
}
pub fn find<'a>(&'a self, k: &K) -> Option<&'a V> |
pub fn contains_key(&self, k: &K) -> bool {
self.with_key(k, |k| self.vals.contains_key(k))
}
pub fn len(&self) -> usize {
self.vals.len()
}
fn with_key<T, F>(&self, k: &K, with: F) -> T where F: FnOnce(&K) -> T {
if self.syns.contains_key(k) {
with(&self.syns[k])
} else {
with(k)
}
}
}
impl<K: Eq + Hash + Clone, V> SynonymMap<K, V> {
pub fn resolve(&self, k: &K) -> K {
self.with_key(k, |k| k.clone())
}
pub fn get<'a>(&'a self, k: &K) -> &'a V {
self.find(k).unwrap()
}
pub fn find_mut<'a>(&'a mut self, k: &K) -> Option<&'a mut V> {
if self.syns.contains_key(k) {
self.vals.get_mut(&self.syns[k])
} else {
self.vals.get_mut(k)
}
}
pub fn swap(&mut self, k: K, mut new: V) -> Option<V> {
if self.syns.contains_key(&k) {
let old = self.vals.get_mut(&k).unwrap();
mem::swap(old, &mut new);
Some(new)
} else {
self.vals.insert(k, new)
}
}
pub fn insert(&mut self, k: K, v: V) -> bool {
self.swap(k, v).is_none()
}
}
impl<K: Eq + Hash + Clone, V> FromIterator<(K, V)> for SynonymMap<K, V> {
fn from_iter<T: IntoIterator<Item=(K, V)>>(iter: T) -> SynonymMap<K, V> {
let mut map = SynonymMap::new();
for (k, v) in iter {
map.insert(k, v);
}
map
}
}
impl<K: Eq + Hash + Debug, V: Debug> Debug for SynonymMap<K, V> {
fn fmt(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result {
try!(self.vals.fmt(f));
write!(f, " (synomyns: {:?})", self.syns)
}
}
| {
self.with_key(k, |k| self.vals.get(k))
} | identifier_body |
base.rs | use std::io::Write;
pub enum Color {
Reset,
Black,
White,
Grey,
}
// Shamelessly stolen from termios
// which doesn't compile on Win32
// which is why I'm doing all this nonsense in the first place
pub enum Event {
Key(Key),
Mouse(MouseEvent),
Unsupported(Vec<u32>),
}
// Derived from termios, with modifications
// Precedence **must** be Ctrl(Alt(Shift())) in that order
#[derive(PartialEq, Eq, Hash, Debug, Clone)]
pub enum Key {
Backspace,
Left,
Right,
Up,
Down,
Home,
End,
PageUp,
PageDown,
Delete,
Insert, | F(u8),
Char(char),
Shift(Box<Key>),
Alt(Box<Key>),
Ctrl(Box<Key>),
Null,
Esc,
}
impl Key {
pub fn is_char(&self) -> bool {
match self {
&Key::Char(_) => true,
_ => false
}
}
pub fn is_navigation(&self) -> bool {
match self {
&Key::Left | &Key::Right | &Key::Up | &Key::Down |
&Key::Home | &Key::End | &Key::PageUp | &Key::PageDown => true,
_ => false
}
}
}
// Also termios
pub enum MouseEvent {
Press(MouseButton, i32, i32),
Release(i32, i32),
Hold(i32, i32),
}
// Still termios
pub enum MouseButton {
Left,
Right,
Middle,
WheelUp,
WheelDown,
}
// Me again
pub trait TermImpl: Write + Default {
fn get_size(&self) -> (i32, i32);
fn goto(&mut self, (i32, i32));
fn set_color_fg(&mut self, Color);
fn set_color_bg(&mut self, Color);
fn clear(&mut self);
// fn keys(&mut self) -> Iterator<Item = Event>;
} | random_line_split |
|
base.rs | use std::io::Write;
pub enum Color {
Reset,
Black,
White,
Grey,
}
// Shamelessly stolen from termios
// which doesn't compile on Win32
// which is why I'm doing all this nonsense in the first place
pub enum Event {
Key(Key),
Mouse(MouseEvent),
Unsupported(Vec<u32>),
}
// Derived from termios, with modifications
// Precedence **must** be Ctrl(Alt(Shift())) in that order
#[derive(PartialEq, Eq, Hash, Debug, Clone)]
pub enum Key {
Backspace,
Left,
Right,
Up,
Down,
Home,
End,
PageUp,
PageDown,
Delete,
Insert,
F(u8),
Char(char),
Shift(Box<Key>),
Alt(Box<Key>),
Ctrl(Box<Key>),
Null,
Esc,
}
impl Key {
pub fn | (&self) -> bool {
match self {
&Key::Char(_) => true,
_ => false
}
}
pub fn is_navigation(&self) -> bool {
match self {
&Key::Left | &Key::Right | &Key::Up | &Key::Down |
&Key::Home | &Key::End | &Key::PageUp | &Key::PageDown => true,
_ => false
}
}
}
// Also termios
pub enum MouseEvent {
Press(MouseButton, i32, i32),
Release(i32, i32),
Hold(i32, i32),
}
// Still termios
pub enum MouseButton {
Left,
Right,
Middle,
WheelUp,
WheelDown,
}
// Me again
pub trait TermImpl: Write + Default {
fn get_size(&self) -> (i32, i32);
fn goto(&mut self, (i32, i32));
fn set_color_fg(&mut self, Color);
fn set_color_bg(&mut self, Color);
fn clear(&mut self);
// fn keys(&mut self) -> Iterator<Item = Event>;
}
| is_char | identifier_name |
status.rs | // Copyright (C) 2016 Stefan Luecke
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
//
// Authors: Stefan Luecke <[email protected]>
extern crate telegram_bot;
extern crate hyper;
use std::io::Read;
use std::str::FromStr;
use handler::Handler;
pub struct StatusHandler {
} | impl StatusHandler {
pub fn new() -> StatusHandler {
StatusHandler {}
}
}
impl Handler for StatusHandler {
fn msg_type() -> telegram_bot::MessageType {
telegram_bot::MessageType::Text("".to_string())
}
fn command() -> Vec<String> {
vec![String::from_str("/status").unwrap()]
}
fn process(&self,
m: telegram_bot::Message,
a: telegram_bot::Api)
-> Result<telegram_bot::Message, telegram_bot::Error> {
let client = hyper::client::Client::new();
let status = client.get("https://labctl.openlab-augsburg.de/sphincter/?action=state")
.send();
let name = m.from.first_name;
match status {
Ok(mut response) => {
let mut buf = String::new();
let body = response.read_to_string(&mut buf);
match body {
Ok(_) => {
return a.send_message(m.chat.id(),
format!("Hey {}! The Lab status is {}", name, buf),
None,
None,
None,
None)
}
Err(e) => {
println!("Error: {}", e);
return a.send_message(m.chat.id(),
format!("Sorry {}! The Lab status is currently \
unavailable. Please Try again later.",
name),
None,
None,
None,
None);
}
}
}
Err(e) => {
println!("Error: {}", e);
return a.send_message(m.chat.id(),
format!("Sorry {}! The Lab status is currently \
unavailable. Please Try again later.",
name),
None,
None,
None,
None);
}
}
}
} | random_line_split |
|
status.rs | // Copyright (C) 2016 Stefan Luecke
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
//
// Authors: Stefan Luecke <[email protected]>
extern crate telegram_bot;
extern crate hyper;
use std::io::Read;
use std::str::FromStr;
use handler::Handler;
pub struct StatusHandler {
}
impl StatusHandler {
pub fn new() -> StatusHandler |
}
impl Handler for StatusHandler {
fn msg_type() -> telegram_bot::MessageType {
telegram_bot::MessageType::Text("".to_string())
}
fn command() -> Vec<String> {
vec![String::from_str("/status").unwrap()]
}
fn process(&self,
m: telegram_bot::Message,
a: telegram_bot::Api)
-> Result<telegram_bot::Message, telegram_bot::Error> {
let client = hyper::client::Client::new();
let status = client.get("https://labctl.openlab-augsburg.de/sphincter/?action=state")
.send();
let name = m.from.first_name;
match status {
Ok(mut response) => {
let mut buf = String::new();
let body = response.read_to_string(&mut buf);
match body {
Ok(_) => {
return a.send_message(m.chat.id(),
format!("Hey {}! The Lab status is {}", name, buf),
None,
None,
None,
None)
}
Err(e) => {
println!("Error: {}", e);
return a.send_message(m.chat.id(),
format!("Sorry {}! The Lab status is currently \
unavailable. Please Try again later.",
name),
None,
None,
None,
None);
}
}
}
Err(e) => {
println!("Error: {}", e);
return a.send_message(m.chat.id(),
format!("Sorry {}! The Lab status is currently \
unavailable. Please Try again later.",
name),
None,
None,
None,
None);
}
}
}
}
| {
StatusHandler {}
} | identifier_body |
status.rs | // Copyright (C) 2016 Stefan Luecke
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
//
// Authors: Stefan Luecke <[email protected]>
extern crate telegram_bot;
extern crate hyper;
use std::io::Read;
use std::str::FromStr;
use handler::Handler;
pub struct StatusHandler {
}
impl StatusHandler {
pub fn | () -> StatusHandler {
StatusHandler {}
}
}
impl Handler for StatusHandler {
fn msg_type() -> telegram_bot::MessageType {
telegram_bot::MessageType::Text("".to_string())
}
fn command() -> Vec<String> {
vec![String::from_str("/status").unwrap()]
}
fn process(&self,
m: telegram_bot::Message,
a: telegram_bot::Api)
-> Result<telegram_bot::Message, telegram_bot::Error> {
let client = hyper::client::Client::new();
let status = client.get("https://labctl.openlab-augsburg.de/sphincter/?action=state")
.send();
let name = m.from.first_name;
match status {
Ok(mut response) => {
let mut buf = String::new();
let body = response.read_to_string(&mut buf);
match body {
Ok(_) => {
return a.send_message(m.chat.id(),
format!("Hey {}! The Lab status is {}", name, buf),
None,
None,
None,
None)
}
Err(e) => {
println!("Error: {}", e);
return a.send_message(m.chat.id(),
format!("Sorry {}! The Lab status is currently \
unavailable. Please Try again later.",
name),
None,
None,
None,
None);
}
}
}
Err(e) => {
println!("Error: {}", e);
return a.send_message(m.chat.id(),
format!("Sorry {}! The Lab status is currently \
unavailable. Please Try again later.",
name),
None,
None,
None,
None);
}
}
}
}
| new | identifier_name |
render.rs | use smithay::{
backend::renderer::{Frame, ImportAll, Renderer},
desktop::{
draw_window,
space::{DynamicRenderElements, RenderError, Space},
},
utils::{Logical, Rectangle},
wayland::output::Output,
};
use crate::{drawing::*, shell::FullscreenSurface};
pub fn render_output<R>(
output: &Output,
space: &mut Space,
renderer: &mut R,
age: usize,
elements: &[DynamicRenderElements<R>],
log: &slog::Logger,
) -> Result<Option<Vec<Rectangle<i32, Logical>>>, RenderError<R>>
where
R: Renderer + ImportAll +'static,
R::Frame:'static,
R::TextureId:'static,
R::Error:'static,
{
if let Some(window) = output
.user_data()
.get::<FullscreenSurface>()
.and_then(|f| f.get())
{
let transform = output.current_transform().into();
let mode = output.current_mode().unwrap();
let scale = space.output_scale(output).unwrap();
let output_geo = space
.output_geometry(output)
.unwrap_or_else(|| Rectangle::from_loc_and_size((0, 0), (0, 0)));
renderer
.render(mode.size, transform, |renderer, frame| {
let mut damage = window.accumulated_damage(None);
frame.clear(CLEAR_COLOR, &[Rectangle::from_loc_and_size((0, 0), mode.size)])?;
draw_window(
renderer,
frame,
&window,
scale,
(0, 0),
&[Rectangle::from_loc_and_size(
(0, 0),
mode.size.to_f64().to_logical(scale).to_i32_round(),
)],
log,
)?;
for elem in elements {
let geo = elem.geometry();
let location = geo.loc - output_geo.loc;
let elem_damage = elem.accumulated_damage(None);
elem.draw(
renderer,
frame,
scale,
location,
&[Rectangle::from_loc_and_size((0, 0), geo.size)],
log,
)?;
damage.extend(elem_damage.into_iter().map(|mut rect| {
rect.loc += geo.loc;
rect
}))
}
Ok(Some(damage))
})
.and_then(std::convert::identity)
.map_err(RenderError::<R>::Rendering)
} else |
}
| {
space.render_output(&mut *renderer, output, age as usize, CLEAR_COLOR, &*elements)
} | conditional_block |
render.rs | use smithay::{
backend::renderer::{Frame, ImportAll, Renderer},
desktop::{
draw_window,
space::{DynamicRenderElements, RenderError, Space},
},
utils::{Logical, Rectangle},
wayland::output::Output,
};
use crate::{drawing::*, shell::FullscreenSurface};
pub fn render_output<R>(
output: &Output,
space: &mut Space,
renderer: &mut R,
age: usize,
elements: &[DynamicRenderElements<R>],
log: &slog::Logger,
) -> Result<Option<Vec<Rectangle<i32, Logical>>>, RenderError<R>>
where
R: Renderer + ImportAll +'static,
R::Frame:'static,
R::TextureId:'static,
R::Error:'static,
| scale,
(0, 0),
&[Rectangle::from_loc_and_size(
(0, 0),
mode.size.to_f64().to_logical(scale).to_i32_round(),
)],
log,
)?;
for elem in elements {
let geo = elem.geometry();
let location = geo.loc - output_geo.loc;
let elem_damage = elem.accumulated_damage(None);
elem.draw(
renderer,
frame,
scale,
location,
&[Rectangle::from_loc_and_size((0, 0), geo.size)],
log,
)?;
damage.extend(elem_damage.into_iter().map(|mut rect| {
rect.loc += geo.loc;
rect
}))
}
Ok(Some(damage))
})
.and_then(std::convert::identity)
.map_err(RenderError::<R>::Rendering)
} else {
space.render_output(&mut *renderer, output, age as usize, CLEAR_COLOR, &*elements)
}
}
| {
if let Some(window) = output
.user_data()
.get::<FullscreenSurface>()
.and_then(|f| f.get())
{
let transform = output.current_transform().into();
let mode = output.current_mode().unwrap();
let scale = space.output_scale(output).unwrap();
let output_geo = space
.output_geometry(output)
.unwrap_or_else(|| Rectangle::from_loc_and_size((0, 0), (0, 0)));
renderer
.render(mode.size, transform, |renderer, frame| {
let mut damage = window.accumulated_damage(None);
frame.clear(CLEAR_COLOR, &[Rectangle::from_loc_and_size((0, 0), mode.size)])?;
draw_window(
renderer,
frame,
&window, | identifier_body |
render.rs | use smithay::{
backend::renderer::{Frame, ImportAll, Renderer},
desktop::{
draw_window,
space::{DynamicRenderElements, RenderError, Space},
},
utils::{Logical, Rectangle},
wayland::output::Output,
};
use crate::{drawing::*, shell::FullscreenSurface};
pub fn | <R>(
output: &Output,
space: &mut Space,
renderer: &mut R,
age: usize,
elements: &[DynamicRenderElements<R>],
log: &slog::Logger,
) -> Result<Option<Vec<Rectangle<i32, Logical>>>, RenderError<R>>
where
R: Renderer + ImportAll +'static,
R::Frame:'static,
R::TextureId:'static,
R::Error:'static,
{
if let Some(window) = output
.user_data()
.get::<FullscreenSurface>()
.and_then(|f| f.get())
{
let transform = output.current_transform().into();
let mode = output.current_mode().unwrap();
let scale = space.output_scale(output).unwrap();
let output_geo = space
.output_geometry(output)
.unwrap_or_else(|| Rectangle::from_loc_and_size((0, 0), (0, 0)));
renderer
.render(mode.size, transform, |renderer, frame| {
let mut damage = window.accumulated_damage(None);
frame.clear(CLEAR_COLOR, &[Rectangle::from_loc_and_size((0, 0), mode.size)])?;
draw_window(
renderer,
frame,
&window,
scale,
(0, 0),
&[Rectangle::from_loc_and_size(
(0, 0),
mode.size.to_f64().to_logical(scale).to_i32_round(),
)],
log,
)?;
for elem in elements {
let geo = elem.geometry();
let location = geo.loc - output_geo.loc;
let elem_damage = elem.accumulated_damage(None);
elem.draw(
renderer,
frame,
scale,
location,
&[Rectangle::from_loc_and_size((0, 0), geo.size)],
log,
)?;
damage.extend(elem_damage.into_iter().map(|mut rect| {
rect.loc += geo.loc;
rect
}))
}
Ok(Some(damage))
})
.and_then(std::convert::identity)
.map_err(RenderError::<R>::Rendering)
} else {
space.render_output(&mut *renderer, output, age as usize, CLEAR_COLOR, &*elements)
}
}
| render_output | identifier_name |
render.rs | use smithay::{
backend::renderer::{Frame, ImportAll, Renderer},
desktop::{
draw_window,
space::{DynamicRenderElements, RenderError, Space},
},
utils::{Logical, Rectangle},
wayland::output::Output,
};
use crate::{drawing::*, shell::FullscreenSurface};
pub fn render_output<R>(
output: &Output,
space: &mut Space,
renderer: &mut R,
age: usize,
elements: &[DynamicRenderElements<R>],
log: &slog::Logger,
) -> Result<Option<Vec<Rectangle<i32, Logical>>>, RenderError<R>>
where
R: Renderer + ImportAll +'static,
R::Frame:'static, | {
if let Some(window) = output
.user_data()
.get::<FullscreenSurface>()
.and_then(|f| f.get())
{
let transform = output.current_transform().into();
let mode = output.current_mode().unwrap();
let scale = space.output_scale(output).unwrap();
let output_geo = space
.output_geometry(output)
.unwrap_or_else(|| Rectangle::from_loc_and_size((0, 0), (0, 0)));
renderer
.render(mode.size, transform, |renderer, frame| {
let mut damage = window.accumulated_damage(None);
frame.clear(CLEAR_COLOR, &[Rectangle::from_loc_and_size((0, 0), mode.size)])?;
draw_window(
renderer,
frame,
&window,
scale,
(0, 0),
&[Rectangle::from_loc_and_size(
(0, 0),
mode.size.to_f64().to_logical(scale).to_i32_round(),
)],
log,
)?;
for elem in elements {
let geo = elem.geometry();
let location = geo.loc - output_geo.loc;
let elem_damage = elem.accumulated_damage(None);
elem.draw(
renderer,
frame,
scale,
location,
&[Rectangle::from_loc_and_size((0, 0), geo.size)],
log,
)?;
damage.extend(elem_damage.into_iter().map(|mut rect| {
rect.loc += geo.loc;
rect
}))
}
Ok(Some(damage))
})
.and_then(std::convert::identity)
.map_err(RenderError::<R>::Rendering)
} else {
space.render_output(&mut *renderer, output, age as usize, CLEAR_COLOR, &*elements)
}
} | R::TextureId: 'static,
R::Error: 'static, | random_line_split |
advent20.rs | // advent20.rs
// factorizing
// Used the first crate I found that did what I needed; turned out to be quite slow, oh well
extern crate tcorp_math_mods;
use std::io;
fn main() {
let mut input = String::new();
io::stdin().read_line(&mut input)
.ok()
.expect("Failed to read line");
let presents = input.trim().parse::<u32>().unwrap();
println!("first house: {}", find_lowest_house(presents));
println!("first house part 2: {}", find_lowest_house2(presents));
}
fn find_lowest_house(min_presents: u32) -> u32 {
(1u32..).find(|x| count_presents(*x) >= min_presents).unwrap()
}
fn count_presents(house: u32) -> u32 {
assert!(house > 0);
10 * tcorp_math_mods::factors::factors_for(house).iter().fold(0, |acc, x| acc + x)
}
#[test]
fn | () {
assert_eq!(10, count_presents(1));
assert_eq!(30, count_presents(2));
assert_eq!(40, count_presents(3));
assert_eq!(70, count_presents(4));
assert_eq!(60, count_presents(5));
assert_eq!(120, count_presents(6));
assert_eq!(80, count_presents(7));
assert_eq!(150, count_presents(8));
assert_eq!(130, count_presents(9));
}
#[test]
fn test_find_lowest_house() {
assert_eq!(6, find_lowest_house(100));
assert_eq!(6, find_lowest_house(120));
assert_eq!(8, find_lowest_house(121));
}
// part 2
fn find_lowest_house2(min_presents: u32) -> u32 {
(1u32..).find(|x| count_presents2(*x) >= min_presents).unwrap()
}
fn count_presents2(house: u32) -> u32 {
assert!(house > 0);
let v = tcorp_math_mods::factors::factors_for(house);
// skip the factors that are less than house/50
let mut split_idx = 0;
for x in v.iter() {
if *x * 50 >= house {
break;
}
split_idx += 1;
}
let (_, factors) = v.split_at(split_idx);
11 * factors.iter().fold(0, |acc, x| acc + x)
}
#[test]
fn test_count_presents2() {
assert_eq!(11, count_presents2(1));
assert_eq!(33, count_presents2(2));
assert_eq!(44, count_presents2(3));
assert_eq!(77, count_presents2(4));
assert_eq!(66, count_presents2(5));
assert_eq!(132, count_presents2(6));
assert_eq!(88, count_presents2(7));
assert_eq!(165, count_presents2(8));
assert_eq!(143, count_presents2(9));
assert_eq!(count_presents(50) / 10 * 11, count_presents2(50));
assert_eq!(count_presents(51) / 10 * 11 - 11, count_presents2(51));
}
#[test]
fn test_find_lowest_house2() {
assert_eq!(6, find_lowest_house2(100));
assert_eq!(6, find_lowest_house2(132));
assert_eq!(8, find_lowest_house2(133));
}
| test_count_presents | identifier_name |
advent20.rs | // advent20.rs
// factorizing
// Used the first crate I found that did what I needed; turned out to be quite slow, oh well
extern crate tcorp_math_mods;
use std::io;
fn main() {
let mut input = String::new();
io::stdin().read_line(&mut input)
.ok()
.expect("Failed to read line");
let presents = input.trim().parse::<u32>().unwrap();
println!("first house: {}", find_lowest_house(presents));
println!("first house part 2: {}", find_lowest_house2(presents));
}
fn find_lowest_house(min_presents: u32) -> u32 {
(1u32..).find(|x| count_presents(*x) >= min_presents).unwrap()
}
fn count_presents(house: u32) -> u32 {
assert!(house > 0);
10 * tcorp_math_mods::factors::factors_for(house).iter().fold(0, |acc, x| acc + x)
}
#[test]
fn test_count_presents() {
assert_eq!(10, count_presents(1));
assert_eq!(30, count_presents(2));
assert_eq!(40, count_presents(3));
assert_eq!(70, count_presents(4));
assert_eq!(60, count_presents(5));
assert_eq!(120, count_presents(6));
assert_eq!(80, count_presents(7));
assert_eq!(150, count_presents(8));
assert_eq!(130, count_presents(9));
}
#[test]
fn test_find_lowest_house() {
assert_eq!(6, find_lowest_house(100));
assert_eq!(6, find_lowest_house(120));
assert_eq!(8, find_lowest_house(121));
}
// part 2
fn find_lowest_house2(min_presents: u32) -> u32 |
fn count_presents2(house: u32) -> u32 {
assert!(house > 0);
let v = tcorp_math_mods::factors::factors_for(house);
// skip the factors that are less than house/50
let mut split_idx = 0;
for x in v.iter() {
if *x * 50 >= house {
break;
}
split_idx += 1;
}
let (_, factors) = v.split_at(split_idx);
11 * factors.iter().fold(0, |acc, x| acc + x)
}
#[test]
fn test_count_presents2() {
assert_eq!(11, count_presents2(1));
assert_eq!(33, count_presents2(2));
assert_eq!(44, count_presents2(3));
assert_eq!(77, count_presents2(4));
assert_eq!(66, count_presents2(5));
assert_eq!(132, count_presents2(6));
assert_eq!(88, count_presents2(7));
assert_eq!(165, count_presents2(8));
assert_eq!(143, count_presents2(9));
assert_eq!(count_presents(50) / 10 * 11, count_presents2(50));
assert_eq!(count_presents(51) / 10 * 11 - 11, count_presents2(51));
}
#[test]
fn test_find_lowest_house2() {
assert_eq!(6, find_lowest_house2(100));
assert_eq!(6, find_lowest_house2(132));
assert_eq!(8, find_lowest_house2(133));
}
| {
(1u32..).find(|x| count_presents2(*x) >= min_presents).unwrap()
} | identifier_body |
advent20.rs | // advent20.rs
// factorizing
// Used the first crate I found that did what I needed; turned out to be quite slow, oh well
extern crate tcorp_math_mods;
use std::io;
fn main() {
let mut input = String::new();
io::stdin().read_line(&mut input)
.ok()
.expect("Failed to read line");
let presents = input.trim().parse::<u32>().unwrap();
println!("first house: {}", find_lowest_house(presents));
println!("first house part 2: {}", find_lowest_house2(presents));
}
fn find_lowest_house(min_presents: u32) -> u32 {
(1u32..).find(|x| count_presents(*x) >= min_presents).unwrap()
}
fn count_presents(house: u32) -> u32 {
assert!(house > 0);
10 * tcorp_math_mods::factors::factors_for(house).iter().fold(0, |acc, x| acc + x)
}
#[test]
fn test_count_presents() {
assert_eq!(10, count_presents(1));
assert_eq!(30, count_presents(2));
assert_eq!(40, count_presents(3));
assert_eq!(70, count_presents(4));
assert_eq!(60, count_presents(5));
assert_eq!(120, count_presents(6));
assert_eq!(80, count_presents(7));
assert_eq!(150, count_presents(8));
assert_eq!(130, count_presents(9));
}
#[test]
fn test_find_lowest_house() {
assert_eq!(6, find_lowest_house(100));
assert_eq!(6, find_lowest_house(120));
assert_eq!(8, find_lowest_house(121));
}
// part 2
fn find_lowest_house2(min_presents: u32) -> u32 {
(1u32..).find(|x| count_presents2(*x) >= min_presents).unwrap()
}
fn count_presents2(house: u32) -> u32 {
assert!(house > 0);
let v = tcorp_math_mods::factors::factors_for(house);
// skip the factors that are less than house/50
let mut split_idx = 0;
for x in v.iter() {
if *x * 50 >= house |
split_idx += 1;
}
let (_, factors) = v.split_at(split_idx);
11 * factors.iter().fold(0, |acc, x| acc + x)
}
#[test]
fn test_count_presents2() {
assert_eq!(11, count_presents2(1));
assert_eq!(33, count_presents2(2));
assert_eq!(44, count_presents2(3));
assert_eq!(77, count_presents2(4));
assert_eq!(66, count_presents2(5));
assert_eq!(132, count_presents2(6));
assert_eq!(88, count_presents2(7));
assert_eq!(165, count_presents2(8));
assert_eq!(143, count_presents2(9));
assert_eq!(count_presents(50) / 10 * 11, count_presents2(50));
assert_eq!(count_presents(51) / 10 * 11 - 11, count_presents2(51));
}
#[test]
fn test_find_lowest_house2() {
assert_eq!(6, find_lowest_house2(100));
assert_eq!(6, find_lowest_house2(132));
assert_eq!(8, find_lowest_house2(133));
}
| {
break;
} | conditional_block |
advent20.rs | // advent20.rs
// factorizing
// Used the first crate I found that did what I needed; turned out to be quite slow, oh well
extern crate tcorp_math_mods;
use std::io;
fn main() {
let mut input = String::new();
io::stdin().read_line(&mut input)
.ok()
.expect("Failed to read line");
let presents = input.trim().parse::<u32>().unwrap();
println!("first house: {}", find_lowest_house(presents));
println!("first house part 2: {}", find_lowest_house2(presents));
}
fn find_lowest_house(min_presents: u32) -> u32 {
(1u32..).find(|x| count_presents(*x) >= min_presents).unwrap()
}
fn count_presents(house: u32) -> u32 {
assert!(house > 0);
10 * tcorp_math_mods::factors::factors_for(house).iter().fold(0, |acc, x| acc + x)
}
#[test]
fn test_count_presents() {
assert_eq!(10, count_presents(1));
assert_eq!(30, count_presents(2));
assert_eq!(40, count_presents(3));
assert_eq!(70, count_presents(4));
assert_eq!(60, count_presents(5));
assert_eq!(120, count_presents(6));
assert_eq!(80, count_presents(7));
assert_eq!(150, count_presents(8));
assert_eq!(130, count_presents(9));
}
#[test]
fn test_find_lowest_house() {
assert_eq!(6, find_lowest_house(100));
assert_eq!(6, find_lowest_house(120)); | fn find_lowest_house2(min_presents: u32) -> u32 {
(1u32..).find(|x| count_presents2(*x) >= min_presents).unwrap()
}
fn count_presents2(house: u32) -> u32 {
assert!(house > 0);
let v = tcorp_math_mods::factors::factors_for(house);
// skip the factors that are less than house/50
let mut split_idx = 0;
for x in v.iter() {
if *x * 50 >= house {
break;
}
split_idx += 1;
}
let (_, factors) = v.split_at(split_idx);
11 * factors.iter().fold(0, |acc, x| acc + x)
}
#[test]
fn test_count_presents2() {
assert_eq!(11, count_presents2(1));
assert_eq!(33, count_presents2(2));
assert_eq!(44, count_presents2(3));
assert_eq!(77, count_presents2(4));
assert_eq!(66, count_presents2(5));
assert_eq!(132, count_presents2(6));
assert_eq!(88, count_presents2(7));
assert_eq!(165, count_presents2(8));
assert_eq!(143, count_presents2(9));
assert_eq!(count_presents(50) / 10 * 11, count_presents2(50));
assert_eq!(count_presents(51) / 10 * 11 - 11, count_presents2(51));
}
#[test]
fn test_find_lowest_house2() {
assert_eq!(6, find_lowest_house2(100));
assert_eq!(6, find_lowest_house2(132));
assert_eq!(8, find_lowest_house2(133));
} | assert_eq!(8, find_lowest_house(121));
}
// part 2 | random_line_split |
transform.rs | // Copyright (C) 2011 The Android Open Source Project
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma version(1)
#pragma rs java_package_name(com.android.modelviewer)
#include "scenegraph_objects.rsh"
rs_script gTransformScript;
typedef struct {
int changed;
rs_matrix4x4 *mat;
} ParentData;
//#define DEBUG_TRANSFORMS
static void debugTransform(SgTransform *data, const ParentData *parent) {
rsDebug("****** <Transform> ******", (int)data);
printName(data->name);
rsDebug("isDirty", data->isDirty);
rsDebug("parent", (int)parent);
rsDebug("child ", rsIsObject(data->children));
// Refresh matrices if dirty
if (data->isDirty && rsIsObject(data->components)) {
uint32_t numComponenets = rsAllocationGetDimX(data->components);
for (int i = 0; i < numComponenets; i ++) {
const SgTransformComponent *comp = NULL;
comp = (const SgTransformComponent *)rsGetElementAt(data->components, i);
if (rsIsObject(comp->name)) {
rsDebug((const char*)rsGetElementAt(comp->name, 0), comp->value);
rsDebug("Type", comp->type);
} else {
rsDebug("no name", comp->value);
rsDebug("Type", comp->type);
}
}
}
rsDebug("timestamp", data->timestamp);
rsDebug("****** </Transform> ******", (int)data);
}
static void appendTransformation(int type, float4 data, rs_matrix4x4 *mat) {
rs_matrix4x4 temp;
switch (type) {
case TRANSFORM_TRANSLATE:
rsMatrixLoadTranslate(&temp, data.x, data.y, data.z);
break;
case TRANSFORM_ROTATE:
rsMatrixLoadRotate(&temp, data.w, data.x, data.y, data.z);
break;
case TRANSFORM_SCALE:
rsMatrixLoadScale(&temp, data.x, data.y, data.z);
break;
}
rsMatrixMultiply(mat, &temp);
}
void root(const rs_allocation *v_in, rs_allocation *v_out, const void *usrData) {
SgTransform *data = (SgTransform *)rsGetElementAt(*v_in, 0);
const ParentData *parent = (const ParentData *)usrData;
#ifdef DEBUG_TRANSFORMS
debugTransform(data, parent);
#endif //DEBUG_TRANSFORMS
rs_matrix4x4 *localMat = &data->localMat;
rs_matrix4x4 *globalMat = &data->globalMat;
// Refresh matrices if dirty
if (data->isDirty && rsIsObject(data->components)) {
bool resetLocal = false;
uint32_t numComponenets = rsAllocationGetDimX(data->components);
for (int i = 0; i < numComponenets; i ++) {
if (!resetLocal) {
// Reset our local matrix only for component transforms
rsMatrixLoadIdentity(localMat);
resetLocal = true;
}
const SgTransformComponent *comp = NULL;
comp = (const SgTransformComponent *)rsGetElementAt(data->components, i);
appendTransformation(comp->type, comp->value, localMat);
}
}
if (parent) {
data->isDirty = (parent->changed || data->isDirty)? 1 : 0;
if (data->isDirty) {
rsMatrixLoad(globalMat, parent->mat);
rsMatrixMultiply(globalMat, localMat);
}
} else if (data->isDirty) {
rsMatrixLoad(globalMat, localMat);
}
ParentData toChild;
toChild.changed = 0;
toChild.mat = globalMat;
if (data->isDirty) |
if (rsIsObject(data->children)) {
rs_allocation nullAlloc;
rsForEach(gTransformScript, data->children, nullAlloc, &toChild, sizeof(toChild));
}
data->isDirty = 0;
}
| {
toChild.changed = 1;
data->timestamp ++;
} | conditional_block |
transform.rs | // Copyright (C) 2011 The Android Open Source Project
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma version(1)
#pragma rs java_package_name(com.android.modelviewer)
#include "scenegraph_objects.rsh"
rs_script gTransformScript;
typedef struct {
int changed;
rs_matrix4x4 *mat;
} ParentData;
//#define DEBUG_TRANSFORMS
static void debugTransform(SgTransform *data, const ParentData *parent) {
rsDebug("****** <Transform> ******", (int)data);
printName(data->name);
rsDebug("isDirty", data->isDirty);
rsDebug("parent", (int)parent);
rsDebug("child ", rsIsObject(data->children));
// Refresh matrices if dirty
if (data->isDirty && rsIsObject(data->components)) {
uint32_t numComponenets = rsAllocationGetDimX(data->components);
for (int i = 0; i < numComponenets; i ++) {
const SgTransformComponent *comp = NULL;
comp = (const SgTransformComponent *)rsGetElementAt(data->components, i);
if (rsIsObject(comp->name)) {
rsDebug((const char*)rsGetElementAt(comp->name, 0), comp->value);
rsDebug("Type", comp->type);
} else {
rsDebug("no name", comp->value);
rsDebug("Type", comp->type);
}
}
}
rsDebug("timestamp", data->timestamp);
rsDebug("****** </Transform> ******", (int)data);
}
static void appendTransformation(int type, float4 data, rs_matrix4x4 *mat) {
rs_matrix4x4 temp;
switch (type) { | rsMatrixLoadRotate(&temp, data.w, data.x, data.y, data.z);
break;
case TRANSFORM_SCALE:
rsMatrixLoadScale(&temp, data.x, data.y, data.z);
break;
}
rsMatrixMultiply(mat, &temp);
}
void root(const rs_allocation *v_in, rs_allocation *v_out, const void *usrData) {
SgTransform *data = (SgTransform *)rsGetElementAt(*v_in, 0);
const ParentData *parent = (const ParentData *)usrData;
#ifdef DEBUG_TRANSFORMS
debugTransform(data, parent);
#endif //DEBUG_TRANSFORMS
rs_matrix4x4 *localMat = &data->localMat;
rs_matrix4x4 *globalMat = &data->globalMat;
// Refresh matrices if dirty
if (data->isDirty && rsIsObject(data->components)) {
bool resetLocal = false;
uint32_t numComponenets = rsAllocationGetDimX(data->components);
for (int i = 0; i < numComponenets; i ++) {
if (!resetLocal) {
// Reset our local matrix only for component transforms
rsMatrixLoadIdentity(localMat);
resetLocal = true;
}
const SgTransformComponent *comp = NULL;
comp = (const SgTransformComponent *)rsGetElementAt(data->components, i);
appendTransformation(comp->type, comp->value, localMat);
}
}
if (parent) {
data->isDirty = (parent->changed || data->isDirty)? 1 : 0;
if (data->isDirty) {
rsMatrixLoad(globalMat, parent->mat);
rsMatrixMultiply(globalMat, localMat);
}
} else if (data->isDirty) {
rsMatrixLoad(globalMat, localMat);
}
ParentData toChild;
toChild.changed = 0;
toChild.mat = globalMat;
if (data->isDirty) {
toChild.changed = 1;
data->timestamp ++;
}
if (rsIsObject(data->children)) {
rs_allocation nullAlloc;
rsForEach(gTransformScript, data->children, nullAlloc, &toChild, sizeof(toChild));
}
data->isDirty = 0;
} | case TRANSFORM_TRANSLATE:
rsMatrixLoadTranslate(&temp, data.x, data.y, data.z);
break;
case TRANSFORM_ROTATE: | random_line_split |
regions-infer-invariance-due-to-mutability-4.rs | // Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
struct Invariant<'a> {
f: Box<for<'b> FnOnce() -> &'b mut &'a isize +'static>,
}
fn to_same_lifetime<'r>(bi: Invariant<'r>) {
let bj: Invariant<'r> = bi;
}
fn to_longer_lifetime<'r>(bi: Invariant<'r>) -> Invariant<'static> {
bi //~ ERROR mismatched types
}
| fn main() {
} | random_line_split |
|
regions-infer-invariance-due-to-mutability-4.rs | // Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
struct Invariant<'a> {
f: Box<for<'b> FnOnce() -> &'b mut &'a isize +'static>,
}
fn to_same_lifetime<'r>(bi: Invariant<'r>) {
let bj: Invariant<'r> = bi;
}
fn to_longer_lifetime<'r>(bi: Invariant<'r>) -> Invariant<'static> |
fn main() {
}
| {
bi //~ ERROR mismatched types
} | identifier_body |
regions-infer-invariance-due-to-mutability-4.rs | // Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
struct Invariant<'a> {
f: Box<for<'b> FnOnce() -> &'b mut &'a isize +'static>,
}
fn to_same_lifetime<'r>(bi: Invariant<'r>) {
let bj: Invariant<'r> = bi;
}
fn | <'r>(bi: Invariant<'r>) -> Invariant<'static> {
bi //~ ERROR mismatched types
}
fn main() {
}
| to_longer_lifetime | identifier_name |
lint.rs | use conf;
use itertools::Itertools;
use lisp::LispExpr;
use lisp;
use rusqlite as sql;
use rustc::hir::*;
use rustc::hir::map::Node;
use rustc::lint::{LateContext, LintArray, LintContext, LintPass, LateLintPass};
use rustc::ty::TypeVariants;
use std::borrow::Cow;
use std::io::{Read, Write};
use std::process::{Command, Stdio};
use std::time::Duration;
use std;
use syntax::ast::MetaItemKind;
use syntax::ast::{Attribute, FloatTy};
use wait_timeout::ChildExt;
#[derive(Debug, Default)]
pub struct Herbie {
conf: Option<conf::Conf>,
initialized: bool,
subs: Vec<(LispExpr, LispExpr)>,
}
#[derive(Debug)]
pub enum InitError {
Conf {
error: conf::ConfError,
},
SQL {
error: sql::Error,
},
}
impl From<conf::ConfError> for InitError {
fn from(err: conf::ConfError) -> InitError {
InitError::Conf { error: err }
}
}
impl From<sql::Error> for InitError {
fn from(err: sql::Error) -> InitError {
InitError::SQL { error: err }
}
}
impl std::fmt::Display for InitError {
fn fmt(&self, f: &mut std::fmt::Formatter) -> Result<(), std::fmt::Error> {
match *self {
InitError::Conf { ref error } => write!(f, "Configuration error: {}", error),
InitError::SQL { ref error } => write!(f, "Got SQL error: {}", error),
}
}
}
impl Herbie {
pub fn new() -> Herbie {
Herbie::default()
}
pub fn init(&mut self) -> Result<(), InitError> {
if self.initialized {
return Ok(())
}
self.initialized = true;
let conf = try!(conf::read_conf());
let connection = try!(sql::Connection::open_with_flags(
conf.db_path.as_ref(), sql::SQLITE_OPEN_READ_ONLY
));
let mut query = try!(connection.prepare("SELECT * FROM HerbieResults"));
self.subs = try!(query.query(&[])).filter_map(|row| {
match row {
Ok(row) => {
let cmdin : String = row.get(1);
let cmdout : String = row.get(2);
// row.get(3) is opts ↔ Herbies options
let errin = row.get_checked(4).unwrap_or(0.);
let errout = row.get_checked(5).unwrap_or(0.);
if cmdin == cmdout || errin <= errout {
return None;
}
let mut parser = lisp::Parser::new();
match parser.parse(&cmdin) {
Ok(cmdin) => {
match parser.parse(&cmdout) {
Ok(cmdout) => {
Some((cmdin, cmdout))
}
Err(..) => None,
}
}
Err(..) => None,
}
}
Err(..) => None,
}
}).collect();
self.conf = Some(conf);
Ok(())
}
}
declare_lint!(pub HERBIE, Warn,
"checks for numerical instability");
impl LintPass for Herbie {
fn get_lints(&self) -> LintArray {
lint_array!(HERBIE)
}
}
impl LateLintPass for Herbie {
fn check_expr(&mut self, cx: &LateContext, expr: &Expr) {
fn is | ttr: &Attribute) -> bool {
if let MetaItemKind::Word(ref word) = attr.node.value.node {
word == &"herbie_ignore"
}
else {
false
}
}
let attrs = match cx.tcx.map.find(cx.tcx.map.get_parent(expr.id)) {
Some(Node::NodeItem(item)) => &item.attrs,
Some(Node::NodeTraitItem(item)) => &item.attrs,
Some(Node::NodeImplItem(item)) => &item.attrs,
_ => panic!("In herbie-lint: how did I get there?"),
};
if attrs.iter().any(is_herbie_ignore) {
return;
}
let ty = cx.tcx.expr_ty(expr);
if ty.sty!= TypeVariants::TyFloat(FloatTy::F64) {
return;
}
if let Err(err) = self.init() {
cx.span_lint_note(
HERBIE,
cx.krate.span,
"Could not initialize Herbie-Lint",
cx.krate.span,
&err.to_string()
);
return;
}
let mut got_match = false;
for &(ref cmdin, ref cmdout) in &self.subs {
if let Some(bindings) = LispExpr::match_expr(expr, cmdin) {
report(cx, expr, cmdout, &bindings);
got_match = true;
}
}
let conf = self.conf.as_ref().expect("Configuration should be read by now");
if!got_match && conf.use_herbie!= conf::UseHerbieConf::No {
if let Err(err) = try_with_herbie(cx, expr, &conf) {
cx.span_lint(HERBIE, expr.span, &err);
}
}
}
}
fn try_with_herbie(cx: &LateContext, expr: &Expr, conf: &conf::Conf) -> Result<(), Cow<'static, str>> {
let (lisp_expr, nb_ids, bindings) = match LispExpr::from_expr(expr) {
Some(r) => r,
// not an error, the expression might for example contain a function unknown to Herbie
None => return Ok(()),
};
if lisp_expr.depth() <= 2 {
return Ok(());
}
let seed: &str = &conf.herbie_seed;
let mut command = Command::new("herbie-inout");
let command = command
.arg("--seed").arg(seed)
.arg("-o").arg("rules:numerics")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
;
let mut child = match command.spawn() {
Ok(child) => child,
Err(err) => {
return if conf.use_herbie == conf::UseHerbieConf::Yes {
// TODO: wiki
Err(format!("Could not call Herbie: {}", err).into())
}
else {
Ok(())
};
}
};
// TODO: link to wiki about Herbie.toml
cx.sess().diagnostic().span_note_without_error(
expr.span,
"Calling Herbie on the following expression, it might take a while"
);
let params = (0..nb_ids).map(|id| format!("herbie{}", id)).join(" ");
let cmdin = lisp_expr.to_lisp("herbie");
let lisp_expr = format!("(lambda ({}) {})\n", params, cmdin);
let lisp_expr = lisp_expr.as_bytes();
child.stdin
.as_mut().expect("Herbie-inout's stdin not captured")
.write(lisp_expr).expect("Could not write on herbie-inout's stdin")
;
match conf.timeout {
Some(timeout) => {
match child.wait_timeout(Duration::from_secs(timeout as u64)) {
Ok(Some(status)) if status.success() => (),
Ok(Some(status)) => {
return Err(format!("herbie-inout did not return successfully: status={}", status).into());
}
Ok(None) => {
cx.sess().diagnostic().span_note_without_error(expr.span, "Herbie timed out");
return Ok(());
}
Err(err) => {
return Err(format!("herbie-inout did not return successfully: {}", err).into());
}
}
}
None => {
match child.wait() {
Ok(status) if status.success() => (),
Ok(status) => {
return Err(format!("herbie-inout did not return successfully: status={}", status).into());
}
Err(err) => {
return Err(format!("herbie-inout did not return successfully: {}", err).into());
}
}
}
}
let mut stdout = if let Some(output) = child.stdout {
output
}
else {
return Err("cannot capture herbie-inout output".into());
};
let mut output = String::new();
if let Err(err) = stdout.read_to_string(&mut output) {
return Err(format!("cannot read output: {}", err).into());
}
let mut output = output.lines();
let parse_error = |s: Option<&str>| -> Option<f64> {
match s {
Some(s) => {
match s.split(' ').last().map(str::parse::<f64>) {
Some(Ok(f)) => Some(f),
_ => None,
}
}
_ => None,
}
};
let (errin, errout, cmdout) = match (parse_error(output.next()), parse_error(output.next()), output.next()) {
(Some(errin), Some(errout), Some(cmdout)) => {
(errin, errout, cmdout)
}
_ => {
return Err("Could not parse herbie-inout output".into())
}
};
if errin <= errout {
return Ok(());
}
let mut parser = lisp::Parser::new();
let cmdout = match parser.parse(cmdout) {
Ok(cmdout) => cmdout,
_ => return Err("Could not understand herbie-inout cmdout".into()),
};
report(cx, expr, &cmdout, &bindings);
save(conf, &cmdin, &cmdout, "", errin, errout)
.map_err(|err| format!("Could not save database, got SQL error {}", err).into())
}
fn report(cx: &LateContext, expr: &Expr, cmdout: &LispExpr, bindings: &lisp::MatchBindings) {
cx.struct_span_lint(HERBIE, expr.span, "Numerically unstable expression")
.span_suggestion(expr.span, "Try this", cmdout.to_rust(cx, &bindings))
.emit();
}
fn save(
conf: &conf::Conf,
cmdin: &str, cmdout: &LispExpr,
seed: &str,
errin: f64, errout: f64
) -> Result<(), sql::Error> {
let connection = try!(sql::Connection::open_with_flags(
conf.db_path.as_ref(), sql::SQLITE_OPEN_READ_WRITE
));
try!(connection.execute("INSERT INTO HerbieResults (cmdin, cmdout, opts, errin, errout)
VALUES ($1, $2, $3, $4, $5)",
&[&cmdin, &cmdout.to_lisp("herbie"), &seed, &errin, &errout]));
Ok(())
}
| _herbie_ignore(a | identifier_name |
lint.rs | use conf;
use itertools::Itertools;
use lisp::LispExpr;
use lisp;
use rusqlite as sql;
use rustc::hir::*;
use rustc::hir::map::Node;
use rustc::lint::{LateContext, LintArray, LintContext, LintPass, LateLintPass};
use rustc::ty::TypeVariants;
use std::borrow::Cow;
use std::io::{Read, Write};
use std::process::{Command, Stdio};
use std::time::Duration;
use std;
use syntax::ast::MetaItemKind;
use syntax::ast::{Attribute, FloatTy};
use wait_timeout::ChildExt;
#[derive(Debug, Default)]
pub struct Herbie {
conf: Option<conf::Conf>,
initialized: bool,
subs: Vec<(LispExpr, LispExpr)>,
}
#[derive(Debug)]
pub enum InitError {
Conf {
error: conf::ConfError,
},
SQL {
error: sql::Error,
},
}
impl From<conf::ConfError> for InitError {
fn from(err: conf::ConfError) -> InitError {
InitError::Conf { error: err }
}
}
impl From<sql::Error> for InitError {
fn from(err: sql::Error) -> InitError {
InitError::SQL { error: err }
}
}
impl std::fmt::Display for InitError {
fn fmt(&self, f: &mut std::fmt::Formatter) -> Result<(), std::fmt::Error> {
match *self {
InitError::Conf { ref error } => write!(f, "Configuration error: {}", error),
InitError::SQL { ref error } => write!(f, "Got SQL error: {}", error),
}
}
}
impl Herbie {
pub fn new() -> Herbie {
Herbie::default()
}
pub fn init(&mut self) -> Result<(), InitError> {
if self.initialized {
return Ok(())
}
self.initialized = true;
let conf = try!(conf::read_conf());
let connection = try!(sql::Connection::open_with_flags(
conf.db_path.as_ref(), sql::SQLITE_OPEN_READ_ONLY
));
let mut query = try!(connection.prepare("SELECT * FROM HerbieResults"));
self.subs = try!(query.query(&[])).filter_map(|row| {
match row {
Ok(row) => {
let cmdin : String = row.get(1);
let cmdout : String = row.get(2);
// row.get(3) is opts ↔ Herbies options
let errin = row.get_checked(4).unwrap_or(0.);
let errout = row.get_checked(5).unwrap_or(0.);
if cmdin == cmdout || errin <= errout {
return None;
}
let mut parser = lisp::Parser::new();
match parser.parse(&cmdin) {
Ok(cmdin) => {
match parser.parse(&cmdout) {
Ok(cmdout) => {
Some((cmdin, cmdout))
}
Err(..) => None,
}
}
Err(..) => None,
}
}
Err(..) => None,
}
}).collect();
self.conf = Some(conf);
Ok(())
}
}
declare_lint!(pub HERBIE, Warn,
"checks for numerical instability");
impl LintPass for Herbie {
fn get_lints(&self) -> LintArray {
lint_array!(HERBIE)
}
}
impl LateLintPass for Herbie {
fn check_expr(&mut self, cx: &LateContext, expr: &Expr) {
fn is_herbie_ignore(attr: &Attribute) -> bool {
if let MetaItemKind::Word(ref word) = attr.node.value.node {
word == &"herbie_ignore"
}
else {
false
}
}
let attrs = match cx.tcx.map.find(cx.tcx.map.get_parent(expr.id)) {
Some(Node::NodeItem(item)) => &item.attrs,
Some(Node::NodeTraitItem(item)) => &item.attrs,
Some(Node::NodeImplItem(item)) => &item.attrs,
_ => panic!("In herbie-lint: how did I get there?"),
};
if attrs.iter().any(is_herbie_ignore) {
return;
}
let ty = cx.tcx.expr_ty(expr);
if ty.sty!= TypeVariants::TyFloat(FloatTy::F64) {
return;
}
if let Err(err) = self.init() {
cx.span_lint_note(
HERBIE,
cx.krate.span,
"Could not initialize Herbie-Lint",
cx.krate.span,
&err.to_string()
);
return;
}
let mut got_match = false;
for &(ref cmdin, ref cmdout) in &self.subs {
if let Some(bindings) = LispExpr::match_expr(expr, cmdin) {
report(cx, expr, cmdout, &bindings);
got_match = true;
}
}
let conf = self.conf.as_ref().expect("Configuration should be read by now");
if!got_match && conf.use_herbie!= conf::UseHerbieConf::No {
if let Err(err) = try_with_herbie(cx, expr, &conf) {
cx.span_lint(HERBIE, expr.span, &err);
}
}
}
}
fn try_with_herbie(cx: &LateContext, expr: &Expr, conf: &conf::Conf) -> Result<(), Cow<'static, str>> {
let (lisp_expr, nb_ids, bindings) = match LispExpr::from_expr(expr) {
Some(r) => r,
// not an error, the expression might for example contain a function unknown to Herbie
None => return Ok(()),
};
if lisp_expr.depth() <= 2 {
return Ok(());
}
let seed: &str = &conf.herbie_seed;
let mut command = Command::new("herbie-inout");
let command = command
.arg("--seed").arg(seed)
.arg("-o").arg("rules:numerics")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
;
let mut child = match command.spawn() {
Ok(child) => child,
Err(err) => {
return if conf.use_herbie == conf::UseHerbieConf::Yes {
// TODO: wiki
Err(format!("Could not call Herbie: {}", err).into())
}
else {
Ok(())
};
}
};
// TODO: link to wiki about Herbie.toml
cx.sess().diagnostic().span_note_without_error(
expr.span,
"Calling Herbie on the following expression, it might take a while"
);
let params = (0..nb_ids).map(|id| format!("herbie{}", id)).join(" ");
let cmdin = lisp_expr.to_lisp("herbie");
let lisp_expr = format!("(lambda ({}) {})\n", params, cmdin);
let lisp_expr = lisp_expr.as_bytes();
child.stdin
.as_mut().expect("Herbie-inout's stdin not captured")
.write(lisp_expr).expect("Could not write on herbie-inout's stdin")
;
match conf.timeout {
Some(timeout) => {
match child.wait_timeout(Duration::from_secs(timeout as u64)) {
Ok(Some(status)) if status.success() => (),
Ok(Some(status)) => {
return Err(format!("herbie-inout did not return successfully: status={}", status).into());
}
Ok(None) => {
cx.sess().diagnostic().span_note_without_error(expr.span, "Herbie timed out");
return Ok(());
}
Err(err) => {
return Err(format!("herbie-inout did not return successfully: {}", err).into());
}
}
}
None => {
match child.wait() {
Ok(status) if status.success() => (),
Ok(status) => {
return Err(format!("herbie-inout did not return successfully: status={}", status).into());
}
Err(err) => {
return Err(format!("herbie-inout did not return successfully: {}", err).into());
}
}
}
}
let mut stdout = if let Some(output) = child.stdout {
output
}
else {
return Err("cannot capture herbie-inout output".into());
};
let mut output = String::new(); | }
let mut output = output.lines();
let parse_error = |s: Option<&str>| -> Option<f64> {
match s {
Some(s) => {
match s.split(' ').last().map(str::parse::<f64>) {
Some(Ok(f)) => Some(f),
_ => None,
}
}
_ => None,
}
};
let (errin, errout, cmdout) = match (parse_error(output.next()), parse_error(output.next()), output.next()) {
(Some(errin), Some(errout), Some(cmdout)) => {
(errin, errout, cmdout)
}
_ => {
return Err("Could not parse herbie-inout output".into())
}
};
if errin <= errout {
return Ok(());
}
let mut parser = lisp::Parser::new();
let cmdout = match parser.parse(cmdout) {
Ok(cmdout) => cmdout,
_ => return Err("Could not understand herbie-inout cmdout".into()),
};
report(cx, expr, &cmdout, &bindings);
save(conf, &cmdin, &cmdout, "", errin, errout)
.map_err(|err| format!("Could not save database, got SQL error {}", err).into())
}
fn report(cx: &LateContext, expr: &Expr, cmdout: &LispExpr, bindings: &lisp::MatchBindings) {
cx.struct_span_lint(HERBIE, expr.span, "Numerically unstable expression")
.span_suggestion(expr.span, "Try this", cmdout.to_rust(cx, &bindings))
.emit();
}
fn save(
conf: &conf::Conf,
cmdin: &str, cmdout: &LispExpr,
seed: &str,
errin: f64, errout: f64
) -> Result<(), sql::Error> {
let connection = try!(sql::Connection::open_with_flags(
conf.db_path.as_ref(), sql::SQLITE_OPEN_READ_WRITE
));
try!(connection.execute("INSERT INTO HerbieResults (cmdin, cmdout, opts, errin, errout)
VALUES ($1, $2, $3, $4, $5)",
&[&cmdin, &cmdout.to_lisp("herbie"), &seed, &errin, &errout]));
Ok(())
} | if let Err(err) = stdout.read_to_string(&mut output) {
return Err(format!("cannot read output: {}", err).into()); | random_line_split |
lint.rs | use conf;
use itertools::Itertools;
use lisp::LispExpr;
use lisp;
use rusqlite as sql;
use rustc::hir::*;
use rustc::hir::map::Node;
use rustc::lint::{LateContext, LintArray, LintContext, LintPass, LateLintPass};
use rustc::ty::TypeVariants;
use std::borrow::Cow;
use std::io::{Read, Write};
use std::process::{Command, Stdio};
use std::time::Duration;
use std;
use syntax::ast::MetaItemKind;
use syntax::ast::{Attribute, FloatTy};
use wait_timeout::ChildExt;
#[derive(Debug, Default)]
pub struct Herbie {
conf: Option<conf::Conf>,
initialized: bool,
subs: Vec<(LispExpr, LispExpr)>,
}
#[derive(Debug)]
pub enum InitError {
Conf {
error: conf::ConfError,
},
SQL {
error: sql::Error,
},
}
impl From<conf::ConfError> for InitError {
fn from(err: conf::ConfError) -> InitError {
InitError::Conf { error: err }
}
}
impl From<sql::Error> for InitError {
fn from(err: sql::Error) -> InitError {
InitError::SQL { error: err }
}
}
impl std::fmt::Display for InitError {
fn fmt(&self, f: &mut std::fmt::Formatter) -> Result<(), std::fmt::Error> |
}
impl Herbie {
pub fn new() -> Herbie {
Herbie::default()
}
pub fn init(&mut self) -> Result<(), InitError> {
if self.initialized {
return Ok(())
}
self.initialized = true;
let conf = try!(conf::read_conf());
let connection = try!(sql::Connection::open_with_flags(
conf.db_path.as_ref(), sql::SQLITE_OPEN_READ_ONLY
));
let mut query = try!(connection.prepare("SELECT * FROM HerbieResults"));
self.subs = try!(query.query(&[])).filter_map(|row| {
match row {
Ok(row) => {
let cmdin : String = row.get(1);
let cmdout : String = row.get(2);
// row.get(3) is opts ↔ Herbies options
let errin = row.get_checked(4).unwrap_or(0.);
let errout = row.get_checked(5).unwrap_or(0.);
if cmdin == cmdout || errin <= errout {
return None;
}
let mut parser = lisp::Parser::new();
match parser.parse(&cmdin) {
Ok(cmdin) => {
match parser.parse(&cmdout) {
Ok(cmdout) => {
Some((cmdin, cmdout))
}
Err(..) => None,
}
}
Err(..) => None,
}
}
Err(..) => None,
}
}).collect();
self.conf = Some(conf);
Ok(())
}
}
declare_lint!(pub HERBIE, Warn,
"checks for numerical instability");
impl LintPass for Herbie {
fn get_lints(&self) -> LintArray {
lint_array!(HERBIE)
}
}
impl LateLintPass for Herbie {
fn check_expr(&mut self, cx: &LateContext, expr: &Expr) {
fn is_herbie_ignore(attr: &Attribute) -> bool {
if let MetaItemKind::Word(ref word) = attr.node.value.node {
word == &"herbie_ignore"
}
else {
false
}
}
let attrs = match cx.tcx.map.find(cx.tcx.map.get_parent(expr.id)) {
Some(Node::NodeItem(item)) => &item.attrs,
Some(Node::NodeTraitItem(item)) => &item.attrs,
Some(Node::NodeImplItem(item)) => &item.attrs,
_ => panic!("In herbie-lint: how did I get there?"),
};
if attrs.iter().any(is_herbie_ignore) {
return;
}
let ty = cx.tcx.expr_ty(expr);
if ty.sty!= TypeVariants::TyFloat(FloatTy::F64) {
return;
}
if let Err(err) = self.init() {
cx.span_lint_note(
HERBIE,
cx.krate.span,
"Could not initialize Herbie-Lint",
cx.krate.span,
&err.to_string()
);
return;
}
let mut got_match = false;
for &(ref cmdin, ref cmdout) in &self.subs {
if let Some(bindings) = LispExpr::match_expr(expr, cmdin) {
report(cx, expr, cmdout, &bindings);
got_match = true;
}
}
let conf = self.conf.as_ref().expect("Configuration should be read by now");
if!got_match && conf.use_herbie!= conf::UseHerbieConf::No {
if let Err(err) = try_with_herbie(cx, expr, &conf) {
cx.span_lint(HERBIE, expr.span, &err);
}
}
}
}
fn try_with_herbie(cx: &LateContext, expr: &Expr, conf: &conf::Conf) -> Result<(), Cow<'static, str>> {
let (lisp_expr, nb_ids, bindings) = match LispExpr::from_expr(expr) {
Some(r) => r,
// not an error, the expression might for example contain a function unknown to Herbie
None => return Ok(()),
};
if lisp_expr.depth() <= 2 {
return Ok(());
}
let seed: &str = &conf.herbie_seed;
let mut command = Command::new("herbie-inout");
let command = command
.arg("--seed").arg(seed)
.arg("-o").arg("rules:numerics")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
;
let mut child = match command.spawn() {
Ok(child) => child,
Err(err) => {
return if conf.use_herbie == conf::UseHerbieConf::Yes {
// TODO: wiki
Err(format!("Could not call Herbie: {}", err).into())
}
else {
Ok(())
};
}
};
// TODO: link to wiki about Herbie.toml
cx.sess().diagnostic().span_note_without_error(
expr.span,
"Calling Herbie on the following expression, it might take a while"
);
let params = (0..nb_ids).map(|id| format!("herbie{}", id)).join(" ");
let cmdin = lisp_expr.to_lisp("herbie");
let lisp_expr = format!("(lambda ({}) {})\n", params, cmdin);
let lisp_expr = lisp_expr.as_bytes();
child.stdin
.as_mut().expect("Herbie-inout's stdin not captured")
.write(lisp_expr).expect("Could not write on herbie-inout's stdin")
;
match conf.timeout {
Some(timeout) => {
match child.wait_timeout(Duration::from_secs(timeout as u64)) {
Ok(Some(status)) if status.success() => (),
Ok(Some(status)) => {
return Err(format!("herbie-inout did not return successfully: status={}", status).into());
}
Ok(None) => {
cx.sess().diagnostic().span_note_without_error(expr.span, "Herbie timed out");
return Ok(());
}
Err(err) => {
return Err(format!("herbie-inout did not return successfully: {}", err).into());
}
}
}
None => {
match child.wait() {
Ok(status) if status.success() => (),
Ok(status) => {
return Err(format!("herbie-inout did not return successfully: status={}", status).into());
}
Err(err) => {
return Err(format!("herbie-inout did not return successfully: {}", err).into());
}
}
}
}
let mut stdout = if let Some(output) = child.stdout {
output
}
else {
return Err("cannot capture herbie-inout output".into());
};
let mut output = String::new();
if let Err(err) = stdout.read_to_string(&mut output) {
return Err(format!("cannot read output: {}", err).into());
}
let mut output = output.lines();
let parse_error = |s: Option<&str>| -> Option<f64> {
match s {
Some(s) => {
match s.split(' ').last().map(str::parse::<f64>) {
Some(Ok(f)) => Some(f),
_ => None,
}
}
_ => None,
}
};
let (errin, errout, cmdout) = match (parse_error(output.next()), parse_error(output.next()), output.next()) {
(Some(errin), Some(errout), Some(cmdout)) => {
(errin, errout, cmdout)
}
_ => {
return Err("Could not parse herbie-inout output".into())
}
};
if errin <= errout {
return Ok(());
}
let mut parser = lisp::Parser::new();
let cmdout = match parser.parse(cmdout) {
Ok(cmdout) => cmdout,
_ => return Err("Could not understand herbie-inout cmdout".into()),
};
report(cx, expr, &cmdout, &bindings);
save(conf, &cmdin, &cmdout, "", errin, errout)
.map_err(|err| format!("Could not save database, got SQL error {}", err).into())
}
fn report(cx: &LateContext, expr: &Expr, cmdout: &LispExpr, bindings: &lisp::MatchBindings) {
cx.struct_span_lint(HERBIE, expr.span, "Numerically unstable expression")
.span_suggestion(expr.span, "Try this", cmdout.to_rust(cx, &bindings))
.emit();
}
fn save(
conf: &conf::Conf,
cmdin: &str, cmdout: &LispExpr,
seed: &str,
errin: f64, errout: f64
) -> Result<(), sql::Error> {
let connection = try!(sql::Connection::open_with_flags(
conf.db_path.as_ref(), sql::SQLITE_OPEN_READ_WRITE
));
try!(connection.execute("INSERT INTO HerbieResults (cmdin, cmdout, opts, errin, errout)
VALUES ($1, $2, $3, $4, $5)",
&[&cmdin, &cmdout.to_lisp("herbie"), &seed, &errin, &errout]));
Ok(())
}
| {
match *self {
InitError::Conf { ref error } => write!(f, "Configuration error: {}", error),
InitError::SQL { ref error } => write!(f, "Got SQL error: {}", error),
}
} | identifier_body |
coerce-bare-fn-to-closure-and-proc.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
fn foo() {}
fn main() | {
let f = foo;
let f_closure: || = f;
//~^ ERROR: cannot coerce non-statically resolved bare fn to closure
//~^^ HELP: consider embedding the function in a closure
let f_proc: proc() = f;
//~^ ERROR: cannot coerce non-statically resolved bare fn to closure
//~^^ HELP: consider embedding the function in a closure
} | identifier_body |
|
coerce-bare-fn-to-closure-and-proc.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
fn foo() {}
| //~^ ERROR: cannot coerce non-statically resolved bare fn to closure
//~^^ HELP: consider embedding the function in a closure
let f_proc: proc() = f;
//~^ ERROR: cannot coerce non-statically resolved bare fn to closure
//~^^ HELP: consider embedding the function in a closure
} | fn main() {
let f = foo;
let f_closure: || = f; | random_line_split |
coerce-bare-fn-to-closure-and-proc.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
fn | () {}
fn main() {
let f = foo;
let f_closure: || = f;
//~^ ERROR: cannot coerce non-statically resolved bare fn to closure
//~^^ HELP: consider embedding the function in a closure
let f_proc: proc() = f;
//~^ ERROR: cannot coerce non-statically resolved bare fn to closure
//~^^ HELP: consider embedding the function in a closure
}
| foo | identifier_name |
file.rs | use std::io::{Read, Write};
use std::io::BufReader;
use std::io::BufRead;
use std::path::{PathBuf, Path};
use std::fs::{File, DirBuilder, OpenOptions};
pub fn read_file<P: AsRef<Path>>(file: P) -> Result<String, String> {
let mut file = File::open(file)
.map_err(error_err!())
.map_err(|_| "Can't read the file".to_string())?;
let content = {
let mut s = String::new();
file.read_to_string(&mut s)
.map_err(error_err!())
.map_err(|err| format!("Can't read the file: {}", err))?;
s
};
Ok(content)
}
pub fn read_lines_from_file<P: AsRef<Path>>(file: P) -> Result<impl Iterator<Item=Result<String, ::std::io::Error>>, String> {
let file = File::open(file)
.map_err(error_err!())
.map_err(|_| "Can't read the file".to_string())?;
let lines = BufReader::new(file).lines();
Ok(lines)
}
pub fn write_file<P: AsRef<Path>>(file: P, content: &str) -> Result<(), String> where P: std::convert::AsRef<std::ffi::OsStr> {
let path = PathBuf::from(&file);
if let Some(parent_path) = path.parent() |
let mut file = OpenOptions::new()
.write(true)
.create(true)
.open(path)
.map_err(error_err!())
.map_err(|err| format!("Can't open the file: {}", err))?;
file
.write_all(content.as_bytes())
.map_err(error_err!())
.map_err(|err| format!("Can't write content: \"{}\" to the file: {}", content, err))
} | {
DirBuilder::new()
.recursive(true)
.create(parent_path)
.map_err(error_err!())
.map_err(|err| format!("Can't create the file: {}", err))?;
} | conditional_block |
file.rs | use std::io::{Read, Write};
use std::io::BufReader;
use std::io::BufRead;
use std::path::{PathBuf, Path};
use std::fs::{File, DirBuilder, OpenOptions};
pub fn read_file<P: AsRef<Path>>(file: P) -> Result<String, String> {
let mut file = File::open(file)
.map_err(error_err!())
.map_err(|_| "Can't read the file".to_string())?;
let content = { | .map_err(|err| format!("Can't read the file: {}", err))?;
s
};
Ok(content)
}
pub fn read_lines_from_file<P: AsRef<Path>>(file: P) -> Result<impl Iterator<Item=Result<String, ::std::io::Error>>, String> {
let file = File::open(file)
.map_err(error_err!())
.map_err(|_| "Can't read the file".to_string())?;
let lines = BufReader::new(file).lines();
Ok(lines)
}
pub fn write_file<P: AsRef<Path>>(file: P, content: &str) -> Result<(), String> where P: std::convert::AsRef<std::ffi::OsStr> {
let path = PathBuf::from(&file);
if let Some(parent_path) = path.parent() {
DirBuilder::new()
.recursive(true)
.create(parent_path)
.map_err(error_err!())
.map_err(|err| format!("Can't create the file: {}", err))?;
}
let mut file = OpenOptions::new()
.write(true)
.create(true)
.open(path)
.map_err(error_err!())
.map_err(|err| format!("Can't open the file: {}", err))?;
file
.write_all(content.as_bytes())
.map_err(error_err!())
.map_err(|err| format!("Can't write content: \"{}\" to the file: {}", content, err))
} | let mut s = String::new();
file.read_to_string(&mut s)
.map_err(error_err!()) | random_line_split |
file.rs | use std::io::{Read, Write};
use std::io::BufReader;
use std::io::BufRead;
use std::path::{PathBuf, Path};
use std::fs::{File, DirBuilder, OpenOptions};
pub fn read_file<P: AsRef<Path>>(file: P) -> Result<String, String> {
let mut file = File::open(file)
.map_err(error_err!())
.map_err(|_| "Can't read the file".to_string())?;
let content = {
let mut s = String::new();
file.read_to_string(&mut s)
.map_err(error_err!())
.map_err(|err| format!("Can't read the file: {}", err))?;
s
};
Ok(content)
}
pub fn read_lines_from_file<P: AsRef<Path>>(file: P) -> Result<impl Iterator<Item=Result<String, ::std::io::Error>>, String> {
let file = File::open(file)
.map_err(error_err!())
.map_err(|_| "Can't read the file".to_string())?;
let lines = BufReader::new(file).lines();
Ok(lines)
}
pub fn | <P: AsRef<Path>>(file: P, content: &str) -> Result<(), String> where P: std::convert::AsRef<std::ffi::OsStr> {
let path = PathBuf::from(&file);
if let Some(parent_path) = path.parent() {
DirBuilder::new()
.recursive(true)
.create(parent_path)
.map_err(error_err!())
.map_err(|err| format!("Can't create the file: {}", err))?;
}
let mut file = OpenOptions::new()
.write(true)
.create(true)
.open(path)
.map_err(error_err!())
.map_err(|err| format!("Can't open the file: {}", err))?;
file
.write_all(content.as_bytes())
.map_err(error_err!())
.map_err(|err| format!("Can't write content: \"{}\" to the file: {}", content, err))
} | write_file | identifier_name |
file.rs | use std::io::{Read, Write};
use std::io::BufReader;
use std::io::BufRead;
use std::path::{PathBuf, Path};
use std::fs::{File, DirBuilder, OpenOptions};
pub fn read_file<P: AsRef<Path>>(file: P) -> Result<String, String> |
pub fn read_lines_from_file<P: AsRef<Path>>(file: P) -> Result<impl Iterator<Item=Result<String, ::std::io::Error>>, String> {
let file = File::open(file)
.map_err(error_err!())
.map_err(|_| "Can't read the file".to_string())?;
let lines = BufReader::new(file).lines();
Ok(lines)
}
pub fn write_file<P: AsRef<Path>>(file: P, content: &str) -> Result<(), String> where P: std::convert::AsRef<std::ffi::OsStr> {
let path = PathBuf::from(&file);
if let Some(parent_path) = path.parent() {
DirBuilder::new()
.recursive(true)
.create(parent_path)
.map_err(error_err!())
.map_err(|err| format!("Can't create the file: {}", err))?;
}
let mut file = OpenOptions::new()
.write(true)
.create(true)
.open(path)
.map_err(error_err!())
.map_err(|err| format!("Can't open the file: {}", err))?;
file
.write_all(content.as_bytes())
.map_err(error_err!())
.map_err(|err| format!("Can't write content: \"{}\" to the file: {}", content, err))
} | {
let mut file = File::open(file)
.map_err(error_err!())
.map_err(|_| "Can't read the file".to_string())?;
let content = {
let mut s = String::new();
file.read_to_string(&mut s)
.map_err(error_err!())
.map_err(|err| format!("Can't read the file: {}", err))?;
s
};
Ok(content)
} | identifier_body |
iostat.rs | use alloc::string::String;
use alloc::vec::Vec;
use core::fmt::Write;
use core::str;
use crate::context;
use crate::scheme;
use crate::syscall::error::Result;
pub fn | () -> Result<Vec<u8>> {
let mut string = String::new();
{
let mut rows = Vec::new();
{
let contexts = context::contexts();
for (id, context_lock) in contexts.iter() {
let context = context_lock.read();
rows.push((*id, context.name.read().clone(), context.files.read().clone()));
}
}
for row in rows.iter() {
let id: usize = row.0.into();
let name = &row.1;
let _ = writeln!(string, "{}: {}", id, name);
for (fd, f) in row.2.iter().enumerate() {
let file = match *f {
None => continue,
Some(ref file) => file.clone()
};
let description = file.description.read();
let scheme = {
let schemes = scheme::schemes();
match schemes.get(description.scheme) {
Some(scheme) => scheme.clone(),
None => {
let _ = writeln!(string, " {:>4}: {:>8} {:>8} {:>08X}: no scheme", fd, description.scheme.into(), description.number, description.flags);
continue;
}
}
};
let mut fpath = [0; 4096];
match scheme.fpath(description.number, &mut fpath) {
Ok(path_len) => {
let fname = str::from_utf8(&fpath[..path_len]).unwrap_or("?");
let _ = writeln!(string, "{:>6}: {:>8} {:>8} {:>08X}: {}", fd, description.scheme.into(), description.number, description.flags, fname);
},
Err(err) => {
let _ = writeln!(string, "{:>6}: {:>8} {:>8} {:>08X}: {}", fd, description.scheme.into(), description.number, description.flags, err);
}
}
}
}
}
Ok(string.into_bytes())
}
| resource | identifier_name |
iostat.rs | use alloc::string::String;
use alloc::vec::Vec;
use core::fmt::Write;
use core::str;
use crate::context; | let mut string = String::new();
{
let mut rows = Vec::new();
{
let contexts = context::contexts();
for (id, context_lock) in contexts.iter() {
let context = context_lock.read();
rows.push((*id, context.name.read().clone(), context.files.read().clone()));
}
}
for row in rows.iter() {
let id: usize = row.0.into();
let name = &row.1;
let _ = writeln!(string, "{}: {}", id, name);
for (fd, f) in row.2.iter().enumerate() {
let file = match *f {
None => continue,
Some(ref file) => file.clone()
};
let description = file.description.read();
let scheme = {
let schemes = scheme::schemes();
match schemes.get(description.scheme) {
Some(scheme) => scheme.clone(),
None => {
let _ = writeln!(string, " {:>4}: {:>8} {:>8} {:>08X}: no scheme", fd, description.scheme.into(), description.number, description.flags);
continue;
}
}
};
let mut fpath = [0; 4096];
match scheme.fpath(description.number, &mut fpath) {
Ok(path_len) => {
let fname = str::from_utf8(&fpath[..path_len]).unwrap_or("?");
let _ = writeln!(string, "{:>6}: {:>8} {:>8} {:>08X}: {}", fd, description.scheme.into(), description.number, description.flags, fname);
},
Err(err) => {
let _ = writeln!(string, "{:>6}: {:>8} {:>8} {:>08X}: {}", fd, description.scheme.into(), description.number, description.flags, err);
}
}
}
}
}
Ok(string.into_bytes())
} | use crate::scheme;
use crate::syscall::error::Result;
pub fn resource() -> Result<Vec<u8>> { | random_line_split |
iostat.rs | use alloc::string::String;
use alloc::vec::Vec;
use core::fmt::Write;
use core::str;
use crate::context;
use crate::scheme;
use crate::syscall::error::Result;
pub fn resource() -> Result<Vec<u8>> | None => continue,
Some(ref file) => file.clone()
};
let description = file.description.read();
let scheme = {
let schemes = scheme::schemes();
match schemes.get(description.scheme) {
Some(scheme) => scheme.clone(),
None => {
let _ = writeln!(string, " {:>4}: {:>8} {:>8} {:>08X}: no scheme", fd, description.scheme.into(), description.number, description.flags);
continue;
}
}
};
let mut fpath = [0; 4096];
match scheme.fpath(description.number, &mut fpath) {
Ok(path_len) => {
let fname = str::from_utf8(&fpath[..path_len]).unwrap_or("?");
let _ = writeln!(string, "{:>6}: {:>8} {:>8} {:>08X}: {}", fd, description.scheme.into(), description.number, description.flags, fname);
},
Err(err) => {
let _ = writeln!(string, "{:>6}: {:>8} {:>8} {:>08X}: {}", fd, description.scheme.into(), description.number, description.flags, err);
}
}
}
}
}
Ok(string.into_bytes())
}
| {
let mut string = String::new();
{
let mut rows = Vec::new();
{
let contexts = context::contexts();
for (id, context_lock) in contexts.iter() {
let context = context_lock.read();
rows.push((*id, context.name.read().clone(), context.files.read().clone()));
}
}
for row in rows.iter() {
let id: usize = row.0.into();
let name = &row.1;
let _ = writeln!(string, "{}: {}", id, name);
for (fd, f) in row.2.iter().enumerate() {
let file = match *f { | identifier_body |
resource_files.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use std::fs::File;
use std::io::{self, Read};
use std::path::PathBuf;
use std::sync::{Arc, Mutex};
lazy_static! {
static ref CMD_RESOURCE_DIR: Arc<Mutex<Option<String>>> = {
Arc::new(Mutex::new(None))
};
}
pub fn set_resources_path(path: Option<String>) {
let mut dir = CMD_RESOURCE_DIR.lock().unwrap();
*dir = path;
}
#[cfg(target_os = "android")]
pub fn resources_dir_path() -> PathBuf {
PathBuf::from("/sdcard/servo/")
}
#[cfg(not(target_os = "android"))]
pub fn resources_dir_path() -> PathBuf {
use std::env;
match *CMD_RESOURCE_DIR.lock().unwrap() {
Some(ref path) => PathBuf::from(path),
None => {
// FIXME: Find a way to not rely on the executable being
// under `<servo source>[/$target_triple]/target/debug`
// or `<servo source>[/$target_triple]/target/release`.
let mut path = env::current_exe().expect("can't get exe path");
// Follow symlink
path = path.canonicalize().expect("path does not exist");
path.pop();
path.push("resources");
if!path.is_dir() |
path
}
}
}
pub fn read_resource_file(relative_path_components: &[&str]) -> io::Result<Vec<u8>> {
let mut path = resources_dir_path();
for component in relative_path_components {
path.push(component);
}
let mut file = try!(File::open(&path));
let mut data = Vec::new();
try!(file.read_to_end(&mut data));
Ok(data)
}
| { // resources dir not in same dir as exe?
// exe is probably in target/{debug,release} so we need to go back to topdir
path.pop();
path.pop();
path.pop();
path.push("resources");
if !path.is_dir() {
// exe is probably in target/$target_triple/{debug,release} so go back one more
path.pop();
path.pop();
path.push("resources");
}
} | conditional_block |
resource_files.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use std::fs::File;
use std::io::{self, Read};
use std::path::PathBuf;
use std::sync::{Arc, Mutex};
lazy_static! {
static ref CMD_RESOURCE_DIR: Arc<Mutex<Option<String>>> = {
Arc::new(Mutex::new(None))
};
}
pub fn set_resources_path(path: Option<String>) {
let mut dir = CMD_RESOURCE_DIR.lock().unwrap();
*dir = path; | }
#[cfg(target_os = "android")]
pub fn resources_dir_path() -> PathBuf {
PathBuf::from("/sdcard/servo/")
}
#[cfg(not(target_os = "android"))]
pub fn resources_dir_path() -> PathBuf {
use std::env;
match *CMD_RESOURCE_DIR.lock().unwrap() {
Some(ref path) => PathBuf::from(path),
None => {
// FIXME: Find a way to not rely on the executable being
// under `<servo source>[/$target_triple]/target/debug`
// or `<servo source>[/$target_triple]/target/release`.
let mut path = env::current_exe().expect("can't get exe path");
// Follow symlink
path = path.canonicalize().expect("path does not exist");
path.pop();
path.push("resources");
if!path.is_dir() { // resources dir not in same dir as exe?
// exe is probably in target/{debug,release} so we need to go back to topdir
path.pop();
path.pop();
path.pop();
path.push("resources");
if!path.is_dir() {
// exe is probably in target/$target_triple/{debug,release} so go back one more
path.pop();
path.pop();
path.push("resources");
}
}
path
}
}
}
pub fn read_resource_file(relative_path_components: &[&str]) -> io::Result<Vec<u8>> {
let mut path = resources_dir_path();
for component in relative_path_components {
path.push(component);
}
let mut file = try!(File::open(&path));
let mut data = Vec::new();
try!(file.read_to_end(&mut data));
Ok(data)
} | random_line_split |
|
resource_files.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use std::fs::File;
use std::io::{self, Read};
use std::path::PathBuf;
use std::sync::{Arc, Mutex};
lazy_static! {
static ref CMD_RESOURCE_DIR: Arc<Mutex<Option<String>>> = {
Arc::new(Mutex::new(None))
};
}
pub fn set_resources_path(path: Option<String>) {
let mut dir = CMD_RESOURCE_DIR.lock().unwrap();
*dir = path;
}
#[cfg(target_os = "android")]
pub fn resources_dir_path() -> PathBuf {
PathBuf::from("/sdcard/servo/")
}
#[cfg(not(target_os = "android"))]
pub fn | () -> PathBuf {
use std::env;
match *CMD_RESOURCE_DIR.lock().unwrap() {
Some(ref path) => PathBuf::from(path),
None => {
// FIXME: Find a way to not rely on the executable being
// under `<servo source>[/$target_triple]/target/debug`
// or `<servo source>[/$target_triple]/target/release`.
let mut path = env::current_exe().expect("can't get exe path");
// Follow symlink
path = path.canonicalize().expect("path does not exist");
path.pop();
path.push("resources");
if!path.is_dir() { // resources dir not in same dir as exe?
// exe is probably in target/{debug,release} so we need to go back to topdir
path.pop();
path.pop();
path.pop();
path.push("resources");
if!path.is_dir() {
// exe is probably in target/$target_triple/{debug,release} so go back one more
path.pop();
path.pop();
path.push("resources");
}
}
path
}
}
}
pub fn read_resource_file(relative_path_components: &[&str]) -> io::Result<Vec<u8>> {
let mut path = resources_dir_path();
for component in relative_path_components {
path.push(component);
}
let mut file = try!(File::open(&path));
let mut data = Vec::new();
try!(file.read_to_end(&mut data));
Ok(data)
}
| resources_dir_path | identifier_name |
resource_files.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use std::fs::File;
use std::io::{self, Read};
use std::path::PathBuf;
use std::sync::{Arc, Mutex};
lazy_static! {
static ref CMD_RESOURCE_DIR: Arc<Mutex<Option<String>>> = {
Arc::new(Mutex::new(None))
};
}
pub fn set_resources_path(path: Option<String>) {
let mut dir = CMD_RESOURCE_DIR.lock().unwrap();
*dir = path;
}
#[cfg(target_os = "android")]
pub fn resources_dir_path() -> PathBuf {
PathBuf::from("/sdcard/servo/")
}
#[cfg(not(target_os = "android"))]
pub fn resources_dir_path() -> PathBuf {
use std::env;
match *CMD_RESOURCE_DIR.lock().unwrap() {
Some(ref path) => PathBuf::from(path),
None => {
// FIXME: Find a way to not rely on the executable being
// under `<servo source>[/$target_triple]/target/debug`
// or `<servo source>[/$target_triple]/target/release`.
let mut path = env::current_exe().expect("can't get exe path");
// Follow symlink
path = path.canonicalize().expect("path does not exist");
path.pop();
path.push("resources");
if!path.is_dir() { // resources dir not in same dir as exe?
// exe is probably in target/{debug,release} so we need to go back to topdir
path.pop();
path.pop();
path.pop();
path.push("resources");
if!path.is_dir() {
// exe is probably in target/$target_triple/{debug,release} so go back one more
path.pop();
path.pop();
path.push("resources");
}
}
path
}
}
}
pub fn read_resource_file(relative_path_components: &[&str]) -> io::Result<Vec<u8>> | {
let mut path = resources_dir_path();
for component in relative_path_components {
path.push(component);
}
let mut file = try!(File::open(&path));
let mut data = Vec::new();
try!(file.read_to_end(&mut data));
Ok(data)
} | identifier_body |
|
where.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#![crate_name = "foo"]
pub trait MyTrait { fn dummy(&self) { } }
// @has foo/struct.Alpha.html '//pre' "pub struct Alpha<A> where A: MyTrait"
pub struct Alpha<A>(A) where A: MyTrait;
// @has foo/trait.Bravo.html '//pre' "pub trait Bravo<B> where B: MyTrait"
pub trait Bravo<B> where B: MyTrait { fn get(&self, B: B); }
// @has foo/fn.charlie.html '//pre' "pub fn charlie<C>() where C: MyTrait"
pub fn charlie<C>() where C: MyTrait {}
pub struct Delta<D>(D);
// @has foo/struct.Delta.html '//*[@class="impl"]//code' \
// "impl<D> Delta<D> where D: MyTrait"
impl<D> Delta<D> where D: MyTrait {
pub fn delta() |
}
pub struct Echo<E>(E);
// @has foo/struct.Echo.html '//*[@class="impl"]//code' \
// "impl<E> MyTrait for Echo<E> where E: MyTrait"
// @has foo/trait.MyTrait.html '//*[@id="implementors-list"]//code' \
// "impl<E> MyTrait for Echo<E> where E: MyTrait"
impl<E> MyTrait for Echo<E> where E: MyTrait {}
pub enum Foxtrot<F> { Foxtrot1(F) }
// @has foo/enum.Foxtrot.html '//*[@class="impl"]//code' \
// "impl<F> MyTrait for Foxtrot<F> where F: MyTrait"
// @has foo/trait.MyTrait.html '//*[@id="implementors-list"]//code' \
// "impl<F> MyTrait for Foxtrot<F> where F: MyTrait"
impl<F> MyTrait for Foxtrot<F> where F: MyTrait {}
// @has foo/type.Golf.html '//pre[@class="rust typedef"]' \
// "type Golf<T> where T: Clone = (T, T)"
pub type Golf<T> where T: Clone = (T, T);
| {} | identifier_body |
where.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
| pub struct Alpha<A>(A) where A: MyTrait;
// @has foo/trait.Bravo.html '//pre' "pub trait Bravo<B> where B: MyTrait"
pub trait Bravo<B> where B: MyTrait { fn get(&self, B: B); }
// @has foo/fn.charlie.html '//pre' "pub fn charlie<C>() where C: MyTrait"
pub fn charlie<C>() where C: MyTrait {}
pub struct Delta<D>(D);
// @has foo/struct.Delta.html '//*[@class="impl"]//code' \
// "impl<D> Delta<D> where D: MyTrait"
impl<D> Delta<D> where D: MyTrait {
pub fn delta() {}
}
pub struct Echo<E>(E);
// @has foo/struct.Echo.html '//*[@class="impl"]//code' \
// "impl<E> MyTrait for Echo<E> where E: MyTrait"
// @has foo/trait.MyTrait.html '//*[@id="implementors-list"]//code' \
// "impl<E> MyTrait for Echo<E> where E: MyTrait"
impl<E> MyTrait for Echo<E> where E: MyTrait {}
pub enum Foxtrot<F> { Foxtrot1(F) }
// @has foo/enum.Foxtrot.html '//*[@class="impl"]//code' \
// "impl<F> MyTrait for Foxtrot<F> where F: MyTrait"
// @has foo/trait.MyTrait.html '//*[@id="implementors-list"]//code' \
// "impl<F> MyTrait for Foxtrot<F> where F: MyTrait"
impl<F> MyTrait for Foxtrot<F> where F: MyTrait {}
// @has foo/type.Golf.html '//pre[@class="rust typedef"]' \
// "type Golf<T> where T: Clone = (T, T)"
pub type Golf<T> where T: Clone = (T, T); | #![crate_name = "foo"]
pub trait MyTrait { fn dummy(&self) { } }
// @has foo/struct.Alpha.html '//pre' "pub struct Alpha<A> where A: MyTrait" | random_line_split |
where.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#![crate_name = "foo"]
pub trait MyTrait { fn dummy(&self) { } }
// @has foo/struct.Alpha.html '//pre' "pub struct Alpha<A> where A: MyTrait"
pub struct | <A>(A) where A: MyTrait;
// @has foo/trait.Bravo.html '//pre' "pub trait Bravo<B> where B: MyTrait"
pub trait Bravo<B> where B: MyTrait { fn get(&self, B: B); }
// @has foo/fn.charlie.html '//pre' "pub fn charlie<C>() where C: MyTrait"
pub fn charlie<C>() where C: MyTrait {}
pub struct Delta<D>(D);
// @has foo/struct.Delta.html '//*[@class="impl"]//code' \
// "impl<D> Delta<D> where D: MyTrait"
impl<D> Delta<D> where D: MyTrait {
pub fn delta() {}
}
pub struct Echo<E>(E);
// @has foo/struct.Echo.html '//*[@class="impl"]//code' \
// "impl<E> MyTrait for Echo<E> where E: MyTrait"
// @has foo/trait.MyTrait.html '//*[@id="implementors-list"]//code' \
// "impl<E> MyTrait for Echo<E> where E: MyTrait"
impl<E> MyTrait for Echo<E> where E: MyTrait {}
pub enum Foxtrot<F> { Foxtrot1(F) }
// @has foo/enum.Foxtrot.html '//*[@class="impl"]//code' \
// "impl<F> MyTrait for Foxtrot<F> where F: MyTrait"
// @has foo/trait.MyTrait.html '//*[@id="implementors-list"]//code' \
// "impl<F> MyTrait for Foxtrot<F> where F: MyTrait"
impl<F> MyTrait for Foxtrot<F> where F: MyTrait {}
// @has foo/type.Golf.html '//pre[@class="rust typedef"]' \
// "type Golf<T> where T: Clone = (T, T)"
pub type Golf<T> where T: Clone = (T, T);
| Alpha | identifier_name |
authorized.rs | use session::game::{Session, GameState};
use session::game::chunk::{self, Ref};
use protocol::messages::authorized::*;
use std::io::Result;
use server::SERVER;
#[register_handlers]
impl Session {
pub fn handle_admin_quiet_command_message<'a>(&mut self, chunk: Ref<'a>,
msg: AdminQuietCommandMessage) -> Result<()> | chunk::teleport(chunk, ch, map_id, cell_id);
}
Ok(())
}
}
| {
let ch = match self.state {
GameState::InContext(ref mut ch) => ch,
_ => return Ok(()),
};
if ch.movements.is_some() {
return Ok(());
}
let map_id: i32 = match msg.base.content.split(" ").last().map(|id| id.parse()) {
Some(Ok(map_id)) => map_id,
_ => return Ok(()),
};
let cell_id = SERVER.with(|s| {
s.maps.get(&map_id).map(|m| m.get_free_cell()).unwrap_or(None)
});
if let Some(cell_id) = cell_id { | identifier_body |
authorized.rs | use session::game::{Session, GameState};
use session::game::chunk::{self, Ref};
use protocol::messages::authorized::*;
use std::io::Result;
use server::SERVER;
#[register_handlers]
impl Session {
pub fn | <'a>(&mut self, chunk: Ref<'a>,
msg: AdminQuietCommandMessage) -> Result<()> {
let ch = match self.state {
GameState::InContext(ref mut ch) => ch,
_ => return Ok(()),
};
if ch.movements.is_some() {
return Ok(());
}
let map_id: i32 = match msg.base.content.split(" ").last().map(|id| id.parse()) {
Some(Ok(map_id)) => map_id,
_ => return Ok(()),
};
let cell_id = SERVER.with(|s| {
s.maps.get(&map_id).map(|m| m.get_free_cell()).unwrap_or(None)
});
if let Some(cell_id) = cell_id {
chunk::teleport(chunk, ch, map_id, cell_id);
}
Ok(())
}
}
| handle_admin_quiet_command_message | identifier_name |
authorized.rs | use session::game::{Session, GameState};
use session::game::chunk::{self, Ref};
use protocol::messages::authorized::*;
use std::io::Result;
use server::SERVER;
#[register_handlers]
impl Session {
pub fn handle_admin_quiet_command_message<'a>(&mut self, chunk: Ref<'a>,
msg: AdminQuietCommandMessage) -> Result<()> {
let ch = match self.state {
GameState::InContext(ref mut ch) => ch,
_ => return Ok(()),
};
if ch.movements.is_some() {
return Ok(());
}
let map_id: i32 = match msg.base.content.split(" ").last().map(|id| id.parse()) {
Some(Ok(map_id)) => map_id,
_ => return Ok(()),
};
let cell_id = SERVER.with(|s| {
s.maps.get(&map_id).map(|m| m.get_free_cell()).unwrap_or(None)
});
if let Some(cell_id) = cell_id |
Ok(())
}
}
| {
chunk::teleport(chunk, ch, map_id, cell_id);
} | conditional_block |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.