//! One of the immediate problems with linking the same library together twice
//! in the same program is dealing with duplicate symbols. The primary way to
//! deal with this in rustc is to add hashes to the end of each symbol.
//!
//! In order to force hashes to change between versions of a library, if
//! desired, the compiler exposes an option `-C metadata=foo`, which is used to
//! initially seed each symbol hash. The string `foo` is prepended to each
//! string-to-hash to ensure that symbols change over time.
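As a rough standalone illustration of this seeding (not the compiler's actual mangling scheme; `mangled_suffix` is a hypothetical helper, written in modern Rust), the metadata string is fed into the hasher before the symbol name, so the same symbol gets a different suffix under a different `-C metadata` seed:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch: hash the metadata seed first, then the symbol, so two builds
// of the same library with different seeds export distinct symbols.
fn mangled_suffix(metadata: &str, symbol: &str) -> String {
    let mut h = DefaultHasher::new();
    metadata.hash(&mut h);
    symbol.hash(&mut h);
    format!("{:016x}", h.finish())
}
```

The suffix is deterministic for a given (seed, symbol) pair, which is what lets separately compiled crates agree on each other's symbol names.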
//!
//! ## Loading transitive dependencies
//!
//! Dealing with same-named-but-distinct crates is not just a local problem, but
//! one that also needs to be dealt with for transitive dependencies. Note that
//! in the section above `--extern` flags only apply to the *local* set of
//! dependencies, not the upstream transitive dependencies. Consider this
//! dependency graph:
//!
//! ```notrust
//!  A.1   A.2
//!   |     |
//!   |     |
//!   B     C
//!    \   /
//!     \ /
//!      D
//! ```
//!
//! In this scenario, when we compile `D`, we need to be able to distinctly
//! resolve `A.1` and `A.2`, but an `--extern` flag cannot apply to these
//! transitive dependencies.
//!
//! Note that the key idea here is that `B` and `C` are both *already compiled*.
//! That is, they have already resolved their dependencies. Due to unrelated
//! technical reasons, when a library is compiled, it is only compatible with
//! the *exact same* version of the upstream libraries it was compiled against.
//! We use the "Strict Version Hash" to identify the exact copy of an upstream
//! library.
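A minimal sketch of that idea (hypothetical `svh_of` in modern Rust; the real SVH folds in far more inputs than shown here): the hash identifies one exact build, including the SVHs of its own dependencies, and loading compares it with strict equality.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch of a strict-version-hash-like identifier: any change to the
// crate, its version, or its dependency SVHs yields a different value.
#[derive(PartialEq, Eq, Debug)]
struct Svh(String);

fn svh_of(crate_name: &str, version: &str, dep_svhs: &[&str]) -> Svh {
    let mut h = DefaultHasher::new();
    crate_name.hash(&mut h);
    version.hash(&mut h);
    for d in dep_svhs {
        d.hash(&mut h);
    }
    Svh(format!("{:016x}", h.finish()))
}
```

Strict equality is the point: a crate compiled against one SVH of `A` must never be linked against any other copy of `A`, however similar.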
//!
//! With this knowledge, we know that `B` and `C` will depend on `A` with
//! different SVH values, so we crawl the normal `-L` paths looking for
//! `liba*.rlib` and filter based on the contained SVH.
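That filename crawl can be sketched as follows (hypothetical helper in modern Rust, mirroring the prefix/suffix matching that `find_library_crate` performs below): accept `lib<name>*.rlib` and return the embedded middle part, which is then checked against the wanted SVH.

```rust
// Sketch of the `-L` crawl's filename filter: for a candidate file,
// strip the `lib<name>` prefix and `.rlib` suffix and hand back the
// middle (the hash-bearing part) for SVH filtering; None means the
// file cannot be a copy of this crate.
fn rlib_hash_part<'a>(file: &'a str, crate_name: &str) -> Option<&'a str> {
    let prefix = format!("lib{}", crate_name);
    if file.starts_with(&prefix) && file.ends_with(".rlib") {
        Some(&file[prefix.len()..file.len() - ".rlib".len()])
    } else {
        None
    }
}
```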
//!
//! In the end, this ends up not needing `--extern` to specify upstream
//! transitive dependencies.
//!
//! # Wrapping up
//!
//! That's the general overview of loading crates in the compiler, but it's by
//! no means all of the necessary details. Take a look at the rest of
//! metadata::loader or metadata::creader for all the juicy details!
use back::archive::{METADATA_FILENAME};
use back::svh::Svh;
use driver::session::Session;
use llvm;
use llvm::{False, ObjectFile, mk_section_iter};
use llvm::archive_ro::ArchiveRO;
use metadata::cstore::{MetadataBlob, MetadataVec, MetadataArchive};
use metadata::decoder;
use metadata::encoder;
use metadata::filesearch::{FileSearch, FileMatches, FileDoesntMatch};
use syntax::codemap::Span;
use syntax::diagnostic::SpanHandler;
use util::fs;
use std::c_str::ToCStr;
use std::cmp;
use std::collections::hash_map::{Occupied, Vacant};
use std::collections::{HashMap, HashSet};
use std::io::fs::PathExtensions;
use std::io;
use std::ptr;
use std::slice;
use std::string;
use std::time::Duration;
use flate;
pub struct CrateMismatch {
path: Path,
got: String,
}
pub struct Context<'a> {
pub sess: &'a Session,
pub span: Span,
pub ident: &'a str,
pub crate_name: &'a str,
pub hash: Option<&'a Svh>,
pub triple: &'a str,
pub filesearch: FileSearch<'a>,
pub root: &'a Option<CratePaths>,
pub rejected_via_hash: Vec<CrateMismatch>,
pub rejected_via_triple: Vec<CrateMismatch>,
pub should_match_name: bool,
}
pub struct Library {
pub dylib: Option<Path>,
pub rlib: Option<Path>,
pub metadata: MetadataBlob,
}
pub struct ArchiveMetadata {
_archive: ArchiveRO,
// points into self._archive
data: *const [u8],
}
pub struct CratePaths {
pub ident: String,
pub dylib: Option<Path>,
pub rlib: Option<Path>
}
impl CratePaths {
fn paths(&self) -> Vec<Path> {
match (&self.dylib, &self.rlib) {
(&None, &None) => vec!(),
(&Some(ref p), &None) |
(&None, &Some(ref p)) => vec!(p.clone()),
(&Some(ref p1), &Some(ref p2)) => vec!(p1.clone(), p2.clone()),
}
}
}
impl<'a> Context<'a> {
pub fn maybe_load_library_crate(&mut self) -> Option<Library> {
self.find_library_crate()
}
pub fn load_library_crate(&mut self) -> Library {
match self.find_library_crate() {
Some(t) => t,
None => {
self.report_load_errs();
unreachable!()
}
}
}
pub fn report_load_errs(&mut self) {
let message = if self.rejected_via_hash.len() > 0 {
format!("found possibly newer version of crate `{}`",
self.ident)
} else if self.rejected_via_triple.len() > 0 {
format!("found incorrect triple for crate `{}`", self.ident)
} else {
format!("can't find crate for `{}`", self.ident)
};
let message = match self.root {
&None => message,
&Some(ref r) => format!("{} which `{}` depends on",
message, r.ident)
};
self.sess.span_err(self.span, message.as_slice());
let mismatches = self.rejected_via_triple.iter();
if self.rejected_via_triple.len() > 0 {
self.sess.span_note(self.span,
format!("expected triple of {}",
self.triple).as_slice());
for (i, &CrateMismatch{ ref path, ref got }) in mismatches.enumerate() {
self.sess.fileline_note(self.span,
format!("crate `{}` path {}{}, triple {}: {}",
self.ident, "#", i+1, got, path.display()).as_slice());
}
}
if self.rejected_via_hash.len() > 0 {
self.sess.span_note(self.span, "perhaps this crate needs \
to be recompiled?");
let mismatches = self.rejected_via_hash.iter();
for (i, &CrateMismatch{ ref path, .. }) in mismatches.enumerate() {
self.sess.fileline_note(self.span,
format!("crate `{}` path {}{}: {}",
self.ident, "#", i+1, path.display()).as_slice());
}
match self.root {
&None => {}
&Some(ref r) => {
for (i, path) in r.paths().iter().enumerate() {
self.sess.fileline_note(self.span,
format!("crate `{}` path #{}: {}",
r.ident, i+1, path.display()).as_slice());
}
}
}
}
self.sess.abort_if_errors();
}
fn find_library_crate(&mut self) -> Option<Library> {
// If an SVH is specified, then this is a transitive dependency that
// must be loaded via -L plus some filtering.
if self.hash.is_none() {
self.should_match_name = false;
match self.find_commandline_library() {
Some(l) => return Some(l),
None => {}
}
self.should_match_name = true;
}
let dypair = self.dylibname();
// want: crate_name.dir_part() + prefix + crate_name.file_part + "-"
let dylib_prefix = format!("{}{}", dypair.ref0(), self.crate_name);
let rlib_prefix = format!("lib{}", self.crate_name);
let mut candidates = HashMap::new();
// First, find all possible candidate rlibs and dylibs purely based on
// the name of the files themselves. We're trying to match against an
// exact crate name and possibly an exact hash.
//
// During this step, we can filter all found libraries based on the
// name and id found in the crate id (we ignore the path portion for
// filename matching), as well as the exact hash (if specified). If we
// end up having many candidates, we must look at the metadata to
// perform exact matches against hashes/crate ids. Note that opening up
// the metadata is where we do an exact match against the full contents
// of the crate id (path/name/id).
//
// The goal of this step is to look at as little metadata as possible.
self.filesearch.search(|path| {
let file = match path.filename_str() {
None => return FileDoesntMatch,
Some(file) => file,
};
let (hash, rlib) = if file.starts_with(rlib_prefix.as_slice()) &&
file.ends_with(".rlib") {
(file.slice(rlib_prefix.len(), file.len() - ".rlib".len()),
true)
} else if file.starts_with(dylib_prefix.as_slice()) &&
file.ends_with(dypair.ref1().as_slice()) {
(file.slice(dylib_prefix.len(), file.len() - dypair.ref1().len()),
false)
} else {
return FileDoesntMatch
};
info!("lib candidate: {}", path.display());
let slot = match candidates.entry(hash.to_string()) {
Occupied(entry) => entry.into_mut(),
Vacant(entry) => entry.set((HashSet::new(), HashSet::new())),
};
let (ref mut rlibs, ref mut dylibs) = *slot;
if rlib {
rlibs.insert(fs::realpath(path).unwrap());
} else {
dylibs.insert(fs::realpath(path).unwrap());
}
FileMatches
});
// We have now collected all known libraries into a set of candidates
// keyed off the filename hash listed. For each filename, we also have a
// list of rlibs/dylibs that apply. Here, we map each of these lists
// (per hash), to a Library candidate for returning.
//
// A Library candidate is created if the metadata for the set of
// libraries corresponds to the crate id and hash criteria that this
// search is being performed for.
let mut libraries = Vec::new();
for (_hash, (rlibs, dylibs)) in candidates.into_iter() {
let mut metadata = None;
let rlib = self.extract_one(rlibs, "rlib", &mut metadata);
let dylib = self.extract_one(dylibs, "dylib", &mut metadata);
match metadata {
Some(metadata) => {
libraries.push(Library {
dylib: dylib,
rlib: rlib,
metadata: metadata,
})
}
None => {}
}
}
// Having now translated all relevant found hashes into libraries, see
// what we've got and figure out if we found multiple candidates for
// libraries or not.
match libraries.len() {
0 => None,
1 => Some(libraries.into_iter().next().unwrap()),
_ => {
self.sess.span_err(self.span,
format!("multiple matching crates for `{}`",
self.crate_name).as_slice());
self.sess.note("candidates:");
for lib in libraries.iter() {
match lib.dylib {
Some(ref p) => {
self.sess.note(format!("path: {}",
p.display()).as_slice());
}
None => {}
}
match lib.rlib {
Some(ref p) => {
self.sess.note(format!("path: {}",
p.display()).as_slice());
}
None => {}
}
let data = lib.metadata.as_slice();
let name = decoder::get_crate_name(data);
note_crate_name(self.sess.diagnostic(), name.as_slice());
}
None
}
}
}
// Attempts to extract *one* library from the set `m`. If the set has no
// elements, `None` is returned. If the set has more than one element, then
// the errors and notes are emitted about the set of libraries.
//
// With only one library in the set, this function will extract it, and then
// read the metadata from it if `*slot` is `None`. If the metadata couldn't
// be read, it is assumed that the file isn't a valid rust library (no
// errors are emitted).
fn extract_one(&mut self, m: HashSet<Path>, flavor: &str,
slot: &mut Option<MetadataBlob>) -> Option<Path> {
let mut ret = None::<Path>;
let mut error = 0u;
if slot.is_some() {
// FIXME(#10786): for an optimization, we only read one of the
// library's metadata sections. In theory we should
// read both, but reading dylib metadata is quite
// slow.
if m.len() == 0 {
return None
} else if m.len() == 1 {
return Some(m.into_iter().next().unwrap())
}
}
for lib in m.into_iter() {
info!("{} reading metadata from: {}", flavor, lib.display());
let metadata = match get_metadata_section(self.sess.target.target.options.is_like_osx,
&lib) {
Ok(blob) => {
if self.crate_matches(blob.as_slice(), &lib) {
blob
} else {
info!("metadata mismatch");
continue
}
}
Err(_) => {
info!("no metadata found");
continue
}
};
if ret.is_some() {
self.sess.span_err(self.span,
format!("multiple {} candidates for `{}` \
found",
flavor,
self.crate_name).as_slice());
self.sess.span_note(self.span,
format!(r"candidate #1: {}",
ret.as_ref().unwrap()
.display()).as_slice());
error = 1;
ret = None;
}
if error > 0 {
error += 1;
self.sess.span_note(self.span,
format!(r"candidate #{}: {}", error,
lib.display()).as_slice());
continue
}
*slot = Some(metadata);
ret = Some(lib);
}
return if error > 0 {None} else {ret}
}
fn crate_matches(&mut self, crate_data: &[u8], libpath: &Path) -> bool {
if self.should_match_name {
match decoder::maybe_get_crate_name(crate_data) {
Some(ref name) if self.crate_name == name.as_slice() => {}
_ => { info!("Rejecting via crate name"); return false }
}
}
let hash = match decoder::maybe_get_crate_hash(crate_data) {
Some(hash) => hash, None => {
info!("Rejecting via lack of crate hash");
return false;
}
};
let triple = match decoder::get_crate_triple(crate_data) {
None => { debug!("triple not present"); return false }
Some(t) => t,
};
if triple.as_slice() != self.triple {
info!("Rejecting via crate triple: expected {} got {}", self.triple, triple);
self.rejected_via_triple.push(CrateMismatch {
path: libpath.clone(),
got: triple.to_string()
});
return false;
}
match self.hash {
None => true,
Some(myhash) => {
if *myhash != hash {
info!("Rejecting via hash: expected {} got {}", *myhash, hash);
self.rejected_via_hash.push(CrateMismatch {
path: libpath.clone(),
got: myhash.as_str().to_string()
});
false
} else {
true
}
}
}
}
// Returns the corresponding (prefix, suffix) that files need to have for
// dynamic libraries
fn dylibname(&self) -> (String, String) {
let t = &self.sess.target.target;
(t.options.dll_prefix.clone(), t.options.dll_suffix.clone())
}
fn find_commandline_library(&mut self) -> Option<Library> {
let locs = match self.sess.opts.externs.find_equiv(self.crate_name) {
Some(s) => s,
None => return None,
};
// First, filter out all libraries that look suspicious. We only accept
// files which actually exist that have the correct naming scheme for
// rlibs/dylibs.
let sess = self.sess;
let dylibname = self.dylibname();
let mut rlibs = HashSet::new();
let mut dylibs = HashSet::new();
{
let mut locs = locs.iter().map(|l| Path::new(l.as_slice())).filter(|loc| {
if !loc.exists() {
sess.err(format!("extern location for {} does not exist: {}",
self.crate_name, loc.display()).as_slice());
return false;
}
let file = match loc.filename_str() {
Some(file) => file,
None => {
sess.err(format!("extern location for {} is not a file: {}",
self.crate_name, loc.display()).as_slice());
return false;
}
};
if file.starts_with("lib") && file.ends_with(".rlib") {
return true
} else {
let (ref prefix, ref suffix) = dylibname;
if file.starts_with(prefix.as_slice()) && file.ends_with(suffix.as_slice()) {
return true
}
}
sess.err(format!("extern location for {} is of an unknown type: {}",
self.crate_name, loc.display()).as_slice());
false
});
// Now that we have an iterator of good candidates, make sure there's at
// most one rlib and at most one dylib.
for loc in locs {
if loc.filename_str().unwrap().ends_with(".rlib") {
rlibs.insert(fs::realpath(&loc).unwrap());
} else {
dylibs.insert(fs::realpath(&loc).unwrap());
}
}
};
// Extract the rlib/dylib pair.
let mut metadata = None;
let rlib = self.extract_one(rlibs, "rlib", &mut metadata);
let dylib = self.extract_one(dylibs, "dylib", &mut metadata);
if rlib.is_none() && dylib.is_none() { return None }
match metadata {
Some(metadata) => Some(Library {
dylib: dylib,
rlib: rlib,
metadata: metadata,
}),
None => None,
}
}
}
pub fn note_crate_name(diag: &SpanHandler, name: &str) {
diag.handler().note(format!("crate name: {}", name).as_slice());
}
impl ArchiveMetadata {
fn new(ar: ArchiveRO) -> Option<ArchiveMetadata> {
let data = match ar.read(METADATA_FILENAME) {
Some(data) => data as *const [u8],
None => {
debug!("didn't find '{}' in the archive", METADATA_FILENAME);
return None;
}
};
Some(ArchiveMetadata {
_archive: ar,
data: data,
})
}
pub fn as_slice<'a>(&'a self) -> &'a [u8] { unsafe { &*self.data } }
}
self.crate_name).as_slice());
self.sess.span_note(self.span,
format!(r"candidate #1: {}",
ret.as_ref().unwrap()
.display()).as_slice());
error = 1;
ret = None;
}
if error > 0 {
error += 1;
self.sess.span_note(self.span,
format!(r"candidate #{}: {}", error,
lib.display()).as_slice());
continue
}
*slot = Some(metadata);
ret = Some(lib);
}
return if error > 0 {None} else {ret}
}
fn crate_matches(&mut self, crate_data: &[u8], libpath: &Path) -> bool {
if self.should_match_name {
match decoder::maybe_get_crate_name(crate_data) {
Some(ref name) if self.crate_name == name.as_slice() => {}
_ => { info!("Rejecting via crate name"); return false }
}
}
let hash = match decoder::maybe_get_crate_hash(crate_data) {
Some(hash) => hash, None => {
info!("Rejecting via lack of crate hash");
return false;
}
};
let triple = match decoder::get_crate_triple(crate_data) {
None => { debug!("triple not present"); return false }
Some(t) => t,
};
if triple.as_slice() != self.triple {
info!("Rejecting via crate triple: expected {} got {}", self.triple, triple);
self.rejected_via_triple.push(CrateMismatch {
path: libpath.clone(),
got: triple.to_string()
});
return false;
}
match self.hash {
None => true,
Some(myhash) => {
if *myhash != hash {
info!("Rejecting via hash: expected {} got {}", *myhash, hash);
self.rejected_via_hash.push(CrateMismatch {
path: libpath.clone(),
got: myhash.as_str().to_string()
});
false
} else {
true
}
}
}
}
// Returns the corresponding (prefix, suffix) that files need to have for
// dynamic libraries
fn dylibname(&self) -> (String, String) {
let t = &self.sess.target.target;
(t.options.dll_prefix.clone(), t.options.dll_suffix.clone())
}
fn find_commandline_library(&mut self) -> Option<Library> {
let locs = match self.sess.opts.externs.find_equiv(self.crate_name) {
Some(s) => s,
None => return None,
};
// First, filter out all libraries that look suspicious. We only accept
// files which actually exist that have the correct naming scheme for
// rlibs/dylibs.
let sess = self.sess;
}
pub struct Library {
pub dylib: Option<Path>,
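The candidate-matching code above peels a library prefix (`lib<name>-`) and suffix (`.rlib`) off each filename and keeps the middle as the hash key. A standalone sketch of that step using current Rust string APIs (`lib_hash_of` is a hypothetical helper, not part of rustc, and the real loader also handles platform dylib names):

```rust
// Sketch of the filename-matching step above: given a crate name, pull the
// trailing hash out of "lib<name>-<hash>.rlib". Returns None for files that
// don't match the rlib naming scheme.
fn lib_hash_of<'a>(file: &'a str, crate_name: &str) -> Option<&'a str> {
    let prefix = format!("lib{}-", crate_name);
    let rest = file.strip_prefix(&prefix)?; // drop "lib<name>-"
    rest.strip_suffix(".rlib")              // drop ".rlib", keep the hash
}

fn main() {
    assert_eq!(lib_hash_of("libstd-1a2b3c.rlib", "std"), Some("1a2b3c"));
    assert_eq!(lib_hash_of("libstd.so", "std"), None);
    println!("ok");
}
```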
lib.rs
use std::env;
use std::error::Error;
use std::fs;
pub struct Config {
pub query: String,
pub filename: String,
pub case_sensitive: bool,
}
// ANCHOR: here
impl Config {
pub fn new(mut args: env::Args) -> Result<Config, &'static str> {
// --snip--
// ANCHOR_END: here
if args.len() < 3 {
return Err("not enough arguments");
}
let query = args[1].clone();
let filename = args[2].clone();
let case_sensitive = env::var("CASE_INSENSITIVE").is_err();
Ok(Config {
query,
filename,
case_sensitive,
})
}
}
pub fn run(config: Config) -> Result<(), Box<dyn Error>> {
let contents = fs::read_to_string(config.filename)?;
let results = if config.case_sensitive {
search(&config.query, &contents)
} else {
search_case_insensitive(&config.query, &contents)
};
for line in results {
println!("{}", line);
}
Ok(())
}
pub fn search<'a>(query: &str, contents: &'a str) -> Vec<&'a str> {
let mut results = Vec::new();
for line in contents.lines() {
if line.contains(query) {
results.push(line);
}
}
results
}
pub fn search_case_insensitive<'a>(
query: &str,
contents: &'a str,
) -> Vec<&'a str> {
let query = query.to_lowercase();
let mut results = Vec::new();
for line in contents.lines() {
if line.to_lowercase().contains(&query) {
results.push(line);
}
}
results
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn case_sensitive() {
let query = "duct";
let contents = "\
Rust:
safe, fast, productive.
Pick three.
Duct tape.";
assert_eq!(vec!["safe, fast, productive."], search(query, contents));
}
#[test]
fn case_insensitive() {
let query = "rUsT";
let contents = "\
Rust:
safe, fast, productive.
Pick three.
Trust me.";
assert_eq!(
vec!["Rust:", "Trust me."],
search_case_insensitive(query, contents)
);
}
}
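The explicit loop in `search` above can also be written with iterator adaptors, as the book's later iterator chapter does; behavior is identical (`search_iter` is just an illustrative name for the rewrite):

```rust
// Iterator-based equivalent of the `search` loop above: same inputs,
// same results, no mutable accumulator.
pub fn search_iter<'a>(query: &str, contents: &'a str) -> Vec<&'a str> {
    contents
        .lines()
        .filter(|line| line.contains(query))
        .collect()
}

fn main() {
    let contents = "Rust:\nsafe, fast, productive.\nPick three.";
    assert_eq!(search_iter("duct", contents), vec!["safe, fast, productive."]);
    println!("ok");
}
```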
glue.rs
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
#![allow(unsafe_code)]
use app_units::Au;
use data::{NUM_THREADS, PerDocumentStyleData};
use env_logger;
use euclid::Size2D;
use gecko_bindings::bindings::{RawGeckoElementBorrowed, RawGeckoNodeBorrowed};
use gecko_bindings::bindings::{RawServoStyleSetBorrowed, RawServoStyleSetOwned, ServoNodeDataOwned};
use gecko_bindings::bindings::{RawServoStyleSetBorrowedMut, RawGeckoDocumentBorrowed};
use gecko_bindings::bindings::{RawServoStyleSheetBorrowed, ServoComputedValuesBorrowed};
use gecko_bindings::bindings::{RawServoStyleSheetStrong, ServoComputedValuesStrong};
use gecko_bindings::bindings::{ServoComputedValuesBorrowedOrNull, ServoDeclarationBlock};
use gecko_bindings::bindings::{ServoDeclarationBlockBorrowed, ServoDeclarationBlockStrong};
use gecko_bindings::bindings::{ThreadSafePrincipalHolder, ThreadSafeURIHolder, nsHTMLCSSStyleSheet};
use gecko_bindings::ptr::{GeckoArcPrincipal, GeckoArcURI};
use gecko_bindings::structs::{SheetParsingMode, nsIAtom};
use gecko_bindings::structs::ServoElementSnapshot;
use gecko_bindings::structs::nsRestyleHint;
use gecko_bindings::sugar::ownership::{FFIArcHelpers, HasArcFFI, HasBoxFFI};
use gecko_bindings::sugar::ownership::{HasFFI, HasSimpleFFI, Strong};
use gecko_string_cache::Atom;
use snapshot::GeckoElementSnapshot;
use std::mem::transmute;
use std::ptr;
use std::slice;
use std::str::from_utf8_unchecked;
use std::sync::{Arc, Mutex};
use std::sync::atomic::{AtomicBool, AtomicPtr, Ordering};
use style::arc_ptr_eq;
use style::context::{LocalStyleContextCreationInfo, ReflowGoal, SharedStyleContext};
use style::dom::{TDocument, TElement, TNode};
use style::error_reporting::StdoutErrorReporter;
use style::gecko_selector_impl::{GeckoSelectorImpl, PseudoElement};
use style::parallel;
use style::parser::ParserContextExtraData;
use style::properties::{ComputedValues, PropertyDeclarationBlock, parse_one_declaration};
use style::selector_impl::PseudoElementCascadeType;
use style::sequential;
use style::stylesheets::{Origin, Stylesheet};
use style::timer::Timer;
use traversal::RecalcStyleOnly;
use url::Url;
use wrapper::{DUMMY_BASE_URL, GeckoDocument, GeckoElement, GeckoNode, NonOpaqueStyleData};
/*
* For Gecko->Servo function calls, we need to redeclare the same signature that was declared in
* the C header in Gecko. In order to catch accidental mismatches, we run rust-bindgen against
* those signatures as well, giving us a second declaration of all the Servo_* functions in this
* crate. If there's a mismatch, LLVM will assert and abort, which is a rather awful thing to
* depend on but good enough for our purposes.
*/
#[no_mangle]
pub extern "C" fn Servo_Initialize() -> () {
// Enable standard Rust logging.
//
// See https://doc.rust-lang.org/log/env_logger/index.html for instructions.
env_logger::init().unwrap();
// Allocate our default computed values.
unsafe { ComputedValues::initialize(); }
}
#[no_mangle]
pub extern "C" fn Servo_Shutdown() -> () {
// Destroy our default computed values.
unsafe { ComputedValues::shutdown(); }
}
fn restyle_subtree(node: GeckoNode, raw_data: RawServoStyleSetBorrowedMut) {
debug_assert!(node.is_element() || node.is_text_node());
// Force the creation of our lazily-constructed initial computed values on
// the main thread, since it's not safe to call elsewhere.
//
// FIXME(bholley): this should move into Servo_Initialize as soon as we get
// rid of the HackilyFindSomeDeviceContext stuff that happens during
// initial_values computation, since that stuff needs to be called further
// along in startup than the sensible place to call Servo_Initialize.
ComputedValues::initial_values();
// The stylist consumes stylesheets lazily.
let per_doc_data = PerDocumentStyleData::from_ffi_mut(raw_data);
per_doc_data.flush_stylesheets();
let local_context_data =
LocalStyleContextCreationInfo::new(per_doc_data.new_animations_sender.clone());
let shared_style_context = SharedStyleContext {
viewport_size: Size2D::new(Au(0), Au(0)),
screen_size_changed: false,
generation: 0,
goal: ReflowGoal::ForScriptQuery,
stylist: per_doc_data.stylist.clone(),
running_animations: per_doc_data.running_animations.clone(),
expired_animations: per_doc_data.expired_animations.clone(),
error_reporter: Box::new(StdoutErrorReporter),
local_context_creation_data: Mutex::new(local_context_data),
timer: Timer::new(),
};
// We ensure this is true before calling Servo_RestyleSubtree()
debug_assert!(node.is_dirty() || node.has_dirty_descendants());
if per_doc_data.num_threads == 1 || per_doc_data.work_queue.is_none() {
sequential::traverse_dom::<GeckoNode, RecalcStyleOnly>(node, &shared_style_context);
} else {
parallel::traverse_dom::<GeckoNode, RecalcStyleOnly>(node, &shared_style_context,
per_doc_data.work_queue.as_mut().unwrap());
}
}
#[no_mangle]
pub extern "C" fn Servo_RestyleSubtree(node: RawGeckoNodeBorrowed,
raw_data: RawServoStyleSetBorrowedMut) -> () {
let node = GeckoNode(node);
restyle_subtree(node, raw_data);
}
#[no_mangle]
pub extern "C" fn Servo_RestyleDocument(doc: RawGeckoDocumentBorrowed, raw_data: RawServoStyleSetBorrowedMut) -> () {
let document = GeckoDocument(doc);
let node = match document.root_node() {
Some(x) => x,
None => return,
};
restyle_subtree(node, raw_data);
}
#[no_mangle]
pub extern "C" fn Servo_StyleWorkerThreadCount() -> u32 {
*NUM_THREADS as u32
}
#[no_mangle]
pub extern "C" fn Servo_NodeData_Drop(data: ServoNodeDataOwned) -> () {
let _ = data.into_box::<NonOpaqueStyleData>();
}
#[no_mangle]
pub extern "C" fn Servo_StyleSheet_FromUTF8Bytes(bytes: *const u8,
length: u32,
mode: SheetParsingMode,
base_bytes: *const u8,
base_length: u32,
base: *mut ThreadSafeURIHolder,
referrer: *mut ThreadSafeURIHolder,
principal: *mut ThreadSafePrincipalHolder)
-> RawServoStyleSheetStrong {
let input = unsafe { from_utf8_unchecked(slice::from_raw_parts(bytes, length as usize)) };
let origin = match mode {
SheetParsingMode::eAuthorSheetFeatures => Origin::Author,
SheetParsingMode::eUserSheetFeatures => Origin::User,
SheetParsingMode::eAgentSheetFeatures => Origin::UserAgent,
};
let base_str = unsafe { from_utf8_unchecked(slice::from_raw_parts(base_bytes, base_length as usize)) };
let url = Url::parse(base_str).unwrap();
let extra_data = ParserContextExtraData {
base: Some(GeckoArcURI::new(base)),
referrer: Some(GeckoArcURI::new(referrer)),
principal: Some(GeckoArcPrincipal::new(principal)),
};
let sheet = Arc::new(Stylesheet::from_str(input, url, origin, Box::new(StdoutErrorReporter),
extra_data));
unsafe {
transmute(sheet)
}
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_AppendStyleSheet(raw_data: RawServoStyleSetBorrowedMut,
raw_sheet: RawServoStyleSheetBorrowed) {
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
let sheet = HasArcFFI::as_arc(&raw_sheet);
data.stylesheets.retain(|x| !arc_ptr_eq(x, sheet));
data.stylesheets.push(sheet.clone());
data.stylesheets_changed = true;
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_PrependStyleSheet(raw_data: RawServoStyleSetBorrowedMut,
raw_sheet: RawServoStyleSheetBorrowed) {
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
let sheet = HasArcFFI::as_arc(&raw_sheet);
data.stylesheets.retain(|x| !arc_ptr_eq(x, sheet));
data.stylesheets.insert(0, sheet.clone());
data.stylesheets_changed = true;
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_InsertStyleSheetBefore(raw_data: RawServoStyleSetBorrowedMut,
raw_sheet: RawServoStyleSheetBorrowed,
raw_reference: RawServoStyleSheetBorrowed) {
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
let sheet = HasArcFFI::as_arc(&raw_sheet);
let reference = HasArcFFI::as_arc(&raw_reference);
data.stylesheets.retain(|x| !arc_ptr_eq(x, sheet));
let index = data.stylesheets.iter().position(|x| arc_ptr_eq(x, reference)).unwrap();
data.stylesheets.insert(index, sheet.clone());
data.stylesheets_changed = true;
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_RemoveStyleSheet(raw_data: RawServoStyleSetBorrowedMut,
raw_sheet: RawServoStyleSheetBorrowed) {
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
let sheet = HasArcFFI::as_arc(&raw_sheet);
data.stylesheets.retain(|x| !arc_ptr_eq(x, sheet));
data.stylesheets_changed = true;
}
#[no_mangle]
pub extern "C" fn Servo_StyleSheet_HasRules(raw_sheet: RawServoStyleSheetBorrowed) -> bool {
!Stylesheet::as_arc(&raw_sheet).rules.is_empty()
}
#[no_mangle]
pub extern "C" fn Servo_StyleSheet_AddRef(sheet: RawServoStyleSheetBorrowed) -> () {
unsafe { Stylesheet::addref(sheet) };
}
#[no_mangle]
pub extern "C" fn Servo_StyleSheet_Release(sheet: RawServoStyleSheetBorrowed) -> () {
unsafe { Stylesheet::release(sheet) };
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_Get(node: RawGeckoNodeBorrowed)
-> ServoComputedValuesStrong {
let node = GeckoNode(node);
let arc_cv = match node.borrow_data().map_or(None, |data| data.style.clone()) {
Some(style) => style,
None => {
// FIXME(bholley): This case subverts the intended semantics of this
// function, and exists only to make stylo builds more robust corner-
// cases where Gecko wants the style for a node that Servo never
// traversed. We should remove this as soon as possible.
error!("stylo: encountered unstyled node, substituting default values.");
Arc::new(ComputedValues::initial_values().clone())
},
};
arc_cv.into_strong()
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_GetForAnonymousBox(parent_style_or_null: ServoComputedValuesBorrowedOrNull,
pseudo_tag: *mut nsIAtom,
raw_data: RawServoStyleSetBorrowedMut)
-> ServoComputedValuesStrong {
// The stylist consumes stylesheets lazily.
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
data.flush_stylesheets();
let atom = Atom::from(pseudo_tag);
let pseudo = PseudoElement::from_atom_unchecked(atom, /* anon_box = */ true);
let maybe_parent = parent_style_or_null.as_arc_opt();
let new_computed = data.stylist.precomputed_values_for_pseudo(&pseudo, maybe_parent);
new_computed.map_or(Strong::null(), |c| c.into_strong())
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_GetForPseudoElement(parent_style: ServoComputedValuesBorrowed,
match_element: RawGeckoElementBorrowed,
pseudo_tag: *mut nsIAtom,
raw_data: RawServoStyleSetBorrowedMut,
is_probe: bool)
-> ServoComputedValuesStrong {
debug_assert!(!(match_element as *const _).is_null());
let parent_or_null = || {
if is_probe {
Strong::null()
} else {
ComputedValues::as_arc(&parent_style).clone().into_strong()
}
};
let atom = Atom::from(pseudo_tag);
let pseudo = PseudoElement::from_atom_unchecked(atom, /* anon_box = */ false);
// The stylist consumes stylesheets lazily.
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
data.flush_stylesheets();
let element = GeckoElement(match_element);
match GeckoSelectorImpl::pseudo_element_cascade_type(&pseudo) {
PseudoElementCascadeType::Eager => {
let node = element.as_node();
let maybe_computed = node.borrow_data()
.and_then(|data| {
data.per_pseudo.get(&pseudo).map(|c| c.clone())
});
maybe_computed.map_or_else(parent_or_null, FFIArcHelpers::into_strong)
}
PseudoElementCascadeType::Lazy => {
let parent = ComputedValues::as_arc(&parent_style);
data.stylist
.lazily_compute_pseudo_element_style(&element, &pseudo, parent)
.map_or_else(parent_or_null, FFIArcHelpers::into_strong)
}
PseudoElementCascadeType::Precomputed => {
unreachable!("Anonymous pseudo found in \
Servo_GetComputedValuesForPseudoElement");
}
}
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_Inherit(parent_style: ServoComputedValuesBorrowedOrNull)
-> ServoComputedValuesStrong {
let style = if parent_style.is_null() {
Arc::new(ComputedValues::initial_values().clone())
} else {
ComputedValues::inherit_from(parent_style.as_arc())
};
style.into_strong()
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_AddRef(ptr: ServoComputedValuesBorrowed) -> () {
unsafe { ComputedValues::addref(ptr) };
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_Release(ptr: ServoComputedValuesBorrowed) -> () {
unsafe { ComputedValues::release(ptr) };
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_Init() -> RawServoStyleSetOwned {
let data = Box::new(PerDocumentStyleData::new());
data.into_ffi()
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_Drop(data: RawServoStyleSetOwned) -> () {
let _ = data.into_box::<PerDocumentStyleData>();
}
pub struct GeckoDeclarationBlock {
pub declarations: Option<Arc<PropertyDeclarationBlock>>,
// XXX The following two fields are made atomic to work around the
// ownership system so that they can be changed inside a shared
// instance. It wouldn't provide safety as Rust usually promises,
// but it is fine as far as we only access them in a single thread.
// If we need to access them in different threads, we would need
// to redesign how it works with MiscContainer in Gecko side.
pub cache: AtomicPtr<nsHTMLCSSStyleSheet>,
pub immutable: AtomicBool,
}
unsafe impl HasFFI for GeckoDeclarationBlock {
type FFIType = ServoDeclarationBlock;
}
unsafe impl HasArcFFI for GeckoDeclarationBlock {}
#[no_mangle]
pub extern "C" fn Servo_ParseStyleAttribute(bytes: *const u8, length: u32,
cache: *mut nsHTMLCSSStyleSheet)
-> ServoDeclarationBlockStrong {
let value = unsafe { from_utf8_unchecked(slice::from_raw_parts(bytes, length as usize)) };
Arc::new(GeckoDeclarationBlock {
declarations: GeckoElement::parse_style_attribute(value).map(Arc::new),
cache: AtomicPtr::new(cache),
immutable: AtomicBool::new(false),
}).into_strong()
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_AddRef(declarations: ServoDeclarationBlockBorrowed) {
unsafe { GeckoDeclarationBlock::addref(declarations) };
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_Release(declarations: ServoDeclarationBlockBorrowed) {
unsafe { GeckoDeclarationBlock::release(declarations) };
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_GetCache(declarations: ServoDeclarationBlockBorrowed)
-> *mut nsHTMLCSSStyleSheet {
GeckoDeclarationBlock::as_arc(&declarations).cache.load(Ordering::Relaxed)
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_SetImmutable(declarations: ServoDeclarationBlockBorrowed) {
GeckoDeclarationBlock::as_arc(&declarations).immutable.store(true, Ordering::Relaxed)
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_ClearCachePointer(declarations: ServoDeclarationBlockBorrowed) {
GeckoDeclarationBlock::as_arc(&declarations).cache.store(ptr::null_mut(), Ordering::Relaxed)
}
#[no_mangle]
pub extern "C" fn Servo_CSSSupports(property: *const u8, property_length: u32,
value: *const u8, value_length: u32) -> bool {
let property = unsafe { from_utf8_unchecked(slice::from_raw_parts(property, property_length as usize)) };
let value = unsafe { from_utf8_unchecked(slice::from_raw_parts(value, value_length as usize)) };
let base_url = &*DUMMY_BASE_URL;
let extra_data = ParserContextExtraData::default();
match parse_one_declaration(&property, &value, &base_url, Box::new(StdoutErrorReporter), extra_data) {
Ok(decls) => !decls.is_empty(),
Err(()) => false,
}
}
#[no_mangle]
pub extern "C" fn Servo_ComputeRestyleHint(element: RawGeckoElementBorrowed,
snapshot: *mut ServoElementSnapshot,
raw_data: RawServoStyleSetBorrowed) -> nsRestyleHint {
let per_doc_data = PerDocumentStyleData::from_ffi(raw_data);
let snapshot = unsafe { GeckoElementSnapshot::from_raw(snapshot) };
let element = GeckoElement(element);
// NB: This involves an FFI call, we can get rid of it easily if needed.
let current_state = element.get_state();
let hint = per_doc_data.stylist
.compute_restyle_hint(&element, &snapshot,
current_state);
// NB: Binary representations match.
unsafe { transmute(hint.bits() as u32) }
}
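The `cache`/`immutable` fields of `GeckoDeclarationBlock` above use relaxed atomics to mutate state behind a shared reference, exactly as its comment explains. A minimal standalone sketch of that interior-mutability pattern (`Block` and `freeze` are illustrative names, not Gecko's):

```rust
use std::ptr;
use std::sync::atomic::{AtomicBool, AtomicPtr, Ordering};

// Relaxed atomics let a shared, otherwise-immutable value update its cache
// pointer and immutability flag in place, without &mut access.
struct Block {
    cache: AtomicPtr<u32>, // stands in for *mut nsHTMLCSSStyleSheet
    immutable: AtomicBool,
}

fn freeze(block: &Block) -> bool {
    // Analogue of Servo_DeclarationBlock_SetImmutable: mutate through &Block.
    block.immutable.store(true, Ordering::Relaxed);
    block.immutable.load(Ordering::Relaxed)
}

fn main() {
    let block = Block {
        cache: AtomicPtr::new(ptr::null_mut()),
        immutable: AtomicBool::new(false),
    };
    assert!(freeze(&block));
    // Analogue of Servo_DeclarationBlock_ClearCachePointer:
    block.cache.store(ptr::null_mut(), Ordering::Relaxed);
    assert!(block.cache.load(Ordering::Relaxed).is_null());
    println!("ok");
}
```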
data.stylesheets.retain(|x|!arc_ptr_eq(x, sheet));
data.stylesheets.insert(0, sheet.clone());
data.stylesheets_changed = true;
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_InsertStyleSheetBefore(raw_data: RawServoStyleSetBorrowedMut,
raw_sheet: RawServoStyleSheetBorrowed,
raw_reference: RawServoStyleSheetBorrowed) {
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
let sheet = HasArcFFI::as_arc(&raw_sheet);
let reference = HasArcFFI::as_arc(&raw_reference);
data.stylesheets.retain(|x|!arc_ptr_eq(x, sheet));
let index = data.stylesheets.iter().position(|x| arc_ptr_eq(x, reference)).unwrap();
data.stylesheets.insert(index, sheet.clone());
data.stylesheets_changed = true;
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_RemoveStyleSheet(raw_data: RawServoStyleSetBorrowedMut,
raw_sheet: RawServoStyleSheetBorrowed) {
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
let sheet = HasArcFFI::as_arc(&raw_sheet);
data.stylesheets.retain(|x|!arc_ptr_eq(x, sheet));
data.stylesheets_changed = true;
}
#[no_mangle]
pub extern "C" fn Servo_StyleSheet_HasRules(raw_sheet: RawServoStyleSheetBorrowed) -> bool {
!Stylesheet::as_arc(&raw_sheet).rules.is_empty()
}
#[no_mangle]
pub extern "C" fn Servo_StyleSheet_AddRef(sheet: RawServoStyleSheetBorrowed) -> () {
unsafe { Stylesheet::addref(sheet) };
}
#[no_mangle]
pub extern "C" fn Servo_StyleSheet_Release(sheet: RawServoStyleSheetBorrowed) -> () {
unsafe { Stylesheet::release(sheet) };
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_Get(node: RawGeckoNodeBorrowed)
-> ServoComputedValuesStrong {
let node = GeckoNode(node);
let arc_cv = match node.borrow_data().map_or(None, |data| data.style.clone()) {
Some(style) => style,
None => {
// FIXME(bholley): This case subverts the intended semantics of this
// function, and exists only to make stylo builds more robust corner-
// cases where Gecko wants the style for a node that Servo never
// traversed. We should remove this as soon as possible.
error!("stylo: encountered unstyled node, substituting default values.");
Arc::new(ComputedValues::initial_values().clone())
},
};
arc_cv.into_strong()
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_GetForAnonymousBox(parent_style_or_null: ServoComputedValuesBorrowedOrNull,
pseudo_tag: *mut nsIAtom,
raw_data: RawServoStyleSetBorrowedMut)
-> ServoComputedValuesStrong {
// The stylist consumes stylesheets lazily.
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
data.flush_stylesheets();
let atom = Atom::from(pseudo_tag);
let pseudo = PseudoElement::from_atom_unchecked(atom, /* anon_box = */ true);
let maybe_parent = parent_style_or_null.as_arc_opt();
let new_computed = data.stylist.precomputed_values_for_pseudo(&pseudo, maybe_parent);
new_computed.map_or(Strong::null(), |c| c.into_strong())
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_GetForPseudoElement(parent_style: ServoComputedValuesBorrowed,
match_element: RawGeckoElementBorrowed,
pseudo_tag: *mut nsIAtom,
raw_data: RawServoStyleSetBorrowedMut,
is_probe: bool)
-> ServoComputedValuesStrong {
debug_assert!(!(match_element as *const _).is_null());
let parent_or_null = || {
if is_probe {
Strong::null()
} else {
ComputedValues::as_arc(&parent_style).clone().into_strong()
}
};
let atom = Atom::from(pseudo_tag);
let pseudo = PseudoElement::from_atom_unchecked(atom, /* anon_box = */ false);
// The stylist consumes stylesheets lazily.
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
data.flush_stylesheets();
let element = GeckoElement(match_element);
match GeckoSelectorImpl::pseudo_element_cascade_type(&pseudo) {
PseudoElementCascadeType::Eager => {
let node = element.as_node();
let maybe_computed = node.borrow_data()
.and_then(|data| {
data.per_pseudo.get(&pseudo).map(|c| c.clone())
});
maybe_computed.map_or_else(parent_or_null, FFIArcHelpers::into_strong)
}
PseudoElementCascadeType::Lazy => {
let parent = ComputedValues::as_arc(&parent_style);
data.stylist
.lazily_compute_pseudo_element_style(&element, &pseudo, parent)
.map_or_else(parent_or_null, FFIArcHelpers::into_strong)
}
PseudoElementCascadeType::Precomputed => {
unreachable!("Anonymous pseudo found in \
Servo_GetComputedValuesForPseudoElement");
}
}
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_Inherit(parent_style: ServoComputedValuesBorrowedOrNull)
-> ServoComputedValuesStrong {
let style = if parent_style.is_null() {
Arc::new(ComputedValues::initial_values().clone())
} else {
ComputedValues::inherit_from(parent_style.as_arc())
};
style.into_strong()
}
#[no_mangle]
pub extern "C" fn | (ptr: ServoComputedValuesBorrowed) -> () {
unsafe { ComputedValues::addref(ptr) };
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_Release(ptr: ServoComputedValuesBorrowed) -> () {
unsafe { ComputedValues::release(ptr) };
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_Init() -> RawServoStyleSetOwned {
let data = Box::new(PerDocumentStyleData::new());
data.into_ffi()
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_Drop(data: RawServoStyleSetOwned) -> () {
let _ = data.into_box::<PerDocumentStyleData>();
}
pub struct GeckoDeclarationBlock {
pub declarations: Option<Arc<PropertyDeclarationBlock>>,
// XXX The following two fields are made atomic to work around the
// ownership system so that they can be changed inside a shared
// instance. It wouldn't provide safety as Rust usually promises,
// but it is fine as far as we only access them in a single thread.
// If we need to access them in different threads, we would need
// to redesign how it works with MiscContainer in Gecko side.
pub cache: AtomicPtr<nsHTMLCSSStyleSheet>,
pub immutable: AtomicBool,
}
unsafe impl HasFFI for GeckoDeclarationBlock {
type FFIType = ServoDeclarationBlock;
}
unsafe impl HasArcFFI for GeckoDeclarationBlock {}
#[no_mangle]
pub extern "C" fn Servo_ParseStyleAttribute(bytes: *const u8, length: u32,
cache: *mut nsHTMLCSSStyleSheet)
-> ServoDeclarationBlockStrong {
let value = unsafe { from_utf8_unchecked(slice::from_raw_parts(bytes, length as usize)) };
Arc::new(GeckoDeclarationBlock {
declarations: GeckoElement::parse_style_attribute(value).map(Arc::new),
cache: AtomicPtr::new(cache),
immutable: AtomicBool::new(false),
}).into_strong()
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_AddRef(declarations: ServoDeclarationBlockBorrowed) {
unsafe { GeckoDeclarationBlock::addref(declarations) };
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_Release(declarations: ServoDeclarationBlockBorrowed) {
unsafe { GeckoDeclarationBlock::release(declarations) };
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_GetCache(declarations: ServoDeclarationBlockBorrowed)
-> *mut nsHTMLCSSStyleSheet {
GeckoDeclarationBlock::as_arc(&declarations).cache.load(Ordering::Relaxed)
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_SetImmutable(declarations: ServoDeclarationBlockBorrowed) {
GeckoDeclarationBlock::as_arc(&declarations).immutable.store(true, Ordering::Relaxed)
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_ClearCachePointer(declarations: ServoDeclarationBlockBorrowed) {
GeckoDeclarationBlock::as_arc(&declarations).cache.store(ptr::null_mut(), Ordering::Relaxed)
}
#[no_mangle]
pub extern "C" fn Servo_CSSSupports(property: *const u8, property_length: u32,
value: *const u8, value_length: u32) -> bool {
let property = unsafe { from_utf8_unchecked(slice::from_raw_parts(property, property_length as usize)) };
let value = unsafe { from_utf8_unchecked(slice::from_raw_parts(value, value_length as usize)) };
let base_url = &*DUMMY_BASE_URL;
let extra_data = ParserContextExtraData::default();
match parse_one_declaration(&property, &value, &base_url, Box::new(StdoutErrorReporter), extra_data) {
Ok(decls) =>!decls.is_empty(),
Err(()) => false,
}
}
#[no_mangle]
pub extern "C" fn Servo_ComputeRestyleHint(element: RawGeckoElementBorrowed,
snapshot: *mut ServoElementSnapshot,
raw_data: RawServoStyleSetBorrowed) -> nsRestyleHint {
let per_doc_data = PerDocumentStyleData::from_ffi(raw_data);
let snapshot = unsafe { GeckoElementSnapshot::from_raw(snapshot) };
let element = GeckoElement(element);
// NB: This involves an FFI call, we can get rid of it easily if needed.
let current_state = element.get_state();
let hint = per_doc_data.stylist
.compute_restyle_hint(&element, &snapshot,
current_state);
// NB: Binary representations match.
unsafe { transmute(hint.bits() as u32) }
}
| Servo_ComputedValues_AddRef | identifier_name |
// glue.rs
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
#![allow(unsafe_code)]
use app_units::Au;
use data::{NUM_THREADS, PerDocumentStyleData};
use env_logger;
use euclid::Size2D;
use gecko_bindings::bindings::{RawGeckoElementBorrowed, RawGeckoNodeBorrowed};
use gecko_bindings::bindings::{RawServoStyleSetBorrowed, RawServoStyleSetOwned, ServoNodeDataOwned};
use gecko_bindings::bindings::{RawServoStyleSetBorrowedMut, RawGeckoDocumentBorrowed};
use gecko_bindings::bindings::{RawServoStyleSheetBorrowed, ServoComputedValuesBorrowed};
use gecko_bindings::bindings::{RawServoStyleSheetStrong, ServoComputedValuesStrong};
use gecko_bindings::bindings::{ServoComputedValuesBorrowedOrNull, ServoDeclarationBlock};
use gecko_bindings::bindings::{ServoDeclarationBlockBorrowed, ServoDeclarationBlockStrong};
use gecko_bindings::bindings::{ThreadSafePrincipalHolder, ThreadSafeURIHolder, nsHTMLCSSStyleSheet};
use gecko_bindings::ptr::{GeckoArcPrincipal, GeckoArcURI};
use gecko_bindings::structs::{SheetParsingMode, nsIAtom};
use gecko_bindings::structs::ServoElementSnapshot;
use gecko_bindings::structs::nsRestyleHint;
use gecko_bindings::sugar::ownership::{FFIArcHelpers, HasArcFFI, HasBoxFFI};
use gecko_bindings::sugar::ownership::{HasFFI, HasSimpleFFI, Strong};
use gecko_string_cache::Atom;
use snapshot::GeckoElementSnapshot;
use std::mem::transmute;
use std::ptr;
use std::slice;
use std::str::from_utf8_unchecked;
use std::sync::{Arc, Mutex};
use std::sync::atomic::{AtomicBool, AtomicPtr, Ordering};
use style::arc_ptr_eq;
use style::context::{LocalStyleContextCreationInfo, ReflowGoal, SharedStyleContext};
use style::dom::{TDocument, TElement, TNode};
use style::error_reporting::StdoutErrorReporter;
use style::gecko_selector_impl::{GeckoSelectorImpl, PseudoElement};
use style::parallel;
use style::parser::ParserContextExtraData;
use style::properties::{ComputedValues, PropertyDeclarationBlock, parse_one_declaration};
use style::selector_impl::PseudoElementCascadeType;
use style::sequential;
use style::stylesheets::{Origin, Stylesheet};
use style::timer::Timer;
use traversal::RecalcStyleOnly;
use url::Url;
use wrapper::{DUMMY_BASE_URL, GeckoDocument, GeckoElement, GeckoNode, NonOpaqueStyleData};
/*
* For Gecko->Servo function calls, we need to redeclare the same signature that was declared in
* the C header in Gecko. In order to catch accidental mismatches, we run rust-bindgen against
* those signatures as well, giving us a second declaration of all the Servo_* functions in this
* crate. If there's a mismatch, LLVM will assert and abort, which is a rather awful thing to
* depend on but good enough for our purposes.
*/
#[no_mangle]
pub extern "C" fn Servo_Initialize() -> () {
// Enable standard Rust logging.
//
// See https://doc.rust-lang.org/log/env_logger/index.html for instructions.
env_logger::init().unwrap();
// Allocate our default computed values.
unsafe { ComputedValues::initialize(); }
}
#[no_mangle]
pub extern "C" fn Servo_Shutdown() -> () {
// Destroy our default computed values.
unsafe { ComputedValues::shutdown(); }
}
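Every exported function in this file follows the same FFI surface: `#[no_mangle]` keeps the symbol name stable and `extern "C"` pins the calling convention so Gecko's C++ side can call it, with rust-bindgen cross-checking the signatures as described above. A minimal, self-contained sketch of that export shape (the function name here is hypothetical, not part of the Servo API):

```rust
// Hypothetical, minimal example of the export pattern used throughout this
// file: `#[no_mangle]` preserves the symbol name, `extern "C"` fixes the ABI.
#[no_mangle]
pub extern "C" fn Demo_Add(a: u32, b: u32) -> u32 {
    // Plain Rust body; callable from C/C++ once linked.
    a + b
}
```

The body is ordinary Rust; only the attribute and ABI annotation distinguish it from an internal function.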
fn restyle_subtree(node: GeckoNode, raw_data: RawServoStyleSetBorrowedMut) {
debug_assert!(node.is_element() || node.is_text_node());
// Force the creation of our lazily-constructed initial computed values on
// the main thread, since it's not safe to call elsewhere.
//
// FIXME(bholley): this should move into Servo_Initialize as soon as we get
// rid of the HackilyFindSomeDeviceContext stuff that happens during
// initial_values computation, since that stuff needs to be called further
// along in startup than the sensible place to call Servo_Initialize.
ComputedValues::initial_values();
// The stylist consumes stylesheets lazily.
let per_doc_data = PerDocumentStyleData::from_ffi_mut(raw_data);
per_doc_data.flush_stylesheets();
let local_context_data =
LocalStyleContextCreationInfo::new(per_doc_data.new_animations_sender.clone());
let shared_style_context = SharedStyleContext {
viewport_size: Size2D::new(Au(0), Au(0)),
screen_size_changed: false,
generation: 0,
goal: ReflowGoal::ForScriptQuery,
stylist: per_doc_data.stylist.clone(),
running_animations: per_doc_data.running_animations.clone(),
expired_animations: per_doc_data.expired_animations.clone(),
error_reporter: Box::new(StdoutErrorReporter),
local_context_creation_data: Mutex::new(local_context_data),
timer: Timer::new(),
};
// We ensure this is true before calling Servo_RestyleSubtree()
debug_assert!(node.is_dirty() || node.has_dirty_descendants());
if per_doc_data.num_threads == 1 || per_doc_data.work_queue.is_none() {
sequential::traverse_dom::<GeckoNode, RecalcStyleOnly>(node, &shared_style_context);
} else {
parallel::traverse_dom::<GeckoNode, RecalcStyleOnly>(node, &shared_style_context,
per_doc_data.work_queue.as_mut().unwrap());
}
}
#[no_mangle]
pub extern "C" fn Servo_RestyleSubtree(node: RawGeckoNodeBorrowed,
raw_data: RawServoStyleSetBorrowedMut) -> () {
let node = GeckoNode(node);
restyle_subtree(node, raw_data);
}
#[no_mangle]
pub extern "C" fn Servo_RestyleDocument(doc: RawGeckoDocumentBorrowed, raw_data: RawServoStyleSetBorrowedMut) -> () {
let document = GeckoDocument(doc);
let node = match document.root_node() {
Some(x) => x,
None => return,
};
restyle_subtree(node, raw_data);
}
#[no_mangle]
pub extern "C" fn Servo_StyleWorkerThreadCount() -> u32 {
*NUM_THREADS as u32
}
#[no_mangle]
pub extern "C" fn Servo_NodeData_Drop(data: ServoNodeDataOwned) -> () {
let _ = data.into_box::<NonOpaqueStyleData>();
}
#[no_mangle]
pub extern "C" fn Servo_StyleSheet_FromUTF8Bytes(bytes: *const u8,
length: u32,
mode: SheetParsingMode,
base_bytes: *const u8,
base_length: u32,
base: *mut ThreadSafeURIHolder,
referrer: *mut ThreadSafeURIHolder,
principal: *mut ThreadSafePrincipalHolder)
-> RawServoStyleSheetStrong {
let input = unsafe { from_utf8_unchecked(slice::from_raw_parts(bytes, length as usize)) };
let origin = match mode {
SheetParsingMode::eAuthorSheetFeatures => Origin::Author,
SheetParsingMode::eUserSheetFeatures => Origin::User,
SheetParsingMode::eAgentSheetFeatures => Origin::UserAgent,
};
let base_str = unsafe { from_utf8_unchecked(slice::from_raw_parts(base_bytes, base_length as usize)) };
let url = Url::parse(base_str).unwrap();
let extra_data = ParserContextExtraData {
base: Some(GeckoArcURI::new(base)),
referrer: Some(GeckoArcURI::new(referrer)),
principal: Some(GeckoArcPrincipal::new(principal)),
};
let sheet = Arc::new(Stylesheet::from_str(input, url, origin, Box::new(StdoutErrorReporter),
extra_data));
unsafe {
transmute(sheet)
}
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_AppendStyleSheet(raw_data: RawServoStyleSetBorrowedMut,
raw_sheet: RawServoStyleSheetBorrowed) {
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
let sheet = HasArcFFI::as_arc(&raw_sheet);
data.stylesheets.retain(|x| !arc_ptr_eq(x, sheet));
data.stylesheets.push(sheet.clone());
data.stylesheets_changed = true;
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_PrependStyleSheet(raw_data: RawServoStyleSetBorrowedMut,
raw_sheet: RawServoStyleSheetBorrowed) {
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
let sheet = HasArcFFI::as_arc(&raw_sheet);
data.stylesheets.retain(|x| !arc_ptr_eq(x, sheet));
data.stylesheets.insert(0, sheet.clone());
data.stylesheets_changed = true;
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_InsertStyleSheetBefore(raw_data: RawServoStyleSetBorrowedMut,
raw_sheet: RawServoStyleSheetBorrowed,
raw_reference: RawServoStyleSheetBorrowed) {
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
let sheet = HasArcFFI::as_arc(&raw_sheet);
let reference = HasArcFFI::as_arc(&raw_reference);
data.stylesheets.retain(|x| !arc_ptr_eq(x, sheet));
let index = data.stylesheets.iter().position(|x| arc_ptr_eq(x, reference)).unwrap();
data.stylesheets.insert(index, sheet.clone());
data.stylesheets_changed = true;
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_RemoveStyleSheet(raw_data: RawServoStyleSetBorrowedMut,
raw_sheet: RawServoStyleSheetBorrowed) {
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
let sheet = HasArcFFI::as_arc(&raw_sheet);
data.stylesheets.retain(|x| !arc_ptr_eq(x, sheet));
data.stylesheets_changed = true;
}
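The four stylesheet-list functions above all share one pattern: first drop any existing copy of the sheet by pointer identity (`retain` with `arc_ptr_eq`), then re-insert it at the target position (end, front, or before a reference sheet). A self-contained sketch of that pattern, using plain `Rc` pointer identity in place of `arc_ptr_eq` (the function name is illustrative, not from this file):

```rust
use std::rc::Rc;

// Remove any existing copy of `item` (compared by pointer identity, like
// `arc_ptr_eq` above), then re-insert it at `index`.
fn reinsert_at<T>(list: &mut Vec<Rc<T>>, item: &Rc<T>, index: usize) {
    list.retain(|x| !Rc::ptr_eq(x, item));
    list.insert(index, item.clone());
}
```

Doing the removal unconditionally makes each entry point idempotent: appending an already-registered sheet moves it rather than duplicating it.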
#[no_mangle]
pub extern "C" fn Servo_StyleSheet_HasRules(raw_sheet: RawServoStyleSheetBorrowed) -> bool {
!Stylesheet::as_arc(&raw_sheet).rules.is_empty()
}
#[no_mangle]
pub extern "C" fn Servo_StyleSheet_AddRef(sheet: RawServoStyleSheetBorrowed) -> () {
unsafe { Stylesheet::addref(sheet) };
}
#[no_mangle]
pub extern "C" fn Servo_StyleSheet_Release(sheet: RawServoStyleSheetBorrowed) -> () {
unsafe { Stylesheet::release(sheet) };
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_Get(node: RawGeckoNodeBorrowed)
-> ServoComputedValuesStrong {
let node = GeckoNode(node);
let arc_cv = match node.borrow_data().map_or(None, |data| data.style.clone()) {
Some(style) => style,
None => {
// FIXME(bholley): This case subverts the intended semantics of this
// function, and exists only to make stylo builds more robust in corner
// cases where Gecko wants the style for a node that Servo never
// traversed. We should remove this as soon as possible.
error!("stylo: encountered unstyled node, substituting default values.");
Arc::new(ComputedValues::initial_values().clone())
},
};
arc_cv.into_strong()
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_GetForAnonymousBox(parent_style_or_null: ServoComputedValuesBorrowedOrNull,
pseudo_tag: *mut nsIAtom,
raw_data: RawServoStyleSetBorrowedMut)
-> ServoComputedValuesStrong {
// The stylist consumes stylesheets lazily.
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
data.flush_stylesheets();
let atom = Atom::from(pseudo_tag);
let pseudo = PseudoElement::from_atom_unchecked(atom, /* anon_box = */ true);
let maybe_parent = parent_style_or_null.as_arc_opt();
let new_computed = data.stylist.precomputed_values_for_pseudo(&pseudo, maybe_parent);
new_computed.map_or(Strong::null(), |c| c.into_strong())
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_GetForPseudoElement(parent_style: ServoComputedValuesBorrowed,
match_element: RawGeckoElementBorrowed,
pseudo_tag: *mut nsIAtom,
raw_data: RawServoStyleSetBorrowedMut,
is_probe: bool)
-> ServoComputedValuesStrong {
debug_assert!(!(match_element as *const _).is_null());
let parent_or_null = || {
if is_probe {
Strong::null()
} else {
ComputedValues::as_arc(&parent_style).clone().into_strong()
}
};
let atom = Atom::from(pseudo_tag);
let pseudo = PseudoElement::from_atom_unchecked(atom, /* anon_box = */ false);
// The stylist consumes stylesheets lazily.
let data = PerDocumentStyleData::from_ffi_mut(raw_data);
data.flush_stylesheets();
let element = GeckoElement(match_element);
match GeckoSelectorImpl::pseudo_element_cascade_type(&pseudo) {
PseudoElementCascadeType::Eager => {
let node = element.as_node();
let maybe_computed = node.borrow_data()
.and_then(|data| {
data.per_pseudo.get(&pseudo).map(|c| c.clone())
});
maybe_computed.map_or_else(parent_or_null, FFIArcHelpers::into_strong)
}
PseudoElementCascadeType::Lazy => {
let parent = ComputedValues::as_arc(&parent_style);
data.stylist
.lazily_compute_pseudo_element_style(&element, &pseudo, parent)
.map_or_else(parent_or_null, FFIArcHelpers::into_strong)
}
PseudoElementCascadeType::Precomputed => {
unreachable!("Anonymous pseudo found in \
Servo_GetComputedValuesForPseudoElement");
}
}
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_Inherit(parent_style: ServoComputedValuesBorrowedOrNull)
-> ServoComputedValuesStrong {
let style = if parent_style.is_null() {
Arc::new(ComputedValues::initial_values().clone())
} else {
ComputedValues::inherit_from(parent_style.as_arc())
};
style.into_strong()
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_AddRef(ptr: ServoComputedValuesBorrowed) -> () {
unsafe { ComputedValues::addref(ptr) };
}
#[no_mangle]
pub extern "C" fn Servo_ComputedValues_Release(ptr: ServoComputedValuesBorrowed) -> () {
unsafe { ComputedValues::release(ptr) };
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_Init() -> RawServoStyleSetOwned {
let data = Box::new(PerDocumentStyleData::new());
data.into_ffi()
}
#[no_mangle]
pub extern "C" fn Servo_StyleSet_Drop(data: RawServoStyleSetOwned) -> () {
let _ = data.into_box::<PerDocumentStyleData>();
}
pub struct GeckoDeclarationBlock {
pub declarations: Option<Arc<PropertyDeclarationBlock>>,
// XXX The following two fields are made atomic to work around the
// ownership system so that they can be changed inside a shared
// instance. It wouldn't provide safety as Rust usually promises,
// but it is fine as long as we only access them in a single thread.
// If we need to access them in different threads, we would need
// to redesign how it works with MiscContainer on the Gecko side.
pub cache: AtomicPtr<nsHTMLCSSStyleSheet>,
pub immutable: AtomicBool,
}
unsafe impl HasFFI for GeckoDeclarationBlock {
type FFIType = ServoDeclarationBlock;
}
unsafe impl HasArcFFI for GeckoDeclarationBlock {}
#[no_mangle]
pub extern "C" fn Servo_ParseStyleAttribute(bytes: *const u8, length: u32,
cache: *mut nsHTMLCSSStyleSheet)
-> ServoDeclarationBlockStrong {
let value = unsafe { from_utf8_unchecked(slice::from_raw_parts(bytes, length as usize)) };
Arc::new(GeckoDeclarationBlock {
declarations: GeckoElement::parse_style_attribute(value).map(Arc::new),
cache: AtomicPtr::new(cache),
immutable: AtomicBool::new(false),
}).into_strong()
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_AddRef(declarations: ServoDeclarationBlockBorrowed) {
unsafe { GeckoDeclarationBlock::addref(declarations) };
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_Release(declarations: ServoDeclarationBlockBorrowed) {
unsafe { GeckoDeclarationBlock::release(declarations) };
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_GetCache(declarations: ServoDeclarationBlockBorrowed)
-> *mut nsHTMLCSSStyleSheet {
GeckoDeclarationBlock::as_arc(&declarations).cache.load(Ordering::Relaxed)
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_SetImmutable(declarations: ServoDeclarationBlockBorrowed) {
GeckoDeclarationBlock::as_arc(&declarations).immutable.store(true, Ordering::Relaxed)
}
#[no_mangle]
pub extern "C" fn Servo_DeclarationBlock_ClearCachePointer(declarations: ServoDeclarationBlockBorrowed) {
GeckoDeclarationBlock::as_arc(&declarations).cache.store(ptr::null_mut(), Ordering::Relaxed)
}
#[no_mangle]
pub extern "C" fn Servo_CSSSupports(property: *const u8, property_length: u32,
value: *const u8, value_length: u32) -> bool {
let property = unsafe { from_utf8_unchecked(slice::from_raw_parts(property, property_length as usize)) };
let value = unsafe { from_utf8_unchecked(slice::from_raw_parts(value, value_length as usize)) };
let base_url = &*DUMMY_BASE_URL;
let extra_data = ParserContextExtraData::default();
match parse_one_declaration(&property, &value, &base_url, Box::new(StdoutErrorReporter), extra_data) {
Ok(decls) => !decls.is_empty(),
Err(()) => false,
}
}
#[no_mangle]
pub extern "C" fn Servo_ComputeRestyleHint(element: RawGeckoElementBorrowed,
                                           snapshot: *mut ServoElementSnapshot,
                                           raw_data: RawServoStyleSetBorrowed) -> nsRestyleHint {
    let per_doc_data = PerDocumentStyleData::from_ffi(raw_data);
    let snapshot = unsafe { GeckoElementSnapshot::from_raw(snapshot) };
    let element = GeckoElement(element);
    // NB: This involves an FFI call, we can get rid of it easily if needed.
    let current_state = element.get_state();
    let hint = per_doc_data.stylist
                           .compute_restyle_hint(&element, &snapshot, current_state);
    // NB: Binary representations match.
    unsafe { transmute(hint.bits() as u32) }
}
// lib.rs
//! Provides an interface for reading input from a *Nintendo GameCube Controller Adapter for Wii U*
//! USB device.
//!
//! Third party clones such as the 4-port Mayflash adapter in "Wii U mode" are also supported.
//!
//! This library depends on `libusb`, which is available as a dynamic library on many platforms
//! including Linux, Windows, and Mac OS X.
//!
//! Currently, rumble commands are **unimplemented**.
//!
//! # Usage
//!
//! ```no_run
//! extern crate gcnctrlusb;
//!
//! fn main() {
//! // Panics if `libusb` is not found or otherwise fails.
//! let mut scanner = gcnctrlusb::Scanner::new().unwrap();
//! // Panics if a valid device was not found.
//! let mut adapter = scanner.find_adapter().unwrap().unwrap();
//! // Panics if the USB driver fails to open a connection to the device.
//! let mut listener = adapter.listen().unwrap();
//!
//! while let Ok(controllers) = listener.read() {
//! println!("Controller port 1: {:?}", controllers[0]);
//! }
//! }
//! ```
extern crate libusb;
use libusb::{Context, Device, DeviceHandle};
use std::error::Error as StdError;
use std::fmt::Error as FmtError;
use std::fmt::{Display, Formatter};
use std::time::Duration;
const VENDOR_ID: u16 = 0x057e;
const PRODUCT_ID: u16 = 0x0337;
/// Searches for GameCube controller adapter USB devices.
pub struct Scanner {
context: Context,
}
impl Scanner {
/// Initializes USB driver connectivity and returns a `Scanner` instance.
///
/// An error is returned if `libusb` is not loaded or driver initialization otherwise fails.
pub fn new() -> Result<Scanner, Error> {
Ok(Scanner { context: try!(Context::new()) })
}
/// Returns the first adapter found, or `None` if no adapter was found.
pub fn find_adapter<'a>(&'a mut self) -> Result<Option<Adapter<'a>>, Error> {
for mut device in try!(self.context.devices()).iter() {
let desc = try!(device.device_descriptor());
if desc.vendor_id() == VENDOR_ID && desc.product_id() == PRODUCT_ID {
return Ok(Some(Adapter { device: device }));
}
}
Ok(None)
}
}
/// A wrapper around the unopened USB device.
pub struct Adapter<'a> {
device: Device<'a>,
}
impl<'a> Adapter<'a> {
/// Opens the USB device and initializes the hardware for reading controller data.
///
/// If the device is inaccessible or unrecognizable, an error is returned. For example, the
/// device will be inaccessible if a previous `Listener` for this adapter is still alive.
pub fn listen(&mut self) -> Result<Listener<'a>, Error> {
let mut handle = try!(self.device.open());
let config = try!(self.device.config_descriptor(0));
let mut interface_descriptor: Option<_> = None;
let mut endpoint_in = None;
let mut endpoint_out = None;
for interface in config.interfaces() {
interface_descriptor = None;
endpoint_in = None;
endpoint_out = None;
for desc in interface.descriptors() {
for endpoint in desc.endpoint_descriptors() {
match endpoint.direction() {
libusb::Direction::In => endpoint_in = Some(endpoint.address()),
libusb::Direction::Out => endpoint_out = Some(endpoint.address()),
}
}
interface_descriptor = Some(desc);
}
}
if interface_descriptor.is_none() || endpoint_in.is_none() || endpoint_out.is_none() {
return Err(Error::UnrecognizedProtocol);
}
let interface_descriptor = interface_descriptor.unwrap();
let interface_number = interface_descriptor.interface_number();
let has_kernel_driver = match handle.kernel_driver_active(interface_number) {
Ok(true) => {
try!(handle.detach_kernel_driver(interface_number));
true
},
_ => false,
};
try!(handle.set_active_configuration(config.number()));
try!(handle.claim_interface(interface_number));
let setting = interface_descriptor.setting_number();
try!(handle.set_alternate_setting(interface_number, setting));
// Tell the adapter to start sending packets.
let timeout = Duration::from_secs(1);
try!(handle.write_interrupt(endpoint_out.unwrap(), &[0x13], timeout));
Ok(Listener {
handle: handle,
buffer: [0; 37],
has_kernel_driver: has_kernel_driver,
interface: interface_number,
endpoint_in: endpoint_in.unwrap(),
})
}
}
/// An interface that reads packets of controller data on each iteration.
///
/// This interface owns an opened handle to the USB device that is closed once the `Listener`
/// instance is dropped.
pub struct Listener<'a> {
handle: DeviceHandle<'a>,
buffer: [u8; 37],
has_kernel_driver: bool,
interface: u8,
endpoint_in: u8,
}
impl<'a> Listener<'a> {
/// Reads a data packet and returns the states for each of the four possibly connected
/// controllers.
///
/// If reading a single packet takes over 1 second, a timeout error will occur. In testing,
/// these packets were available over 100 times per second.
///
/// Reasons an error may occur include:
///
/// * The USB device becomes disconnected
/// * The USB driver throws an error, fatal or not
/// * A USB message was successfully read, but it was not the right size
///
/// It is wise to treat all errors returned as fatal, and to reestablish the adapter connection
/// through `Scanner::find_adapter`.
pub fn read(&mut self) -> Result<[Option<Controller>; 4], Error> {
let timeout = Duration::from_secs(1);
match self.handle.read_interrupt(self.endpoint_in, &mut self.buffer, timeout) {
Ok(read) if read == 37 => Ok(Controller::parse_packet(&self.buffer)),
Ok(_) => Err(Error::InvalidPacket),
Err(err) => Err(Error::Usb(err)),
}
}
}
impl<'a> Drop for Listener<'a> {
fn drop(&mut self) {
if self.has_kernel_driver {
let _ = self.handle.attach_kernel_driver(self.interface);
}
}
}
/// The state of a GameCube controller at a given moment in time.
///
/// Note that the hardware will likely never report either extreme of the spectrum for any of the
/// analog inputs. For example, all `u8` fields may report only within the range of `30` to `225`.
/// Also, the hardware will likely never report a perfect `127` for the resting position of any of
/// the joystick axes. Keep in mind that this library does not do any analog dead zone correction.
#[derive(Clone, Copy, Debug)]
pub struct Controller {
/// The classification of this controller.
pub kind: ControllerKind,
/// "A" button status.
pub a: bool,
/// "B" button status.
pub b: bool,
/// "X" button status.
pub x: bool,
/// "Y" button status.
pub y: bool,
/// Directional pad up button status.
pub up: bool,
/// Directional pad down button status.
pub down: bool,
/// Directional pad left button status.
pub left: bool,
/// Directional pad right button status.
pub right: bool,
/// Digital "L" button (full depression) status.
pub l: bool,
/// Digital "R" button (full depression) status.
pub r: bool,
/// The level of depression of the analog "L" button, `0` being completely up, `255` being
/// completely pressed in.
pub l_analog: u8,
/// The level of depression of the analog "R" button, `0` being completely up, `255` being
/// completely pressed in.
pub r_analog: u8,
/// "Z" button status.
pub z: bool,
/// Start button status.
pub start: bool,
/// The x-axis position of the primary analog joystick, `0` being completely left, `255` being
/// completely right.
pub stick_x: u8,
/// The y-axis position of the primary analog joystick, `0` being completely down, `255` being
/// completely up.
pub stick_y: u8,
/// The x-axis position of the secondary ("C") analog joystick, `0` being completely left,
/// `255` being completely right.
pub c_stick_x: u8,
/// The y-axis position of the secondary ("C") analog joystick, `0` being completely down,
/// `255` being completely up.
pub c_stick_y: u8,
}
impl Controller {
// # Panics
//
// Panics if `data` is not at least 9 bytes.
fn parse(data: &[u8]) -> Option<Controller> {
let kind = match data[0] >> 4 {
0 => return None,
1 => ControllerKind::Wired,
2 => ControllerKind::Wireless,
_ => ControllerKind::Unknown,
};
Some(Controller {
kind: kind,
a: data[1] & (1 << 0) != 0,
b: data[1] & (1 << 1) != 0,
x: data[1] & (1 << 2) != 0,
y: data[1] & (1 << 3) != 0,
left: data[1] & (1 << 4) != 0,
right: data[1] & (1 << 5) != 0,
down: data[1] & (1 << 6) != 0,
up: data[1] & (1 << 7) != 0,
start: data[2] & (1 << 0) != 0,
z: data[2] & (1 << 1) != 0,
r: data[2] & (1 << 2) != 0,
l: data[2] & (1 << 3) != 0,
stick_x: data[3],
stick_y: data[4],
c_stick_x: data[5],
c_stick_y: data[6],
l_analog: data[7],
r_analog: data[8],
})
}
// # Panics
//
// Panics if `data` is not at least 37 bytes.
fn parse_packet(data: &[u8]) -> [Option<Controller>; 4] {
[
Controller::parse(&data[1..10]),
Controller::parse(&data[10..19]),
Controller::parse(&data[19..28]),
Controller::parse(&data[28..37])
]
}
}
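The packet layout that `parse_packet` assumes — one header byte followed by four 9-byte per-port reports — can be sketched standalone. `split_ports` below is an illustrative helper, not part of this crate:

```rust
// Illustrative sketch (not part of this crate): split a 37-byte adapter
// report into the four 9-byte per-port slices that `parse_packet` uses.
pub fn split_ports(packet: &[u8; 37]) -> [&[u8]; 4] {
    [&packet[1..10], &packet[10..19], &packet[19..28], &packet[28..37]]
}
```

Each slice starts with the status byte whose high nibble encodes the controller kind.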
/// The classification of a GameCube controller.
#[derive(Clone, Copy, Debug)]
pub enum ControllerKind {
/// The controller is wired and likely supports rumble.
Wired,
/// The controller is wireless and likely does not support rumble.
Wireless,
/// The controller is of an unknown type.
Unknown,
}
/// An error that occurs during usage of this library.
#[derive(Debug)]
pub enum Error {
/// A USB driver error that can occur at any time while utilizing this library.
Usb(libusb::Error),
/// A seemingly valid adapter was found, but its communication protocol could not be resolved.
UnrecognizedProtocol,
/// An invalid message was read from the adapter, likely due to a device or driver failure.
InvalidPacket,
}
impl StdError for Error {
fn description(&self) -> &str {
match *self {
Error::Usb(ref err) => err.description(),
Error::UnrecognizedProtocol => "USB adapter protocol unrecognized",
Error::InvalidPacket => "Invalid data packet received",
}
}
fn cause(&self) -> Option<&StdError> {
match *self {
Error::Usb(ref err) => err.cause(),
_ => None,
}
}
}
impl Display for Error {
fn fmt(&self, f: &mut Formatter) -> Result<(), FmtError> {
match *self {
Error::Usb(ref err) => Display::fmt(err, f),
_ => self.description().fmt(f),
}
}
}
impl From<libusb::Error> for Error {
fn from(err: libusb::Error) -> Error {
Error::Usb(err)
}
}
// file: lib.rs
//! Provides an interface for reading input from a *Nintendo GameCube Controller Adapter for Wii U*
//! USB device.
//!
//! Third party clones such as the 4-port Mayflash adapter in "Wii U mode" are also supported.
//!
//! This library depends on `libusb`, which is available as a dynamic library on many platforms
//! including Linux, Windows, and Mac OS X.
//!
//! Currently, rumble commands are **unimplemented**.
//!
//! # Usage
//!
//! ```no_run
//! extern crate gcnctrlusb;
//!
//! fn main() {
//! // Panics if `libusb` is not found or otherwise fails.
//! let mut scanner = gcnctrlusb::Scanner::new().unwrap();
//! // Panics if a valid device was not found.
//! let mut adapter = scanner.find_adapter().unwrap().unwrap();
//! // Panics if the USB driver fails to open a connection to the device.
//! let mut listener = adapter.listen().unwrap();
//!
//! while let Ok(controllers) = listener.read() {
//! println!("Controller port 1: {:?}", controllers[0]);
//! }
//! }
//! ```
extern crate libusb;
use libusb::{Context, Device, DeviceHandle};
use std::error::Error as StdError;
use std::fmt::Error as FmtError;
use std::fmt::{Display, Formatter};
use std::time::Duration;
const VENDOR_ID: u16 = 0x057e;
const PRODUCT_ID: u16 = 0x0337;
/// Searches for GameCube controller adapter USB devices.
pub struct Scanner {
context: Context,
}
impl Scanner {
/// Initializes USB driver connectivity and returns a `Scanner` instance.
///
/// An error is returned if `libusb` is not loaded or driver initialization otherwise fails.
pub fn new() -> Result<Scanner, Error> {
Ok(Scanner { context: try!(Context::new()) })
}
/// Returns the first adapter found, or `None` if no adapter was found.
pub fn find_adapter<'a>(&'a mut self) -> Result<Option<Adapter<'a>>, Error> {
for mut device in try!(self.context.devices()).iter() {
let desc = try!(device.device_descriptor());
if desc.vendor_id() == VENDOR_ID && desc.product_id() == PRODUCT_ID {
return Ok(Some(Adapter { device: device }));
}
}
Ok(None)
}
}
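The matching rule `find_adapter` applies to each device descriptor is a plain equality test against the two constants above. A standalone sketch (the helper name is illustrative, not part of this crate):

```rust
// Illustrative sketch of the descriptor check in `find_adapter`: a device
// is the adapter only if both IDs match the constants.
const VENDOR_ID: u16 = 0x057e;
const PRODUCT_ID: u16 = 0x0337;

pub fn is_adapter(vendor: u16, product: u16) -> bool {
    vendor == VENDOR_ID && product == PRODUCT_ID
}
```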
/// A wrapper around the unopened USB device.
pub struct Adapter<'a> {
device: Device<'a>,
}
impl<'a> Adapter<'a> {
/// Opens the USB device and initializes the hardware for reading controller data.
///
/// If the device is inaccessible or unrecognizable, an error is returned. For example, the
/// device will be inaccessible if a previous `Listener` for this adapter is still alive.
pub fn listen(&mut self) -> Result<Listener<'a>, Error> {
let mut handle = try!(self.device.open());
let config = try!(self.device.config_descriptor(0));
let mut interface_descriptor: Option<_> = None;
let mut endpoint_in = None;
let mut endpoint_out = None;
for interface in config.interfaces() {
interface_descriptor = None;
endpoint_in = None;
endpoint_out = None;
for desc in interface.descriptors() {
for endpoint in desc.endpoint_descriptors() {
match endpoint.direction() {
libusb::Direction::In => endpoint_in = Some(endpoint.address()),
libusb::Direction::Out => endpoint_out = Some(endpoint.address()),
}
}
interface_descriptor = Some(desc);
}
}
if interface_descriptor.is_none() || endpoint_in.is_none() || endpoint_out.is_none() {
return Err(Error::UnrecognizedProtocol);
}
let interface_descriptor = interface_descriptor.unwrap();
let interface_number = interface_descriptor.interface_number();
let has_kernel_driver = match handle.kernel_driver_active(interface_number) {
Ok(true) => {
try!(handle.detach_kernel_driver(interface_number));
true
},
_ => false,
};
try!(handle.set_active_configuration(config.number()));
try!(handle.claim_interface(interface_number));
let setting = interface_descriptor.setting_number();
try!(handle.set_alternate_setting(interface_number, setting));
// Tell the adapter to start sending packets.
let timeout = Duration::from_secs(1);
try!(handle.write_interrupt(endpoint_out.unwrap(), &[0x13], timeout));
Ok(Listener {
handle: handle,
buffer: [0; 37],
has_kernel_driver: has_kernel_driver,
interface: interface_number,
endpoint_in: endpoint_in.unwrap(),
})
}
}
/// An interface that reads packets of controller data on each iteration.
///
/// This interface owns an opened handle to the USB device that is closed once the `Listener`
/// instance is dropped.
pub struct Listener<'a> {
handle: DeviceHandle<'a>,
buffer: [u8; 37],
has_kernel_driver: bool,
interface: u8,
endpoint_in: u8,
}
impl<'a> Listener<'a> {
/// Reads a data packet and returns the states for each of the four possibly connected
/// controllers.
///
/// If reading a single packet takes over 1 second, a timeout error will occur. In testing,
/// these packets are available more than 100 times per second.
///
/// Reasons an error may occur include:
///
/// * The USB device becomes disconnected
/// * The USB driver throws an error, fatal or not
/// * A USB message was successfully read, but it was not the right size
///
/// It is wise to treat all errors returned as fatal, and to reestablish the adapter connection
/// through `Scanner::find_adapter`.
pub fn read(&mut self) -> Result<[Option<Controller>; 4], Error> {
let timeout = Duration::from_secs(1);
match self.handle.read_interrupt(self.endpoint_in, &mut self.buffer, timeout) {
Ok(read) if read == 37 => Ok(Controller::parse_packet(&self.buffer)),
Ok(_) => Err(Error::InvalidPacket),
Err(err) => Err(Error::Usb(err)),
}
}
}
impl<'a> Drop for Listener<'a> {
fn drop(&mut self) {
if self.has_kernel_driver {
let _ = self.handle.attach_kernel_driver(self.interface);
}
}
}
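The `Drop` impl above is the RAII pattern in miniature: whatever `Adapter::listen` detached is reattached when the `Listener` goes away. A minimal standalone sketch of the same shape (all names here are illustrative, not part of this crate):

```rust
use std::cell::Cell;

// Illustrative stand-in for the kernel-driver state the real code restores.
pub struct Guard<'a> {
    reattached: &'a Cell<bool>,
}

impl<'a> Drop for Guard<'a> {
    fn drop(&mut self) {
        // Mirrors `attach_kernel_driver` in `Listener::drop`.
        self.reattached.set(true);
    }
}

pub fn use_device(state: &Cell<bool>) {
    let _guard = Guard { reattached: state };
    // ... read packets while the guard is alive ...
} // `_guard` dropped here; the driver state is restored
```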
/// The state of a GameCube controller at a given moment in time.
///
/// Note that the hardware will likely never report either extreme of the spectrum for any of the
/// analog inputs. For example, all `u8` fields may report only within the range of `30` to `225`.
/// Also, the hardware will likely never report a perfect `127` for the resting position of any of
/// the joystick axes. Keep in mind that this library does not do any analog dead-zone correction.
#[derive(Clone, Copy, Debug)]
pub struct Controller {
/// The classification of this controller.
pub kind: ControllerKind,
/// "A" button status.
pub a: bool,
/// "B" button status.
pub b: bool,
/// "X" button status.
pub x: bool,
/// "Y" button status.
pub y: bool,
/// Directional pad up button status.
pub up: bool,
/// Directional pad down button status.
pub down: bool,
/// Directional pad left button status.
pub left: bool,
/// Directional pad right button status.
pub right: bool,
/// Digital "L" button (full depression) status.
pub l: bool,
/// Digital "R" button (full depression) status.
pub r: bool,
/// The level of depression of the analog "L" button, `0` being completely up, `255` being
/// completely pressed in.
pub l_analog: u8,
/// The level of depression of the analog "R" button, `0` being completely up, `255` being
/// completely pressed in.
pub r_analog: u8,
/// "Z" button status.
pub z: bool,
/// Start button status.
pub start: bool,
/// The x-axis position of the primary analog joystick, `0` being completely left, `255` being
/// completely right.
pub stick_x: u8,
/// The y-axis position of the primary analog joystick, `0` being completely down, `255` being
/// completely up.
pub stick_y: u8,
/// The x-axis position of the secondary ("C") analog joystick, `0` being completely left,
/// `255` being completely right.
pub c_stick_x: u8,
/// The y-axis position of the secondary ("C") analog joystick, `0` being completely down,
/// `255` being completely up.
pub c_stick_y: u8,
}
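Because the struct exposes raw bytes and, per the note above, no dead-zone correction is applied, callers typically normalize axes themselves. One possible sketch, assuming a `127` resting point — the name and thresholds are illustrative, not part of this crate:

```rust
// Illustrative helper (not part of this crate): map a raw axis byte to
// [-1.0, 1.0] around an assumed 127 center, zeroing a small dead zone so
// a resting stick reads as neutral.
pub fn normalize_axis(raw: u8, dead_zone: u8) -> f32 {
    let centered = raw as i16 - 127; // -127..=128
    if centered.abs() <= dead_zone as i16 {
        return 0.0;
    }
    (centered as f32 / 128.0).clamp(-1.0, 1.0)
}
```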
impl Controller {
// # Panics
//
// Panics if `data` is not at least 9 bytes.
fn parse(data: &[u8]) -> Option<Controller> {
let kind = match data[0] >> 4 {
0 => return None,
1 => ControllerKind::Wired,
2 => ControllerKind::Wireless,
_ => ControllerKind::Unknown,
};
Some(Controller {
kind: kind,
a: data[1] & (1 << 0) != 0,
b: data[1] & (1 << 1) != 0,
x: data[1] & (1 << 2) != 0,
y: data[1] & (1 << 3) != 0,
left: data[1] & (1 << 4) != 0,
right: data[1] & (1 << 5) != 0,
down: data[1] & (1 << 6) != 0,
up: data[1] & (1 << 7) != 0,
start: data[2] & (1 << 0) != 0,
z: data[2] & (1 << 1) != 0,
r: data[2] & (1 << 2) != 0,
l: data[2] & (1 << 3) != 0,
stick_x: data[3],
stick_y: data[4],
c_stick_x: data[5],
c_stick_y: data[6],
l_analog: data[7],
r_analog: data[8],
})
}
// # Panics
//
// Panics if `data` is not at least 37 bytes.
fn parse_packet(data: &[u8]) -> [Option<Controller>; 4] {
[
Controller::parse(&data[1..10]),
Controller::parse(&data[10..19]),
Controller::parse(&data[19..28]),
Controller::parse(&data[28..37])
]
}
}
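The bit tests in `parse` all follow one pattern: button *n* of a report byte is held exactly when `byte & (1 << n) != 0`. A standalone sketch of that decoding (the function name is illustrative, not part of this crate):

```rust
// Illustrative sketch of the bit tests `parse` performs on each button
// byte: bit n set means button n is held.
pub fn button_states(byte: u8) -> [bool; 8] {
    let mut held = [false; 8];
    for bit in 0..8 {
        held[bit] = byte & (1 << bit) != 0;
    }
    held
}
```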
/// The classification of a GameCube controller.
#[derive(Clone, Copy, Debug)]
pub enum ControllerKind {
/// The controller is wired and likely supports rumble.
Wired,
/// The controller is wireless and likely does not support rumble.
Wireless,
/// The controller is of an unknown type.
Unknown,
}
/// An error that occurs during usage of this library.
#[derive(Debug)]
pub enum Error {
/// A USB driver error that can occur at any time while utilizing this library.
Usb(libusb::Error),
/// A seemingly valid adapter was found, but its communication protocol could not be resolved.
UnrecognizedProtocol,
/// An invalid message was read from the adapter, likely due to a device or driver failure.
InvalidPacket,
}
impl StdError for Error {
fn description(&self) -> &str {
match *self {
Error::Usb(ref err) => err.description(),
Error::UnrecognizedProtocol => "USB adapter protocol unrecognized",
Error::InvalidPacket => "Invalid data packet received",
}
}
fn cause(&self) -> Option<&StdError> {
match *self {
Error::Usb(ref err) => err.cause(),
_ => None,
}
}
}
impl Display for Error {
fn fmt(&self, f: &mut Formatter) -> Result<(), FmtError> {
match *self {
Error::Usb(ref err) => Display::fmt(err, f),
_ => self.description().fmt(f),
}
}
}
impl From<libusb::Error> for Error {
fn from(err: libusb::Error) -> Error {
Error::Usb(err)
}
}
// file: extern-generic.rs
// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// ignore-tidy-linelength
// We specify -Z incremental here because we want to test the partitioning for
// incremental compilation
// compile-flags:-Zprint-mono-items=eager -Zincremental=tmp/partitioning-tests/extern-generic -Zshare-generics=y
#![allow(dead_code)]
#![crate_type="lib"]
// aux-build:cgu_generic_function.rs
extern crate cgu_generic_function;
//~ MONO_ITEM fn extern_generic::user[0] @@ extern_generic[Internal]
fn user() {
let _ = cgu_generic_function::foo("abc");
}
mod mod1 {
use cgu_generic_function;
//~ MONO_ITEM fn extern_generic::mod1[0]::user[0] @@ extern_generic-mod1[Internal]
fn user() {
let _ = cgu_generic_function::foo("abc");
}
mod mod1 {
use cgu_generic_function;
//~ MONO_ITEM fn extern_generic::mod1[0]::mod1[0]::user[0] @@ extern_generic-mod1-mod1[Internal]
fn user() {
let _ = cgu_generic_function::foo("abc");
}
}
}
mod mod2 {
use cgu_generic_function;
//~ MONO_ITEM fn extern_generic::mod2[0]::user[0] @@ extern_generic-mod2[Internal]
fn user() {
let _ = cgu_generic_function::foo("abc");
}
}
mod mod3 {
//~ MONO_ITEM fn extern_generic::mod3[0]::non_user[0] @@ extern_generic-mod3[Internal]
fn non_user() {}
}
// Make sure the two generic functions from the extern crate get instantiated
// once for the current crate
//~ MONO_ITEM fn cgu_generic_function::foo[0]<&str> @@ cgu_generic_function-in-extern_generic.volatile[External]
//~ MONO_ITEM fn cgu_generic_function::bar[0]<&str> @@ cgu_generic_function-in-extern_generic.volatile[External]
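The MONO_ITEM annotations above express that each generic function from the extern crate is instantiated once per concrete type this crate uses it with. A hedged standalone illustration — `describe` merely stands in for `cgu_generic_function::foo` and is not part of the test:

```rust
use std::fmt::Debug;

// Illustrative stand-in for cgu_generic_function::foo: each distinct `T`
// this is called with becomes its own mono item at codegen time.
pub fn describe<T: Debug>(value: T) -> String {
    format!("{:?}", value)
}
```

Calling it with `&str` and `i32` would produce two separate instantiations, which is what the partitioning annotations track.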
// file: htmlstyleelement.rs
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use cssparser::Parser as CssParser;
use dom::bindings::cell::DOMRefCell;
use dom::bindings::codegen::Bindings::HTMLStyleElementBinding;
use dom::bindings::codegen::Bindings::NodeBinding::NodeMethods;
use dom::bindings::inheritance::Castable;
use dom::bindings::js::Root;
use dom::bindings::str::DOMString;
use dom::document::Document;
use dom::element::Element;
use dom::htmlelement::HTMLElement;
use dom::node::{ChildrenMutation, Node, document_from_node, window_from_node};
use dom::virtualmethods::VirtualMethods;
use script_layout_interface::message::Msg;
use std::sync::Arc;
use string_cache::Atom;
use style::media_queries::parse_media_query_list;
use style::parser::ParserContextExtraData;
use style::stylesheets::{Stylesheet, Origin};
#[dom_struct]
pub struct HTMLStyleElement {
htmlelement: HTMLElement,
stylesheet: DOMRefCell<Option<Arc<Stylesheet>>>,
}
impl HTMLStyleElement {
fn new_inherited(local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> HTMLStyleElement {
HTMLStyleElement {
htmlelement: HTMLElement::new_inherited(local_name, prefix, document),
stylesheet: DOMRefCell::new(None),
}
}
#[allow(unrooted_must_root)]
pub fn new(local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> Root<HTMLStyleElement> {
Node::reflect_node(box HTMLStyleElement::new_inherited(local_name, prefix, document),
document,
HTMLStyleElementBinding::Wrap)
}
pub fn parse_own_css(&self) {
let node = self.upcast::<Node>();
let element = self.upcast::<Element>();
assert!(node.is_in_doc());
let win = window_from_node(node);
let url = win.get_url();
let mq_attribute = element.get_attribute(&ns!(), &atom!("media"));
let mq_str = match mq_attribute {
Some(a) => String::from(&**a.value()),
None => String::new(),
};
let data = node.GetTextContent().expect("Element.textContent must be a string");
let mut sheet = Stylesheet::from_str(&data, url, Origin::Author, win.css_error_reporter(),
ParserContextExtraData::default());
let mut css_parser = CssParser::new(&mq_str);
let media = parse_media_query_list(&mut css_parser);
sheet.set_media(Some(media));
let sheet = Arc::new(sheet);
win.layout_chan().send(Msg::AddStylesheet(sheet.clone())).unwrap();
*self.stylesheet.borrow_mut() = Some(sheet);
let doc = document_from_node(self);
doc.r().invalidate_stylesheets();
}
pub fn get_stylesheet(&self) -> Option<Arc<Stylesheet>> {
self.stylesheet.borrow().clone()
}
}
impl VirtualMethods for HTMLStyleElement {
fn super_type(&self) -> Option<&VirtualMethods> {
Some(self.upcast::<HTMLElement>() as &VirtualMethods)
}
fn children_changed(&self, mutation: &ChildrenMutation) {
if let Some(ref s) = self.super_type() {
s.children_changed(mutation);
}
if self.upcast::<Node>().is_in_doc() {
self.parse_own_css();
}
}
fn bind_to_tree(&self, tree_in_doc: bool) {
if let Some(ref s) = self.super_type() {
s.bind_to_tree(tree_in_doc);
}
if tree_in_doc {
self.parse_own_css();
}
}
}
| {
Node::reflect_node(box HTMLStyleElement::new_inherited(local_name, prefix, document),
document,
HTMLStyleElementBinding::Wrap)
} | identifier_body |
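The rows in this dump follow the column layout declared in the file header — `file_name | prefix | suffix | middle | fim_type` — so the `middle` span is serialized *after* the `suffix`, even though it belongs between prefix and suffix in the original file. A minimal sketch of reassembling a row (the struct and field names here are hypothetical, chosen to mirror the header schema):

```rust
/// Hypothetical record matching the schema in the header:
/// file_name | prefix | suffix | middle | fim_type.
#[allow(dead_code)]
struct FimRow {
    file_name: String,
    prefix: String,
    suffix: String,
    middle: String,
    fim_type: String,
}

impl FimRow {
    /// The original file is prefix + middle + suffix:
    /// the middle fills the hole cut between prefix and suffix,
    /// regardless of the (prefix, suffix, middle) serialization order.
    fn reconstruct(&self) -> String {
        format!("{}{}{}", self.prefix, self.middle, self.suffix)
    }
}

fn main() {
    let row = FimRow {
        file_name: "example.rs".into(),
        prefix: "fn main() ".into(),
        suffix: "".into(),
        middle: "{ println!(\"hi\"); }".into(),
        fim_type: "identifier_body".into(),
    };
    assert_eq!(row.reconstruct(), "fn main() { println!(\"hi\"); }");
    println!("ok");
}
```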
htmlstyleelement.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use cssparser::Parser as CssParser;
use dom::bindings::cell::DOMRefCell;
use dom::bindings::codegen::Bindings::HTMLStyleElementBinding;
use dom::bindings::codegen::Bindings::NodeBinding::NodeMethods;
use dom::bindings::inheritance::Castable;
use dom::bindings::js::Root;
use dom::bindings::str::DOMString;
use dom::document::Document;
use dom::element::Element;
use dom::htmlelement::HTMLElement;
use dom::node::{ChildrenMutation, Node, document_from_node, window_from_node};
use dom::virtualmethods::VirtualMethods;
use script_layout_interface::message::Msg;
use std::sync::Arc;
use string_cache::Atom;
use style::media_queries::parse_media_query_list;
use style::parser::ParserContextExtraData;
use style::stylesheets::{Stylesheet, Origin};
#[dom_struct]
pub struct HTMLStyleElement {
htmlelement: HTMLElement,
stylesheet: DOMRefCell<Option<Arc<Stylesheet>>>,
}
impl HTMLStyleElement {
fn new_inherited(local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> HTMLStyleElement {
HTMLStyleElement {
htmlelement: HTMLElement::new_inherited(local_name, prefix, document),
stylesheet: DOMRefCell::new(None),
}
}
#[allow(unrooted_must_root)]
pub fn new(local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> Root<HTMLStyleElement> {
Node::reflect_node(box HTMLStyleElement::new_inherited(local_name, prefix, document),
document,
HTMLStyleElementBinding::Wrap)
}
pub fn parse_own_css(&self) {
let node = self.upcast::<Node>();
let element = self.upcast::<Element>();
assert!(node.is_in_doc());
let win = window_from_node(node);
let url = win.get_url();
let mq_attribute = element.get_attribute(&ns!(), &atom!("media"));
let mq_str = match mq_attribute {
Some(a) => String::from(&**a.value()),
None => String::new(),
};
let data = node.GetTextContent().expect("Element.textContent must be a string");
let mut sheet = Stylesheet::from_str(&data, url, Origin::Author, win.css_error_reporter(),
ParserContextExtraData::default());
let mut css_parser = CssParser::new(&mq_str);
let media = parse_media_query_list(&mut css_parser);
sheet.set_media(Some(media));
let sheet = Arc::new(sheet);
win.layout_chan().send(Msg::AddStylesheet(sheet.clone())).unwrap();
*self.stylesheet.borrow_mut() = Some(sheet);
let doc = document_from_node(self);
doc.r().invalidate_stylesheets();
}
pub fn get_stylesheet(&self) -> Option<Arc<Stylesheet>> {
self.stylesheet.borrow().clone()
}
}
impl VirtualMethods for HTMLStyleElement {
fn super_type(&self) -> Option<&VirtualMethods> {
Some(self.upcast::<HTMLElement>() as &VirtualMethods)
}
fn children_changed(&self, mutation: &ChildrenMutation) {
if let Some(ref s) = self.super_type() {
s.children_changed(mutation);
}
if self.upcast::<Node>().is_in_doc() {
self.parse_own_css();
}
}
fn bind_to_tree(&self, tree_in_doc: bool) {
if let Some(ref s) = self.super_type() {
s.bind_to_tree(tree_in_doc);
}
if tree_in_doc |
}
}
| {
self.parse_own_css();
} | conditional_block |
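The `VirtualMethods` impls above all follow the same shape: an override first delegates to `super_type()` (the parent "class" in Servo's DOM hierarchy), then runs its own element-specific logic. A standalone sketch of that delegation pattern, with hypothetical types standing in for Servo's real hierarchy:

```rust
// Each override forwards to its parent first, then appends its own behavior,
// mirroring children_changed/bind_to_tree in the code above.
trait VirtualMethods {
    fn super_type(&self) -> Option<&dyn VirtualMethods>;
    fn children_changed(&self, log: &mut Vec<&'static str>);
}

struct HtmlElement; // stand-in for the HTMLElement "superclass"

impl VirtualMethods for HtmlElement {
    fn super_type(&self) -> Option<&dyn VirtualMethods> {
        None // root of this toy hierarchy
    }
    fn children_changed(&self, log: &mut Vec<&'static str>) {
        log.push("HTMLElement");
    }
}

struct StyleElement {
    parent: HtmlElement, // composition models inheritance
}

impl VirtualMethods for StyleElement {
    fn super_type(&self) -> Option<&dyn VirtualMethods> {
        Some(&self.parent)
    }
    fn children_changed(&self, log: &mut Vec<&'static str>) {
        if let Some(s) = self.super_type() {
            s.children_changed(log); // delegate up first, as above
        }
        log.push("HTMLStyleElement"); // then element-specific work (e.g. re-parse CSS)
    }
}

fn main() {
    let el = StyleElement { parent: HtmlElement };
    let mut log = Vec::new();
    el.children_changed(&mut log);
    assert_eq!(log, ["HTMLElement", "HTMLStyleElement"]);
    println!("ok");
}
```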
htmlstyleelement.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use cssparser::Parser as CssParser;
use dom::bindings::cell::DOMRefCell;
use dom::bindings::codegen::Bindings::HTMLStyleElementBinding;
use dom::bindings::codegen::Bindings::NodeBinding::NodeMethods;
use dom::bindings::inheritance::Castable;
use dom::bindings::js::Root;
use dom::bindings::str::DOMString;
use dom::document::Document;
use dom::element::Element;
use dom::htmlelement::HTMLElement;
use dom::node::{ChildrenMutation, Node, document_from_node, window_from_node};
use dom::virtualmethods::VirtualMethods;
use script_layout_interface::message::Msg;
use std::sync::Arc;
use string_cache::Atom;
use style::media_queries::parse_media_query_list;
use style::parser::ParserContextExtraData;
use style::stylesheets::{Stylesheet, Origin};
#[dom_struct]
pub struct HTMLStyleElement {
htmlelement: HTMLElement,
stylesheet: DOMRefCell<Option<Arc<Stylesheet>>>,
}
impl HTMLStyleElement {
fn new_inherited(local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> HTMLStyleElement {
HTMLStyleElement {
htmlelement: HTMLElement::new_inherited(local_name, prefix, document),
stylesheet: DOMRefCell::new(None),
}
}
#[allow(unrooted_must_root)]
pub fn new(local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> Root<HTMLStyleElement> {
Node::reflect_node(box HTMLStyleElement::new_inherited(local_name, prefix, document),
document,
HTMLStyleElementBinding::Wrap)
}
pub fn parse_own_css(&self) {
let node = self.upcast::<Node>();
let element = self.upcast::<Element>();
assert!(node.is_in_doc());
let win = window_from_node(node);
let url = win.get_url();
let mq_attribute = element.get_attribute(&ns!(), &atom!("media"));
let mq_str = match mq_attribute {
Some(a) => String::from(&**a.value()),
None => String::new(),
};
let data = node.GetTextContent().expect("Element.textContent must be a string");
let mut sheet = Stylesheet::from_str(&data, url, Origin::Author, win.css_error_reporter(),
ParserContextExtraData::default());
let mut css_parser = CssParser::new(&mq_str);
let media = parse_media_query_list(&mut css_parser);
sheet.set_media(Some(media));
let sheet = Arc::new(sheet);
win.layout_chan().send(Msg::AddStylesheet(sheet.clone())).unwrap();
*self.stylesheet.borrow_mut() = Some(sheet);
let doc = document_from_node(self);
doc.r().invalidate_stylesheets();
}
pub fn get_stylesheet(&self) -> Option<Arc<Stylesheet>> {
self.stylesheet.borrow().clone()
}
}
impl VirtualMethods for HTMLStyleElement {
fn super_type(&self) -> Option<&VirtualMethods> { | if let Some(ref s) = self.super_type() {
s.children_changed(mutation);
}
if self.upcast::<Node>().is_in_doc() {
self.parse_own_css();
}
}
fn bind_to_tree(&self, tree_in_doc: bool) {
if let Some(ref s) = self.super_type() {
s.bind_to_tree(tree_in_doc);
}
if tree_in_doc {
self.parse_own_css();
}
}
} | Some(self.upcast::<HTMLElement>() as &VirtualMethods)
}
fn children_changed(&self, mutation: &ChildrenMutation) { | random_line_split |
htmlstyleelement.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use cssparser::Parser as CssParser;
use dom::bindings::cell::DOMRefCell;
use dom::bindings::codegen::Bindings::HTMLStyleElementBinding;
use dom::bindings::codegen::Bindings::NodeBinding::NodeMethods;
use dom::bindings::inheritance::Castable;
use dom::bindings::js::Root;
use dom::bindings::str::DOMString;
use dom::document::Document;
use dom::element::Element;
use dom::htmlelement::HTMLElement;
use dom::node::{ChildrenMutation, Node, document_from_node, window_from_node};
use dom::virtualmethods::VirtualMethods;
use script_layout_interface::message::Msg;
use std::sync::Arc;
use string_cache::Atom;
use style::media_queries::parse_media_query_list;
use style::parser::ParserContextExtraData;
use style::stylesheets::{Stylesheet, Origin};
#[dom_struct]
pub struct HTMLStyleElement {
htmlelement: HTMLElement,
stylesheet: DOMRefCell<Option<Arc<Stylesheet>>>,
}
impl HTMLStyleElement {
fn | (local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> HTMLStyleElement {
HTMLStyleElement {
htmlelement: HTMLElement::new_inherited(local_name, prefix, document),
stylesheet: DOMRefCell::new(None),
}
}
#[allow(unrooted_must_root)]
pub fn new(local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> Root<HTMLStyleElement> {
Node::reflect_node(box HTMLStyleElement::new_inherited(local_name, prefix, document),
document,
HTMLStyleElementBinding::Wrap)
}
pub fn parse_own_css(&self) {
let node = self.upcast::<Node>();
let element = self.upcast::<Element>();
assert!(node.is_in_doc());
let win = window_from_node(node);
let url = win.get_url();
let mq_attribute = element.get_attribute(&ns!(), &atom!("media"));
let mq_str = match mq_attribute {
Some(a) => String::from(&**a.value()),
None => String::new(),
};
let data = node.GetTextContent().expect("Element.textContent must be a string");
let mut sheet = Stylesheet::from_str(&data, url, Origin::Author, win.css_error_reporter(),
ParserContextExtraData::default());
let mut css_parser = CssParser::new(&mq_str);
let media = parse_media_query_list(&mut css_parser);
sheet.set_media(Some(media));
let sheet = Arc::new(sheet);
win.layout_chan().send(Msg::AddStylesheet(sheet.clone())).unwrap();
*self.stylesheet.borrow_mut() = Some(sheet);
let doc = document_from_node(self);
doc.r().invalidate_stylesheets();
}
pub fn get_stylesheet(&self) -> Option<Arc<Stylesheet>> {
self.stylesheet.borrow().clone()
}
}
impl VirtualMethods for HTMLStyleElement {
fn super_type(&self) -> Option<&VirtualMethods> {
Some(self.upcast::<HTMLElement>() as &VirtualMethods)
}
fn children_changed(&self, mutation: &ChildrenMutation) {
if let Some(ref s) = self.super_type() {
s.children_changed(mutation);
}
if self.upcast::<Node>().is_in_doc() {
self.parse_own_css();
}
}
fn bind_to_tree(&self, tree_in_doc: bool) {
if let Some(ref s) = self.super_type() {
s.bind_to_tree(tree_in_doc);
}
if tree_in_doc {
self.parse_own_css();
}
}
}
| new_inherited | identifier_name |
constref.rs | // Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// run-pass
const CONST_REF: &[u8; 3] = b"foo";
trait Foo {
const CONST_REF_DEFAULT: &'static [u8; 3] = b"bar";
const CONST_REF: &'static [u8; 3];
}
impl Foo for i32 {
const CONST_REF: &'static [u8; 3] = b"jjj";
}
impl Foo for i64 {
const CONST_REF_DEFAULT: &'static [u8; 3] = b"ggg";
const CONST_REF: &'static [u8; 3] = b"fff";
}
// Check that (associated and free) const references are not mistaken for a
// non-reference pattern (in which case they would be auto-dereferenced, making
// the types mismatched).
fn const_ref() -> bool {
let f = b"foo";
match f {
CONST_REF => true,
_ => false,
}
}
fn associated_const_ref() -> bool {
match (b"bar", b"jjj", b"ggg", b"fff") {
(i32::CONST_REF_DEFAULT, i32::CONST_REF, i64::CONST_REF_DEFAULT, i64::CONST_REF) => true,
_ => false,
}
}
pub fn main() | {
assert!(const_ref());
assert!(associated_const_ref());
} | identifier_body |
|
constref.rs | // Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// run-pass
const CONST_REF: &[u8; 3] = b"foo";
trait Foo {
const CONST_REF_DEFAULT: &'static [u8; 3] = b"bar";
const CONST_REF: &'static [u8; 3];
}
impl Foo for i32 {
const CONST_REF: &'static [u8; 3] = b"jjj";
}
impl Foo for i64 {
const CONST_REF_DEFAULT: &'static [u8; 3] = b"ggg";
const CONST_REF: &'static [u8; 3] = b"fff";
}
// Check that (associated and free) const references are not mistaken for a
// non-reference pattern (in which case they would be auto-dereferenced, making
// the types mismatched).
fn const_ref() -> bool {
let f = b"foo";
match f {
CONST_REF => true,
_ => false,
}
}
fn associated_const_ref() -> bool {
match (b"bar", b"jjj", b"ggg", b"fff") {
(i32::CONST_REF_DEFAULT, i32::CONST_REF, i64::CONST_REF_DEFAULT, i64::CONST_REF) => true,
_ => false,
}
}
pub fn | () {
assert!(const_ref());
assert!(associated_const_ref());
}
| main | identifier_name |
constref.rs | // Copyright 2017 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// run-pass
const CONST_REF: &[u8; 3] = b"foo";
trait Foo {
const CONST_REF_DEFAULT: &'static [u8; 3] = b"bar";
const CONST_REF: &'static [u8; 3];
}
impl Foo for i32 {
const CONST_REF: &'static [u8; 3] = b"jjj";
}
impl Foo for i64 {
const CONST_REF_DEFAULT: &'static [u8; 3] = b"ggg";
const CONST_REF: &'static [u8; 3] = b"fff";
}
// Check that (associated and free) const references are not mistaken for a
// non-reference pattern (in which case they would be auto-dereferenced, making
// the types mismatched).
fn const_ref() -> bool {
let f = b"foo";
match f {
CONST_REF => true,
_ => false,
}
}
fn associated_const_ref() -> bool {
match (b"bar", b"jjj", b"ggg", b"fff") {
(i32::CONST_REF_DEFAULT, i32::CONST_REF, i64::CONST_REF_DEFAULT, i64::CONST_REF) => true,
_ => false,
}
}
| pub fn main() {
assert!(const_ref());
assert!(associated_const_ref());
} | random_line_split |
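The constref test above checks that a constant of *reference* type used as a pattern keeps its reference type rather than being auto-dereferenced into a by-value pattern. A compact illustration of the same property, with hypothetical constant names:

```rust
const BYTES_REF: &[u8; 3] = b"foo"; // reference-typed constant
const WORD: u32 = 7;                // plain (non-reference) constant

fn match_ref(input: &'static [u8; 3]) -> bool {
    // BYTES_REF is matched as &[u8; 3] — the same type as `input` —
    // not auto-dereferenced into a [u8; 3] pattern.
    matches!(input, BYTES_REF)
}

fn match_plain(n: u32) -> bool {
    matches!(n, WORD)
}

fn main() {
    assert!(match_ref(b"foo"));
    assert!(!match_ref(b"bar"));
    assert!(match_plain(7));
    println!("ok");
}
```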
|
query16.rs | use timely::dataflow::*;
// use timely::dataflow::operators::*;
use timely::dataflow::operators::probe::Handle as ProbeHandle;
// use differential_dataflow::AsCollection;
use differential_dataflow::operators::*;
use differential_dataflow::lattice::Lattice;
use ::Collections;
// -- $ID$
// -- TPC-H/TPC-R Parts/Supplier Relationship Query (Q16)
// -- Functional Query Definition
// -- Approved February 1998
// :x
// :o
// select
// p_brand,
// p_type,
// p_size,
// count(distinct ps_suppkey) as supplier_cnt
// from
// partsupp,
// part
// where
// p_partkey = ps_partkey
// and p_brand <> ':1'
// and p_type not like ':2%'
// and p_size in (:3, :4, :5, :6, :7, :8, :9, :10)
// and ps_suppkey not in (
// select
// s_suppkey
// from
// supplier
// where
// s_comment like '%Customer%Complaints%'
// )
// group by
// p_brand,
// p_type,
// p_size
// order by
// supplier_cnt desc,
// p_brand,
// p_type,
// p_size;
// :n -1
fn starts_with(source: &[u8], query: &[u8]) -> bool {
source.len() >= query.len() && &source[..query.len()] == query
}
fn substring2(source: &[u8], query1: &[u8], query2: &[u8]) -> bool {
if let Some(pos) = (0.. (source.len() - query1.len())).position(|i| &source[i..][..query1.len()] == query1) {
(pos.. query2.len()).any(|i| &source[i..][..query2.len()] == query2)
}
else { false }
}
pub fn query<G: Scope>(collections: &mut Collections<G>) -> ProbeHandle<G::Timestamp>
where G::Timestamp: Lattice+Ord {
println!("TODO: query 16 could use a count_u if it joins after to re-collect its attributes");
let suppliers =
collections
.suppliers()
.flat_map(|s|
if substring2(s.comment.as_bytes(), b"Customer", b"Complaints") {
Some(s.supp_key)
}
else { None }
);
let parts = collections
.partsupps()
.map(|ps| (ps.supp_key, ps.part_key))
.antijoin_u(&suppliers)
.map(|(_supp_key, part_key)| part_key);
collections
.parts()
.flat_map(|p|
if !starts_with(&p.brand, b"Brand45") && !starts_with(p.typ.as_bytes(), b"MEDIUM POLISHED") && [49, 14, 23, 45, 19, 3, 36, 9].contains(&p.size) {
Some((p.part_key, (p.brand, p.typ.to_string(), p.size)))
}
else |
)
.semijoin_u(&parts)
.count()
.probe()
} | { None } | conditional_block |
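The `substring2` helper above implements the SQL `LIKE '%Customer%Complaints%'` check, and its index arithmetic is easy to get wrong: the second search uses `query2.len()` as an upper bound and can slice past the end of `source`, and the first range underflows when `query1` is longer than `source`. A corrected sketch of the same `%q1%q2%` matcher (hypothetical function names, not part of the dataset's code):

```rust
/// True iff `source` matches the SQL pattern `%q1%q2%`:
/// `q1` occurs in `source`, and `q2` occurs after that occurrence ends.
fn like_two(source: &[u8], q1: &[u8], q2: &[u8]) -> bool {
    // Naive substring search with safe bounds; returns the first match index.
    fn find(hay: &[u8], needle: &[u8]) -> Option<usize> {
        if needle.len() > hay.len() {
            return None; // avoids the length underflow in the original
        }
        (0..=hay.len() - needle.len()).position(|i| &hay[i..i + needle.len()] == needle)
    }
    match find(source, q1) {
        // Search q2 only in the tail after q1's occurrence.
        Some(pos) => find(&source[pos + q1.len()..], q2).is_some(),
        None => false,
    }
}

fn main() {
    assert!(like_two(b"xx Customer yy Complaints zz", b"Customer", b"Complaints"));
    assert!(!like_two(b"Complaints then Customer", b"Customer", b"Complaints"));
    println!("ok");
}
```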
query16.rs | use timely::dataflow::*;
// use timely::dataflow::operators::*;
use timely::dataflow::operators::probe::Handle as ProbeHandle;
// use differential_dataflow::AsCollection;
use differential_dataflow::operators::*; | use differential_dataflow::lattice::Lattice;
use ::Collections;
// -- $ID$
// -- TPC-H/TPC-R Parts/Supplier Relationship Query (Q16)
// -- Functional Query Definition
// -- Approved February 1998
// :x
// :o
// select
// p_brand,
// p_type,
// p_size,
// count(distinct ps_suppkey) as supplier_cnt
// from
// partsupp,
// part
// where
// p_partkey = ps_partkey
// and p_brand <> ':1'
// and p_type not like ':2%'
// and p_size in (:3, :4, :5, :6, :7, :8, :9, :10)
// and ps_suppkey not in (
// select
// s_suppkey
// from
// supplier
// where
// s_comment like '%Customer%Complaints%'
// )
// group by
// p_brand,
// p_type,
// p_size
// order by
// supplier_cnt desc,
// p_brand,
// p_type,
// p_size;
// :n -1
fn starts_with(source: &[u8], query: &[u8]) -> bool {
source.len() >= query.len() && &source[..query.len()] == query
}
fn substring2(source: &[u8], query1: &[u8], query2: &[u8]) -> bool {
if let Some(pos) = (0.. (source.len() - query1.len())).position(|i| &source[i..][..query1.len()] == query1) {
(pos.. query2.len()).any(|i| &source[i..][..query2.len()] == query2)
}
else { false }
}
pub fn query<G: Scope>(collections: &mut Collections<G>) -> ProbeHandle<G::Timestamp>
where G::Timestamp: Lattice+Ord {
println!("TODO: query 16 could use a count_u if it joins after to re-collect its attributes");
let suppliers =
collections
.suppliers()
.flat_map(|s|
if substring2(s.comment.as_bytes(), b"Customer", b"Complaints") {
Some(s.supp_key)
}
else { None }
);
let parts = collections
.partsupps()
.map(|ps| (ps.supp_key, ps.part_key))
.antijoin_u(&suppliers)
.map(|(_supp_key, part_key)| part_key);
collections
.parts()
.flat_map(|p|
if !starts_with(&p.brand, b"Brand45") && !starts_with(p.typ.as_bytes(), b"MEDIUM POLISHED") && [49, 14, 23, 45, 19, 3, 36, 9].contains(&p.size) {
Some((p.part_key, (p.brand, p.typ.to_string(), p.size)))
}
else { None }
)
.semijoin_u(&parts)
.count()
.probe()
} | random_line_split |
|
query16.rs | use timely::dataflow::*;
// use timely::dataflow::operators::*;
use timely::dataflow::operators::probe::Handle as ProbeHandle;
// use differential_dataflow::AsCollection;
use differential_dataflow::operators::*;
use differential_dataflow::lattice::Lattice;
use ::Collections;
// -- $ID$
// -- TPC-H/TPC-R Parts/Supplier Relationship Query (Q16)
// -- Functional Query Definition
// -- Approved February 1998
// :x
// :o
// select
// p_brand,
// p_type,
// p_size,
// count(distinct ps_suppkey) as supplier_cnt
// from
// partsupp,
// part
// where
// p_partkey = ps_partkey
// and p_brand <> ':1'
// and p_type not like ':2%'
// and p_size in (:3, :4, :5, :6, :7, :8, :9, :10)
// and ps_suppkey not in (
// select
// s_suppkey
// from
// supplier
// where
// s_comment like '%Customer%Complaints%'
// )
// group by
// p_brand,
// p_type,
// p_size
// order by
// supplier_cnt desc,
// p_brand,
// p_type,
// p_size;
// :n -1
fn starts_with(source: &[u8], query: &[u8]) -> bool {
source.len() >= query.len() && &source[..query.len()] == query
}
fn substring2(source: &[u8], query1: &[u8], query2: &[u8]) -> bool {
if let Some(pos) = (0.. (source.len() - query1.len())).position(|i| &source[i..][..query1.len()] == query1) {
(pos.. query2.len()).any(|i| &source[i..][..query2.len()] == query2)
}
else { false }
}
pub fn query<G: Scope>(collections: &mut Collections<G>) -> ProbeHandle<G::Timestamp>
where G::Timestamp: Lattice+Ord |
collections
.parts()
.flat_map(|p|
if !starts_with(&p.brand, b"Brand45") && !starts_with(p.typ.as_bytes(), b"MEDIUM POLISHED") && [49, 14, 23, 45, 19, 3, 36, 9].contains(&p.size) {
Some((p.part_key, (p.brand, p.typ.to_string(), p.size)))
}
else { None }
)
.semijoin_u(&parts)
.count()
.probe()
} | {
println!("TODO: query 16 could use a count_u if it joins after to re-collect its attributes");
let suppliers =
collections
.suppliers()
.flat_map(|s|
if substring2(s.comment.as_bytes(), b"Customer", b"Complaints") {
Some(s.supp_key)
}
else { None }
);
let parts = collections
.partsupps()
.map(|ps| (ps.supp_key, ps.part_key))
.antijoin_u(&suppliers)
.map(|(_supp_key, part_key)| part_key); | identifier_body |
query16.rs | use timely::dataflow::*;
// use timely::dataflow::operators::*;
use timely::dataflow::operators::probe::Handle as ProbeHandle;
// use differential_dataflow::AsCollection;
use differential_dataflow::operators::*;
use differential_dataflow::lattice::Lattice;
use ::Collections;
// -- $ID$
// -- TPC-H/TPC-R Parts/Supplier Relationship Query (Q16)
// -- Functional Query Definition
// -- Approved February 1998
// :x
// :o
// select
// p_brand,
// p_type,
// p_size,
// count(distinct ps_suppkey) as supplier_cnt
// from
// partsupp,
// part
// where
// p_partkey = ps_partkey
// and p_brand <> ':1'
// and p_type not like ':2%'
// and p_size in (:3, :4, :5, :6, :7, :8, :9, :10)
// and ps_suppkey not in (
// select
// s_suppkey
// from
// supplier
// where
// s_comment like '%Customer%Complaints%'
// )
// group by
// p_brand,
// p_type,
// p_size
// order by
// supplier_cnt desc,
// p_brand,
// p_type,
// p_size;
// :n -1
fn starts_with(source: &[u8], query: &[u8]) -> bool {
source.len() >= query.len() && &source[..query.len()] == query
}
fn substring2(source: &[u8], query1: &[u8], query2: &[u8]) -> bool {
if let Some(pos) = (0.. (source.len() - query1.len())).position(|i| &source[i..][..query1.len()] == query1) {
(pos.. query2.len()).any(|i| &source[i..][..query2.len()] == query2)
}
else { false }
}
pub fn | <G: Scope>(collections: &mut Collections<G>) -> ProbeHandle<G::Timestamp>
where G::Timestamp: Lattice+Ord {
println!("TODO: query 16 could use a count_u if it joins after to re-collect its attributes");
let suppliers =
collections
.suppliers()
.flat_map(|s|
if substring2(s.comment.as_bytes(), b"Customer", b"Complaints") {
Some(s.supp_key)
}
else { None }
);
let parts = collections
.partsupps()
.map(|ps| (ps.supp_key, ps.part_key))
.antijoin_u(&suppliers)
.map(|(_supp_key, part_key)| part_key);
collections
.parts()
.flat_map(|p|
if !starts_with(&p.brand, b"Brand45") && !starts_with(p.typ.as_bytes(), b"MEDIUM POLISHED") && [49, 14, 23, 45, 19, 3, 36, 9].contains(&p.size) {
Some((p.part_key, (p.brand, p.typ.to_string(), p.size)))
}
else { None }
)
.semijoin_u(&parts)
.count()
.probe()
} | query | identifier_name |
capstone.rs | use alloc::string::{String, ToString};
use core::convert::From;
use core::marker::PhantomData;
use libc::{c_int, c_uint, c_void};
use capstone_sys::cs_opt_value::*;
use capstone_sys::*;
use crate::arch::CapstoneBuilder;
use crate::constants::{Arch, Endian, ExtraMode, Mode, OptValue, Syntax};
use crate::error::*;
use crate::ffi::str_from_cstr_ptr;
use crate::instruction::{Insn, InsnDetail, InsnGroupId, InsnId, Instructions, RegId};
/// An instance of the capstone disassembler
///
/// Create with an instance with [`.new()`](Self::new) and disassemble bytes with [`.disasm_all()`](Self::disasm_all).
#[derive(Debug)]
pub struct Capstone {
/// Opaque handle to cs_engine
/// Stored as a pointer to ensure `Capstone` is `!Send`/`!Sync`
csh: *mut c_void,
/// Internal mode bitfield
mode: cs_mode,
/// Internal endian bitfield
endian: cs_mode,
/// Syntax
syntax: cs_opt_value::Type,
/// Internal extra mode bitfield
extra_mode: cs_mode,
/// Whether to get extra details when disassembling
detail_enabled: bool,
/// Whether to skipdata when disassembling
skipdata_enabled: bool,
/// We *must* set `mode`, `extra_mode`, and `endian` at once because `capstone`
/// handles them inside the arch-specific handler. We store the bitwise OR of these flags that
/// can be passed directly to `cs_option()`.
raw_mode: cs_mode,
/// Architecture
arch: Arch,
}
/// Defines a setter on `Capstone` that speculatively changes the arch-specific mode (which
/// includes `mode`, `endian`, and `extra_mode`). The setter takes a `capstone-rs` type and changes
/// the internal `capstone-sys` type.
macro_rules! define_set_mode {
(
$( #[$func_attr:meta] )*
=> $($visibility:ident)*, $fn_name:ident,
$opt_type:ident, $param_name:ident : $param_type:ident ;
$cs_base_type:ident
) => {
$( #[$func_attr] )*
$($visibility)* fn $fn_name(&mut self, $param_name: $param_type) -> CsResult<()> {
let old_val = self.$param_name;
self.$param_name = $cs_base_type::from($param_name);
let old_raw_mode = self.raw_mode;
let new_raw_mode = self.update_raw_mode();
let result = self._set_cs_option(
cs_opt_type::$opt_type,
new_raw_mode.0 as usize,
);
if result.is_err() {
// On error, restore old values
self.raw_mode = old_raw_mode;
self.$param_name = old_val;
}
result
}
}
}
/// Represents that no extra modes are enabled. Can be passed to `Capstone::new_raw()` as the
/// `extra_mode` argument.
pub static NO_EXTRA_MODE: EmptyExtraModeIter = EmptyExtraModeIter(PhantomData);
/// Represents an empty set of `ExtraMode`.
#[derive(Copy, Clone, Debug, Eq, Hash, PartialEq)]
pub struct EmptyExtraModeIter(PhantomData<()>);
impl Iterator for EmptyExtraModeIter {
type Item = ExtraMode;
fn next(&mut self) -> Option<Self::Item> {
None
}
}
impl Capstone {
/// Create a new instance of the decompiler using the builder pattern interface.
/// This is the recommended interface to `Capstone`.
///
/// ```
/// use capstone::prelude::*;
/// let cs = Capstone::new().x86().mode(arch::x86::ArchMode::Mode32).build();
/// ```
#[allow(clippy::new_ret_no_self)]
pub fn new() -> CapstoneBuilder {
CapstoneBuilder::new()
}
/// Create a new instance of the decompiler using the "raw" interface.
/// The user must ensure that only sensible `Arch`/`Mode` combinations are used.
///
/// ```
/// use capstone::{Arch, Capstone, NO_EXTRA_MODE, Mode};
/// let cs = Capstone::new_raw(Arch::X86, Mode::Mode64, NO_EXTRA_MODE, None);
/// assert!(cs.is_ok());
/// ```
pub fn new_raw<T: Iterator<Item = ExtraMode>>(
arch: Arch,
mode: Mode,
extra_mode: T,
endian: Option<Endian>,
) -> CsResult<Capstone> {
let mut handle: csh = 0;
let csarch: cs_arch = arch.into();
let csmode: cs_mode = mode.into();
// todo(tmfink): test valid modes at run time (or modify upstream capstone)
let endian = match endian {
Some(endian) => cs_mode::from(endian),
None => cs_mode(0),
};
let extra_mode = Self::extra_mode_value(extra_mode);
let combined_mode = csmode | endian | extra_mode;
let err = unsafe { cs_open(csarch, combined_mode, &mut handle) };
if cs_err::CS_ERR_OK == err {
let syntax = CS_OPT_SYNTAX_DEFAULT;
let raw_mode = cs_mode(0);
let detail_enabled = false;
let skipdata_enabled = detail_enabled;
let mut cs = Capstone {
csh: handle as *mut c_void,
syntax,
endian,
mode: csmode,
extra_mode,
detail_enabled,
skipdata_enabled,
raw_mode,
arch,
};
cs.update_raw_mode();
Ok(cs)
} else {
Err(err.into())
}
}
/// Disassemble all instructions in buffer
///
/// ```
/// # use capstone::prelude::*;
/// # let cs = Capstone::new().x86().mode(arch::x86::ArchMode::Mode32).build().unwrap();
/// cs.disasm_all(b"\x90", 0x1000).unwrap();
/// ```
pub fn disasm_all<'a>(&'a self, code: &[u8], addr: u64) -> CsResult<Instructions<'a>> {
self.disasm(code, addr, 0)
}
/// Disassemble `count` instructions in `code`
pub fn disasm_count<'a>(
&'a self,
code: &[u8],
addr: u64,
count: usize,
) -> CsResult<Instructions<'a>> {
if count == 0 {
return Err(Error::CustomError("Invalid dissasemble count; must be > 0"));
}
self.disasm(code, addr, count)
}
/// Disassembles a `&[u8]` full of instructions.
///
/// Pass `count = 0` to disassemble all instructions in the buffer.
fn disasm<'a>(&'a self, code: &[u8], addr: u64, count: usize) -> CsResult<Instructions<'a>> {
// SAFETY NOTE: `cs_disasm()` will write the error state into the
// `struct cs_struct` (true form of the `self.csh`) `errnum` field.
// CLAIM: since:
// - `Capstone` is not `Send`/`Sync`
// - The mutation is done through a `*mut c_void` (not through a const reference)
// it *should* be safe to accept `&self` (instead of `&mut self`) in this method.
let mut ptr: *mut cs_insn = core::ptr::null_mut();
let insn_count = unsafe {
cs_disasm(
self.csh(),
code.as_ptr(),
code.len() as usize,
addr,
count as usize,
&mut ptr,
)
};
if insn_count == 0 {
match self.error_result() {
Ok(_) => Ok(Instructions::new_empty()),
Err(err) => Err(err),
}
} else {
Ok(unsafe { Instructions::from_raw_parts(ptr, insn_count) })
}
}
/// Returns csh handle
#[inline]
fn csh(&self) -> csh {
self.csh as csh
}
/// Returns the raw mode value, which is useful for debugging
#[allow(dead_code)]
pub(crate) fn raw_mode(&self) -> cs_mode {
self.raw_mode
}
/// Update `raw_mode` with the bitwise OR of `mode`, `extra_mode`, and `endian`.
///
/// Returns the new `raw_mode`.
fn update_raw_mode(&mut self) -> cs_mode {
self.raw_mode = self.mode | self.extra_mode | self.endian;
self.raw_mode
}
/// Return the integer value used by capstone to represent the set of extra modes
fn extra_mode_value<T: Iterator<Item = ExtraMode>>(extra_mode: T) -> cs_mode {
// Bitwise OR extra modes
extra_mode.fold(cs_mode(0), |acc, x| acc | cs_mode::from(x))
}
/// Set extra modes in addition to normal `mode`
pub fn | <T: Iterator<Item = ExtraMode>>(&mut self, extra_mode: T) -> CsResult<()> {
let old_val = self.extra_mode;
self.extra_mode = Self::extra_mode_value(extra_mode);
let old_mode = self.raw_mode;
let new_mode = self.update_raw_mode();
let result = self._set_cs_option(cs_opt_type::CS_OPT_MODE, new_mode.0 as usize);
if result.is_err() {
// On error, restore old values
self.raw_mode = old_mode;
self.extra_mode = old_val;
}
result
}
/// Set the assembly syntax (has no effect on some platforms)
pub fn set_syntax(&mut self, syntax: Syntax) -> CsResult<()> {
// Todo(tmfink) check for valid syntax
let syntax_int = cs_opt_value::Type::from(syntax);
let result = self._set_cs_option(cs_opt_type::CS_OPT_SYNTAX, syntax_int as usize);
if result.is_ok() {
self.syntax = syntax_int;
}
result
}
define_set_mode!(
/// Set the endianness (has no effect on some platforms).
=> pub, set_endian, CS_OPT_MODE, endian : Endian; cs_mode);
define_set_mode!(
/// Sets the engine's disassembly mode.
/// Be careful, various combinations of modes aren't supported
/// See the capstone-sys documentation for more information.
=> pub, set_mode, CS_OPT_MODE, mode : Mode; cs_mode);
/// Returns a `CsResult` based on current `errno`.
/// If the `errno` is `CS_ERR_OK`, then `Ok(())` is returned. Otherwise, the error is returned.
fn error_result(&self) -> CsResult<()> {
let errno = unsafe { cs_errno(self.csh()) };
if errno == cs_err::CS_ERR_OK {
Ok(())
} else {
Err(errno.into())
}
}
/// Sets disassembling options at runtime.
///
/// Acts as a safe wrapper around capstone's `cs_option`.
fn _set_cs_option(&mut self, option_type: cs_opt_type, option_value: usize) -> CsResult<()> {
let err = unsafe { cs_option(self.csh(), option_type, option_value) };
if cs_err::CS_ERR_OK == err {
Ok(())
} else {
Err(err.into())
}
}
/// Controls whether to capstone will generate extra details about disassembled instructions.
///
/// Pass `true` to enable detail or `false` to disable detail.
pub fn set_detail(&mut self, enable_detail: bool) -> CsResult<()> {
let option_value: usize = OptValue::from(enable_detail).0 as usize;
let result = self._set_cs_option(cs_opt_type::CS_OPT_DETAIL, option_value);
// Only update internal state on success
if result.is_ok() {
self.detail_enabled = enable_detail;
}
result
}
/// Controls whether capstone will skip over invalid or data instructions.
///
/// Pass `true` to enable skipdata or `false` to disable skipdata.
pub fn set_skipdata(&mut self, enable_skipdata: bool) -> CsResult<()> {
let option_value: usize = OptValue::from(enable_skipdata).0 as usize;
let result = self._set_cs_option(cs_opt_type::CS_OPT_SKIPDATA, option_value);
// Only update internal state on success
if result.is_ok() {
self.skipdata_enabled = enable_skipdata;
}
result
}
/// Converts a register id `reg_id` to a `String` containing the register name.
pub fn reg_name(&self, reg_id: RegId) -> Option<String> {
let reg_name = unsafe {
let _reg_name = cs_reg_name(self.csh(), c_uint::from(reg_id.0));
str_from_cstr_ptr(_reg_name)?.to_string()
};
Some(reg_name)
}
/// Converts an instruction id `insn_id` to a `String` containing the instruction name.
///
/// Note: This function ignores the current syntax and uses the default syntax.
pub fn insn_name(&self, insn_id: InsnId) -> Option<String> {
let insn_name = unsafe {
let _insn_name = cs_insn_name(self.csh(), insn_id.0 as c_uint);
str_from_cstr_ptr(_insn_name)?.to_string()
};
Some(insn_name)
}
/// Converts a group id `group_id` to a `String` containing the group name.
pub fn group_name(&self, group_id: InsnGroupId) -> Option<String> {
let group_name = unsafe {
let _group_name = cs_group_name(self.csh(), c_uint::from(group_id.0));
str_from_cstr_ptr(_group_name)?.to_string()
};
Some(group_name)
}
/// Returns `Detail` structure for a given instruction
///
/// Requires:
///
/// 1. Instruction was created with detail enabled
/// 2. Skipdata is disabled
/// 3. Capstone was not compiled in diet mode
pub fn insn_detail<'s, 'i: 's>(&'s self, insn: &'i Insn) -> CsResult<InsnDetail<'i>> {
if !self.detail_enabled {
Err(Error::DetailOff)
} else if insn.id().0 == 0 {
Err(Error::IrrelevantDataInSkipData)
} else if Self::is_diet() {
Err(Error::IrrelevantDataInDiet)
} else {
Ok(unsafe { insn.detail(self.arch) })
}
}
/// Returns a tuple (major, minor) indicating the version of the capstone C library.
pub fn lib_version() -> (u32, u32) {
let mut major: c_int = 0;
let mut minor: c_int = 0;
let major_ptr: *mut c_int = &mut major;
let minor_ptr: *mut c_int = &mut minor;
// We can ignore the hexadecimal version returned by capstone because we already have the
// major and minor versions
let _ = unsafe { cs_version(major_ptr, minor_ptr) };
(major as u32, minor as u32)
}
/// Returns whether the capstone library supports a given architecture.
pub fn supports_arch(arch: Arch) -> bool {
unsafe { cs_support(arch as c_int) }
}
/// Returns whether the capstone library was compiled in diet mode.
pub fn is_diet() -> bool {
unsafe { cs_support(CS_SUPPORT_DIET as c_int) }
}
}
impl Drop for Capstone {
fn drop(&mut self) {
unsafe { cs_close(&mut self.csh()) };
}
}
| set_extra_mode | identifier_name |
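The row above ends with its extracted middle (`set_extra_mode`) and its `fim_type` label (`identifier_name`); the rows that follow use two more labels, `identifier_body` and `random_line_split`. A minimal sketch of mapping the label column onto an enum — the type and function names here are illustrative, not part of the dataset schema:

```rust
// Hypothetical parser for the `fim_type` label column seen in this dump.
#[derive(Debug, PartialEq)]
enum FimType {
    IdentifierName,  // middle is a single identifier (e.g. a function name)
    IdentifierBody,  // middle is a whole item body (e.g. a function's `{ ... }`)
    RandomLineSplit, // middle is a run of lines cut at an arbitrary point
}

fn parse_fim_type(label: &str) -> Option<FimType> {
    match label.trim() {
        "identifier_name" => Some(FimType::IdentifierName),
        "identifier_body" => Some(FimType::IdentifierBody),
        "random_line_split" => Some(FimType::RandomLineSplit),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_fim_type(" identifier_name "), Some(FimType::IdentifierName));
    assert_eq!(parse_fim_type("bogus"), None);
    println!("ok");
}
```

Unknown labels return `None` rather than panicking, so new split strategies added to the dataset degrade gracefully.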
capstone.rs | use alloc::string::{String, ToString};
use core::convert::From;
use core::marker::PhantomData;
use libc::{c_int, c_uint, c_void};
use capstone_sys::cs_opt_value::*;
use capstone_sys::*;
use crate::arch::CapstoneBuilder;
use crate::constants::{Arch, Endian, ExtraMode, Mode, OptValue, Syntax};
use crate::error::*;
use crate::ffi::str_from_cstr_ptr;
use crate::instruction::{Insn, InsnDetail, InsnGroupId, InsnId, Instructions, RegId};
/// An instance of the capstone disassembler
///
/// Create an instance with [`.new()`](Self::new) and disassemble bytes with [`.disasm_all()`](Self::disasm_all).
#[derive(Debug)]
pub struct Capstone {
/// Opaque handle to cs_engine
/// Stored as a pointer to ensure `Capstone` is `!Send`/`!Sync`
csh: *mut c_void,
/// Internal mode bitfield
mode: cs_mode,
/// Internal endian bitfield
endian: cs_mode,
/// Syntax
syntax: cs_opt_value::Type,
/// Internal extra mode bitfield
extra_mode: cs_mode,
/// Whether to get extra details when disassembling
detail_enabled: bool,
/// Whether to skipdata when disassembling
skipdata_enabled: bool,
/// We *must* set `mode`, `extra_mode`, and `endian` at once because `capstone`
/// handles them inside the arch-specific handler. We store the bitwise OR of these flags that
/// can be passed directly to `cs_option()`.
raw_mode: cs_mode,
/// Architecture
arch: Arch,
}
/// Defines a setter on `Capstone` that speculatively changes the arch-specific mode (which
/// includes `mode`, `endian`, and `extra_mode`). The setter takes a `capstone-rs` type and changes
/// the internal `capstone-sys` type.
macro_rules! define_set_mode {
(
$( #[$func_attr:meta] )*
=> $($visibility:ident)*, $fn_name:ident,
$opt_type:ident, $param_name:ident : $param_type:ident ;
$cs_base_type:ident
) => {
$( #[$func_attr] )*
$($visibility)* fn $fn_name(&mut self, $param_name: $param_type) -> CsResult<()> {
let old_val = self.$param_name;
self.$param_name = $cs_base_type::from($param_name);
let old_raw_mode = self.raw_mode;
let new_raw_mode = self.update_raw_mode();
let result = self._set_cs_option(
cs_opt_type::$opt_type,
new_raw_mode.0 as usize,
);
if result.is_err() {
// On error, restore old values
self.raw_mode = old_raw_mode;
self.$param_name = old_val;
}
result
}
}
}
/// Represents that no extra modes are enabled. Can be passed to `Capstone::new_raw()` as the
/// `extra_mode` argument.
pub static NO_EXTRA_MODE: EmptyExtraModeIter = EmptyExtraModeIter(PhantomData);
/// Represents an empty set of `ExtraMode`.
#[derive(Copy, Clone, Debug, Eq, Hash, PartialEq)]
pub struct EmptyExtraModeIter(PhantomData<()>);
impl Iterator for EmptyExtraModeIter {
type Item = ExtraMode;
fn next(&mut self) -> Option<Self::Item> {
None
}
}
impl Capstone {
/// Create a new instance of the disassembler using the builder pattern interface.
/// This is the recommended interface to `Capstone`.
///
/// ```
/// use capstone::prelude::*;
/// let cs = Capstone::new().x86().mode(arch::x86::ArchMode::Mode32).build();
/// ```
#[allow(clippy::new_ret_no_self)]
pub fn new() -> CapstoneBuilder {
CapstoneBuilder::new()
}
/// Create a new instance of the disassembler using the "raw" interface.
/// The user must ensure that only sensible `Arch`/`Mode` combinations are used.
///
/// ```
/// use capstone::{Arch, Capstone, NO_EXTRA_MODE, Mode};
/// let cs = Capstone::new_raw(Arch::X86, Mode::Mode64, NO_EXTRA_MODE, None);
/// assert!(cs.is_ok());
/// ```
pub fn new_raw<T: Iterator<Item = ExtraMode>>(
arch: Arch,
mode: Mode,
extra_mode: T,
endian: Option<Endian>,
) -> CsResult<Capstone> {
let mut handle: csh = 0;
let csarch: cs_arch = arch.into();
let csmode: cs_mode = mode.into();
// todo(tmfink): test valid modes at run time (or modify upstream capstone)
let endian = match endian {
Some(endian) => cs_mode::from(endian),
None => cs_mode(0),
};
let extra_mode = Self::extra_mode_value(extra_mode);
let combined_mode = csmode | endian | extra_mode;
let err = unsafe { cs_open(csarch, combined_mode, &mut handle) };
if cs_err::CS_ERR_OK == err {
let syntax = CS_OPT_SYNTAX_DEFAULT;
let raw_mode = cs_mode(0);
let detail_enabled = false;
let skipdata_enabled = detail_enabled;
let mut cs = Capstone {
csh: handle as *mut c_void,
syntax,
endian,
mode: csmode,
extra_mode,
detail_enabled,
skipdata_enabled,
raw_mode,
arch,
};
cs.update_raw_mode();
Ok(cs)
} else {
Err(err.into())
}
}
/// Disassemble all instructions in buffer
///
/// ```
/// # use capstone::prelude::*;
/// # let cs = Capstone::new().x86().mode(arch::x86::ArchMode::Mode32).build().unwrap();
/// cs.disasm_all(b"\x90", 0x1000).unwrap();
/// ```
pub fn disasm_all<'a>(&'a self, code: &[u8], addr: u64) -> CsResult<Instructions<'a>> {
self.disasm(code, addr, 0)
}
/// Disassemble `count` instructions in `code`
pub fn disasm_count<'a>(
&'a self,
code: &[u8],
addr: u64,
count: usize,
) -> CsResult<Instructions<'a>> {
if count == 0 {
return Err(Error::CustomError("Invalid disassemble count; must be > 0"));
}
self.disasm(code, addr, count)
}
/// Disassembles a `&[u8]` full of instructions.
///
/// Pass `count = 0` to disassemble all instructions in the buffer.
fn disasm<'a>(&'a self, code: &[u8], addr: u64, count: usize) -> CsResult<Instructions<'a>> {
// SAFETY NOTE: `cs_disasm()` will write the error state into the
// `struct cs_struct` (true form of the `self.csh`) `errnum` field.
// CLAIM: since:
// - `Capstone` is not `Send`/`Sync`
// - The mutation is done through a `*mut c_void` (not through a const reference)
// it *should* be safe to accept `&self` (instead of `&mut self`) in this method.
let mut ptr: *mut cs_insn = core::ptr::null_mut();
let insn_count = unsafe {
cs_disasm(
self.csh(),
code.as_ptr(),
code.len() as usize,
addr,
count as usize,
&mut ptr,
)
};
if insn_count == 0 {
match self.error_result() {
Ok(_) => Ok(Instructions::new_empty()),
Err(err) => Err(err),
}
} else {
Ok(unsafe { Instructions::from_raw_parts(ptr, insn_count) })
}
}
/// Returns csh handle
#[inline]
fn csh(&self) -> csh {
self.csh as csh
}
/// Returns the raw mode value, which is useful for debugging
#[allow(dead_code)]
pub(crate) fn raw_mode(&self) -> cs_mode {
self.raw_mode
}
/// Update `raw_mode` with the bitwise OR of `mode`, `extra_mode`, and `endian`.
///
/// Returns the new `raw_mode`.
fn update_raw_mode(&mut self) -> cs_mode {
self.raw_mode = self.mode | self.extra_mode | self.endian;
self.raw_mode
}
/// Return the integer value used by capstone to represent the set of extra modes
fn extra_mode_value<T: Iterator<Item = ExtraMode>>(extra_mode: T) -> cs_mode {
// Bitwise OR extra modes
extra_mode.fold(cs_mode(0), |acc, x| acc | cs_mode::from(x))
}
/// Set extra modes in addition to normal `mode`
pub fn set_extra_mode<T: Iterator<Item = ExtraMode>>(&mut self, extra_mode: T) -> CsResult<()> {
let old_val = self.extra_mode;
self.extra_mode = Self::extra_mode_value(extra_mode);
let old_mode = self.raw_mode;
let new_mode = self.update_raw_mode();
let result = self._set_cs_option(cs_opt_type::CS_OPT_MODE, new_mode.0 as usize);
if result.is_err() {
// On error, restore old values
self.raw_mode = old_mode;
self.extra_mode = old_val;
}
result
}
/// Set the assembly syntax (has no effect on some platforms)
pub fn set_syntax(&mut self, syntax: Syntax) -> CsResult<()> {
// Todo(tmfink) check for valid syntax
let syntax_int = cs_opt_value::Type::from(syntax);
let result = self._set_cs_option(cs_opt_type::CS_OPT_SYNTAX, syntax_int as usize);
if result.is_ok() {
self.syntax = syntax_int;
}
result
}
define_set_mode!(
/// Set the endianness (has no effect on some platforms).
=> pub, set_endian, CS_OPT_MODE, endian : Endian; cs_mode);
define_set_mode!(
/// Sets the engine's disassembly mode.
/// Be careful: various combinations of modes aren't supported.
/// See the capstone-sys documentation for more information.
=> pub, set_mode, CS_OPT_MODE, mode : Mode; cs_mode);
/// Returns a `CsResult` based on current `errno`.
/// If the `errno` is `CS_ERR_OK`, then `Ok(())` is returned. Otherwise, the error is returned.
fn error_result(&self) -> CsResult<()> {
let errno = unsafe { cs_errno(self.csh()) };
if errno == cs_err::CS_ERR_OK {
Ok(())
} else {
Err(errno.into())
}
}
/// Sets disassembling options at runtime.
///
/// Acts as a safe wrapper around capstone's `cs_option`.
fn _set_cs_option(&mut self, option_type: cs_opt_type, option_value: usize) -> CsResult<()> |
/// Controls whether capstone will generate extra details about disassembled instructions.
///
/// Pass `true` to enable detail or `false` to disable detail.
pub fn set_detail(&mut self, enable_detail: bool) -> CsResult<()> {
let option_value: usize = OptValue::from(enable_detail).0 as usize;
let result = self._set_cs_option(cs_opt_type::CS_OPT_DETAIL, option_value);
// Only update internal state on success
if result.is_ok() {
self.detail_enabled = enable_detail;
}
result
}
/// Controls whether capstone will skip over invalid or data instructions.
///
/// Pass `true` to enable skipdata or `false` to disable skipdata.
pub fn set_skipdata(&mut self, enable_skipdata: bool) -> CsResult<()> {
let option_value: usize = OptValue::from(enable_skipdata).0 as usize;
let result = self._set_cs_option(cs_opt_type::CS_OPT_SKIPDATA, option_value);
// Only update internal state on success
if result.is_ok() {
self.skipdata_enabled = enable_skipdata;
}
result
}
/// Converts a register id `reg_id` to a `String` containing the register name.
pub fn reg_name(&self, reg_id: RegId) -> Option<String> {
let reg_name = unsafe {
let _reg_name = cs_reg_name(self.csh(), c_uint::from(reg_id.0));
str_from_cstr_ptr(_reg_name)?.to_string()
};
Some(reg_name)
}
/// Converts an instruction id `insn_id` to a `String` containing the instruction name.
///
/// Note: This function ignores the current syntax and uses the default syntax.
pub fn insn_name(&self, insn_id: InsnId) -> Option<String> {
let insn_name = unsafe {
let _insn_name = cs_insn_name(self.csh(), insn_id.0 as c_uint);
str_from_cstr_ptr(_insn_name)?.to_string()
};
Some(insn_name)
}
/// Converts a group id `group_id` to a `String` containing the group name.
pub fn group_name(&self, group_id: InsnGroupId) -> Option<String> {
let group_name = unsafe {
let _group_name = cs_group_name(self.csh(), c_uint::from(group_id.0));
str_from_cstr_ptr(_group_name)?.to_string()
};
Some(group_name)
}
/// Returns `Detail` structure for a given instruction
///
/// Requires:
///
/// 1. Instruction was created with detail enabled
/// 2. Skipdata is disabled
/// 3. Capstone was not compiled in diet mode
pub fn insn_detail<'s, 'i: 's>(&'s self, insn: &'i Insn) -> CsResult<InsnDetail<'i>> {
if !self.detail_enabled {
Err(Error::DetailOff)
} else if insn.id().0 == 0 {
Err(Error::IrrelevantDataInSkipData)
} else if Self::is_diet() {
Err(Error::IrrelevantDataInDiet)
} else {
Ok(unsafe { insn.detail(self.arch) })
}
}
/// Returns a tuple (major, minor) indicating the version of the capstone C library.
pub fn lib_version() -> (u32, u32) {
let mut major: c_int = 0;
let mut minor: c_int = 0;
let major_ptr: *mut c_int = &mut major;
let minor_ptr: *mut c_int = &mut minor;
// We can ignore the hexadecimal version returned by capstone because we already have the
// major and minor versions
let _ = unsafe { cs_version(major_ptr, minor_ptr) };
(major as u32, minor as u32)
}
/// Returns whether the capstone library supports a given architecture.
pub fn supports_arch(arch: Arch) -> bool {
unsafe { cs_support(arch as c_int) }
}
/// Returns whether the capstone library was compiled in diet mode.
pub fn is_diet() -> bool {
unsafe { cs_support(CS_SUPPORT_DIET as c_int) }
}
}
impl Drop for Capstone {
fn drop(&mut self) {
unsafe { cs_close(&mut self.csh()) };
}
}
| {
let err = unsafe { cs_option(self.csh(), option_type, option_value) };
if cs_err::CS_ERR_OK == err {
Ok(())
} else {
Err(err.into())
}
} | identifier_body |
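The row above carries a whole function body as its `middle` (`fim_type` = `identifier_body`). Per the column order in the file metadata (`prefix`, `suffix`, `middle`), the original source is recovered by concatenating prefix + middle + suffix. A small sketch, with an illustrative function name:

```rust
// Reassemble one FIM row back into the original source text.
// Parameter order follows the dataset's column order: prefix, suffix, middle.
fn reassemble(prefix: &str, suffix: &str, middle: &str) -> String {
    let mut out = String::with_capacity(prefix.len() + middle.len() + suffix.len());
    out.push_str(prefix);
    out.push_str(middle); // the extracted span goes back between prefix and suffix
    out.push_str(suffix);
    out
}

fn main() {
    let prefix = "fn csh(&self) -> csh ";
    let middle = "{ self.csh as csh }";
    let suffix = "\n";
    assert_eq!(reassemble(prefix, suffix, middle), "fn csh(&self) -> csh { self.csh as csh }\n");
    println!("ok");
}
```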
capstone.rs | use alloc::string::{String, ToString};
use core::convert::From;
use core::marker::PhantomData;
use libc::{c_int, c_uint, c_void};
use capstone_sys::cs_opt_value::*;
use capstone_sys::*;
use crate::arch::CapstoneBuilder;
use crate::constants::{Arch, Endian, ExtraMode, Mode, OptValue, Syntax};
use crate::error::*;
use crate::ffi::str_from_cstr_ptr;
use crate::instruction::{Insn, InsnDetail, InsnGroupId, InsnId, Instructions, RegId};
/// An instance of the capstone disassembler
///
/// Create an instance with [`.new()`](Self::new) and disassemble bytes with [`.disasm_all()`](Self::disasm_all).
#[derive(Debug)]
pub struct Capstone {
/// Opaque handle to cs_engine
/// Stored as a pointer to ensure `Capstone` is `!Send`/`!Sync`
csh: *mut c_void,
/// Internal mode bitfield
mode: cs_mode,
/// Internal endian bitfield
endian: cs_mode,
/// Syntax
syntax: cs_opt_value::Type,
/// Internal extra mode bitfield
extra_mode: cs_mode,
/// Whether to get extra details when disassembling
detail_enabled: bool,
/// Whether to skipdata when disassembling
skipdata_enabled: bool,
/// We *must* set `mode`, `extra_mode`, and `endian` at once because `capstone`
/// handles them inside the arch-specific handler. We store the bitwise OR of these flags that
/// can be passed directly to `cs_option()`.
raw_mode: cs_mode,
/// Architecture
arch: Arch,
}
/// Defines a setter on `Capstone` that speculatively changes the arch-specific mode (which
/// includes `mode`, `endian`, and `extra_mode`). The setter takes a `capstone-rs` type and changes
/// the internal `capstone-sys` type.
macro_rules! define_set_mode {
(
$( #[$func_attr:meta] )*
=> $($visibility:ident)*, $fn_name:ident,
$opt_type:ident, $param_name:ident : $param_type:ident ;
$cs_base_type:ident
) => {
$( #[$func_attr] )*
$($visibility)* fn $fn_name(&mut self, $param_name: $param_type) -> CsResult<()> {
let old_val = self.$param_name;
self.$param_name = $cs_base_type::from($param_name);
let old_raw_mode = self.raw_mode;
let new_raw_mode = self.update_raw_mode();
let result = self._set_cs_option(
cs_opt_type::$opt_type,
new_raw_mode.0 as usize,
);
if result.is_err() {
// On error, restore old values
self.raw_mode = old_raw_mode;
self.$param_name = old_val;
}
result
}
}
}
/// Represents that no extra modes are enabled. Can be passed to `Capstone::new_raw()` as the
/// `extra_mode` argument.
pub static NO_EXTRA_MODE: EmptyExtraModeIter = EmptyExtraModeIter(PhantomData);
/// Represents an empty set of `ExtraMode`.
#[derive(Copy, Clone, Debug, Eq, Hash, PartialEq)]
pub struct EmptyExtraModeIter(PhantomData<()>);
impl Iterator for EmptyExtraModeIter {
type Item = ExtraMode;
fn next(&mut self) -> Option<Self::Item> {
None
}
}
impl Capstone {
/// Create a new instance of the disassembler using the builder pattern interface.
/// This is the recommended interface to `Capstone`.
///
/// ```
/// use capstone::prelude::*;
/// let cs = Capstone::new().x86().mode(arch::x86::ArchMode::Mode32).build();
/// ```
#[allow(clippy::new_ret_no_self)]
pub fn new() -> CapstoneBuilder {
CapstoneBuilder::new()
}
/// Create a new instance of the disassembler using the "raw" interface.
/// The user must ensure that only sensible `Arch`/`Mode` combinations are used.
/// | /// assert!(cs.is_ok());
/// ```
pub fn new_raw<T: Iterator<Item = ExtraMode>>(
arch: Arch,
mode: Mode,
extra_mode: T,
endian: Option<Endian>,
) -> CsResult<Capstone> {
let mut handle: csh = 0;
let csarch: cs_arch = arch.into();
let csmode: cs_mode = mode.into();
// todo(tmfink): test valid modes at run time (or modify upstream capstone)
let endian = match endian {
Some(endian) => cs_mode::from(endian),
None => cs_mode(0),
};
let extra_mode = Self::extra_mode_value(extra_mode);
let combined_mode = csmode | endian | extra_mode;
let err = unsafe { cs_open(csarch, combined_mode, &mut handle) };
if cs_err::CS_ERR_OK == err {
let syntax = CS_OPT_SYNTAX_DEFAULT;
let raw_mode = cs_mode(0);
let detail_enabled = false;
let skipdata_enabled = detail_enabled;
let mut cs = Capstone {
csh: handle as *mut c_void,
syntax,
endian,
mode: csmode,
extra_mode,
detail_enabled,
skipdata_enabled,
raw_mode,
arch,
};
cs.update_raw_mode();
Ok(cs)
} else {
Err(err.into())
}
}
/// Disassemble all instructions in buffer
///
/// ```
/// # use capstone::prelude::*;
/// # let cs = Capstone::new().x86().mode(arch::x86::ArchMode::Mode32).build().unwrap();
/// cs.disasm_all(b"\x90", 0x1000).unwrap();
/// ```
pub fn disasm_all<'a>(&'a self, code: &[u8], addr: u64) -> CsResult<Instructions<'a>> {
self.disasm(code, addr, 0)
}
/// Disassemble `count` instructions in `code`
pub fn disasm_count<'a>(
&'a self,
code: &[u8],
addr: u64,
count: usize,
) -> CsResult<Instructions<'a>> {
if count == 0 {
return Err(Error::CustomError("Invalid disassemble count; must be > 0"));
}
self.disasm(code, addr, count)
}
/// Disassembles a `&[u8]` full of instructions.
///
/// Pass `count = 0` to disassemble all instructions in the buffer.
fn disasm<'a>(&'a self, code: &[u8], addr: u64, count: usize) -> CsResult<Instructions<'a>> {
// SAFETY NOTE: `cs_disasm()` will write the error state into the
// `struct cs_struct` (true form of the `self.csh`) `errnum` field.
// CLAIM: since:
// - `Capstone` is not `Send`/`Sync`
// - The mutation is done through a `*mut c_void` (not through a const reference)
// it *should* be safe to accept `&self` (instead of `&mut self`) in this method.
let mut ptr: *mut cs_insn = core::ptr::null_mut();
let insn_count = unsafe {
cs_disasm(
self.csh(),
code.as_ptr(),
code.len() as usize,
addr,
count as usize,
&mut ptr,
)
};
if insn_count == 0 {
match self.error_result() {
Ok(_) => Ok(Instructions::new_empty()),
Err(err) => Err(err),
}
} else {
Ok(unsafe { Instructions::from_raw_parts(ptr, insn_count) })
}
}
/// Returns csh handle
#[inline]
fn csh(&self) -> csh {
self.csh as csh
}
/// Returns the raw mode value, which is useful for debugging
#[allow(dead_code)]
pub(crate) fn raw_mode(&self) -> cs_mode {
self.raw_mode
}
/// Update `raw_mode` with the bitwise OR of `mode`, `extra_mode`, and `endian`.
///
/// Returns the new `raw_mode`.
fn update_raw_mode(&mut self) -> cs_mode {
self.raw_mode = self.mode | self.extra_mode | self.endian;
self.raw_mode
}
/// Return the integer value used by capstone to represent the set of extra modes
fn extra_mode_value<T: Iterator<Item = ExtraMode>>(extra_mode: T) -> cs_mode {
// Bitwise OR extra modes
extra_mode.fold(cs_mode(0), |acc, x| acc | cs_mode::from(x))
}
/// Set extra modes in addition to normal `mode`
pub fn set_extra_mode<T: Iterator<Item = ExtraMode>>(&mut self, extra_mode: T) -> CsResult<()> {
let old_val = self.extra_mode;
self.extra_mode = Self::extra_mode_value(extra_mode);
let old_mode = self.raw_mode;
let new_mode = self.update_raw_mode();
let result = self._set_cs_option(cs_opt_type::CS_OPT_MODE, new_mode.0 as usize);
if result.is_err() {
// On error, restore old values
self.raw_mode = old_mode;
self.extra_mode = old_val;
}
result
}
/// Set the assembly syntax (has no effect on some platforms)
pub fn set_syntax(&mut self, syntax: Syntax) -> CsResult<()> {
// Todo(tmfink) check for valid syntax
let syntax_int = cs_opt_value::Type::from(syntax);
let result = self._set_cs_option(cs_opt_type::CS_OPT_SYNTAX, syntax_int as usize);
if result.is_ok() {
self.syntax = syntax_int;
}
result
}
define_set_mode!(
/// Set the endianness (has no effect on some platforms).
=> pub, set_endian, CS_OPT_MODE, endian : Endian; cs_mode);
define_set_mode!(
/// Sets the engine's disassembly mode.
/// Be careful: various combinations of modes aren't supported.
/// See the capstone-sys documentation for more information.
=> pub, set_mode, CS_OPT_MODE, mode : Mode; cs_mode);
/// Returns a `CsResult` based on current `errno`.
/// If the `errno` is `CS_ERR_OK`, then `Ok(())` is returned. Otherwise, the error is returned.
fn error_result(&self) -> CsResult<()> {
let errno = unsafe { cs_errno(self.csh()) };
if errno == cs_err::CS_ERR_OK {
Ok(())
} else {
Err(errno.into())
}
}
/// Sets disassembling options at runtime.
///
/// Acts as a safe wrapper around capstone's `cs_option`.
fn _set_cs_option(&mut self, option_type: cs_opt_type, option_value: usize) -> CsResult<()> {
let err = unsafe { cs_option(self.csh(), option_type, option_value) };
if cs_err::CS_ERR_OK == err {
Ok(())
} else {
Err(err.into())
}
}
/// Controls whether capstone will generate extra details about disassembled instructions.
///
/// Pass `true` to enable detail or `false` to disable detail.
pub fn set_detail(&mut self, enable_detail: bool) -> CsResult<()> {
let option_value: usize = OptValue::from(enable_detail).0 as usize;
let result = self._set_cs_option(cs_opt_type::CS_OPT_DETAIL, option_value);
// Only update internal state on success
if result.is_ok() {
self.detail_enabled = enable_detail;
}
result
}
/// Controls whether capstone will skip over invalid or data instructions.
///
/// Pass `true` to enable skipdata or `false` to disable skipdata.
pub fn set_skipdata(&mut self, enable_skipdata: bool) -> CsResult<()> {
let option_value: usize = OptValue::from(enable_skipdata).0 as usize;
let result = self._set_cs_option(cs_opt_type::CS_OPT_SKIPDATA, option_value);
// Only update internal state on success
if result.is_ok() {
self.skipdata_enabled = enable_skipdata;
}
result
}
/// Converts a register id `reg_id` to a `String` containing the register name.
pub fn reg_name(&self, reg_id: RegId) -> Option<String> {
let reg_name = unsafe {
let _reg_name = cs_reg_name(self.csh(), c_uint::from(reg_id.0));
str_from_cstr_ptr(_reg_name)?.to_string()
};
Some(reg_name)
}
/// Converts an instruction id `insn_id` to a `String` containing the instruction name.
///
/// Note: This function ignores the current syntax and uses the default syntax.
pub fn insn_name(&self, insn_id: InsnId) -> Option<String> {
let insn_name = unsafe {
let _insn_name = cs_insn_name(self.csh(), insn_id.0 as c_uint);
str_from_cstr_ptr(_insn_name)?.to_string()
};
Some(insn_name)
}
/// Converts a group id `group_id` to a `String` containing the group name.
pub fn group_name(&self, group_id: InsnGroupId) -> Option<String> {
let group_name = unsafe {
let _group_name = cs_group_name(self.csh(), c_uint::from(group_id.0));
str_from_cstr_ptr(_group_name)?.to_string()
};
Some(group_name)
}
/// Returns `Detail` structure for a given instruction
///
/// Requires:
///
/// 1. Instruction was created with detail enabled
/// 2. Skipdata is disabled
/// 3. Capstone was not compiled in diet mode
pub fn insn_detail<'s, 'i: 's>(&'s self, insn: &'i Insn) -> CsResult<InsnDetail<'i>> {
if !self.detail_enabled {
Err(Error::DetailOff)
} else if insn.id().0 == 0 {
Err(Error::IrrelevantDataInSkipData)
} else if Self::is_diet() {
Err(Error::IrrelevantDataInDiet)
} else {
Ok(unsafe { insn.detail(self.arch) })
}
}
/// Returns a tuple (major, minor) indicating the version of the capstone C library.
pub fn lib_version() -> (u32, u32) {
let mut major: c_int = 0;
let mut minor: c_int = 0;
let major_ptr: *mut c_int = &mut major;
let minor_ptr: *mut c_int = &mut minor;
// We can ignore the hexadecimal version returned by capstone because we already have the
// major and minor versions
let _ = unsafe { cs_version(major_ptr, minor_ptr) };
(major as u32, minor as u32)
}
/// Returns whether the capstone library supports a given architecture.
pub fn supports_arch(arch: Arch) -> bool {
unsafe { cs_support(arch as c_int) }
}
/// Returns whether the capstone library was compiled in diet mode.
pub fn is_diet() -> bool {
unsafe { cs_support(CS_SUPPORT_DIET as c_int) }
}
}
impl Drop for Capstone {
fn drop(&mut self) {
unsafe { cs_close(&mut self.csh()) };
}
} | /// ```
/// use capstone::{Arch, Capstone, NO_EXTRA_MODE, Mode};
/// let cs = Capstone::new_raw(Arch::X86, Mode::Mode64, NO_EXTRA_MODE, None); | random_line_split |
main.rs | mod sqrl_s4;
mod sqrl_crypto;
use sqrl_s4::SqrlS4Identity;
use sqrl_crypto::en_scrypt;
fn main() | {
//let identity = SqrlS4Identity{type1_length: 34, ..Default::default()};
//let mut identity = SqrlS4Identity::default();
let sqrlbinary = b"sqrldata}\x00\x01\x00-\x00\"wQ\x122\x0e\xb5\x891\xfep\x97\xef\xf2e]\xf6\x0fg\x07\x8c_\xda\xd4\xe0Z\xe0\xb8\t\x96\x00\x00\x00\xf3\x01\x04\x05\x0f\x00\x023\x88\xcd\xa0\xd7WN\xf7\x8a\xd19\xf8\x1c]\x13\x87\x06\xc6\xe8\xf8\xb08\xf6\x14\xd9m\x9e\xf6|\x94\xa4\x1fF\xab}\x0e\xd3\xbf\xa3r\xa3^\xb4\xfb\xcc\xe7\x8cQ\x8d\x8dyRl\x05\xf1\x19|\x90\x03\x06\t\xe0\xb3\x85H\x8c\xe0\xa6\x0fQm\xf6\x94q6-\xee\xe0\xe9I\x00\x02\x00\xea\xde\x04q\xa1\xfaO\x8f\x1c\xf5e\xea\xb3)-^\t\xa5\x00\x00\x00\xf9o$\"\x9e\x91\xa6\xa9k\xde\xe2z^&j\xa6\x15\xb5\x04\xf4P\x01e\xcc\xfa\xa8V\xd7\xf4\x94L\xea\xea\xdd><\xcbC\xc5+\xeb\xaf\x18\x88\xf9\xa6\xd4\xce";
let mut identity = SqrlS4Identity::from_binary(sqrlbinary);
println!("identity debug\n{:?}", identity);
println!("identity print\n{}", identity);
identity.type1_length = 125;
println!("identity.type1_length {}", identity.type1_length);
println!("{:?}", en_scrypt(b"", b"", 64, 1));
} | identifier_body |
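The `main.rs` row above hard-codes a `sqrldata` blob whose first block header decodes as a 2-byte little-endian length followed by a 2-byte type: `}\x00` is 125 and `\x01\x00` is type 1, matching the later `identity.type1_length = 125` assignment. A hypothetical sketch of walking such block headers (assuming, as the blob suggests, that the length field counts the whole block including its own header):

```rust
// Hypothetical walk over SQRL S4 "sqrldata" blocks: each block starts with a
// 2-byte little-endian length (covering the whole block) and a 2-byte type.
fn block_headers(data: &[u8]) -> Vec<(u16, u16)> {
    let mut out = Vec::new();
    let mut rest = data.strip_prefix(b"sqrldata").unwrap_or(data);
    while rest.len() >= 4 {
        let len = u16::from_le_bytes([rest[0], rest[1]]);
        let ty = u16::from_le_bytes([rest[2], rest[3]]);
        out.push((len, ty));
        if (len as usize) < 4 || (len as usize) > rest.len() {
            break; // malformed or truncated length; stop rather than loop forever
        }
        rest = &rest[len as usize..];
    }
    out
}

fn main() {
    // First block header of the identity used above: length 125 (0x7d), type 1.
    let data = b"sqrldata}\x00\x01\x00";
    assert_eq!(block_headers(data), vec![(125, 1)]);
    println!("ok");
}
```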
|
main.rs | mod sqrl_s4;
mod sqrl_crypto;
use sqrl_s4::SqrlS4Identity;
use sqrl_crypto::en_scrypt;
fn main() {
//let identity = SqrlS4Identity{type1_length: 34, ..Default::default()};
//let mut identity = SqrlS4Identity::default();
let sqrlbinary = b"sqrldata}\x00\x01\x00-\x00\"wQ\x122\x0e\xb5\x891\xfep\x97\xef\xf2e]\xf6\x0fg\x07\x8c_\xda\xd4\xe0Z\xe0\xb8\t\x96\x00\x00\x00\xf3\x01\x04\x05\x0f\x00\x023\x88\xcd\xa0\xd7WN\xf7\x8a\xd19\xf8\x1c]\x13\x87\x06\xc6\xe8\xf8\xb08\xf6\x14\xd9m\x9e\xf6|\x94\xa4\x1fF\xab}\x0e\xd3\xbf\xa3r\xa3^\xb4\xfb\xcc\xe7\x8cQ\x8d\x8dyRl\x05\xf1\x19|\x90\x03\x06\t\xe0\xb3\x85H\x8c\xe0\xa6\x0fQm\xf6\x94q6-\xee\xe0\xe9I\x00\x02\x00\xea\xde\x04q\xa1\xfaO\x8f\x1c\xf5e\xea\xb3)-^\t\xa5\x00\x00\x00\xf9o$\"\x9e\x91\xa6\xa9k\xde\xe2z^&j\xa6\x15\xb5\x04\xf4P\x01e\xcc\xfa\xa8V\xd7\xf4\x94L\xea\xea\xdd><\xcbC\xc5+\xeb\xaf\x18\x88\xf9\xa6\xd4\xce"; | println!("identity debug\n{:?}", identity);
println!("identity print\n{}", identity);
identity.type1_length = 125;
println!("identity.type1_length {}", identity.type1_length);
println!("{:?}", en_scrypt(b"", b"", 64, 1));
} | let mut identity = SqrlS4Identity::from_binary(sqrlbinary); | random_line_split |
main.rs | mod sqrl_s4;
mod sqrl_crypto;
use sqrl_s4::SqrlS4Identity;
use sqrl_crypto::en_scrypt;
fn | () {
//let identity = SqrlS4Identity{type1_length: 34,..Default::default()};
//let mut identity = SqrlS4Identity::default();
let sqrlbinary = b"sqrldata}\x00\x01\x00-\x00\"wQ\x122\x0e\xb5\x891\xfep\x97\xef\xf2e]\xf6\x0fg\x07\x8c_\xda\xd4\xe0Z\xe0\xb8\t\x96\x00\x00\x00\xf3\x01\x04\x05\x0f\x00\x023\x88\xcd\xa0\xd7WN\xf7\x8a\xd19\xf8\x1c]\x13\x87\x06\xc6\xe8\xf8\xb08\xf6\x14\xd9m\x9e\xf6|\x94\xa4\x1fF\xab}\x0e\xd3\xbf\xa3r\xa3^\xb4\xfb\xcc\xe7\x8cQ\x8d\x8dyRl\x05\xf1\x19|\x90\x03\x06\t\xe0\xb3\x85H\x8c\xe0\xa6\x0fQm\xf6\x94q6-\xee\xe0\xe9I\x00\x02\x00\xea\xde\x04q\xa1\xfaO\x8f\x1c\xf5e\xea\xb3)-^\t\xa5\x00\x00\x00\xf9o$\"\x9e\x91\xa6\xa9k\xde\xe2z^&j\xa6\x15\xb5\x04\xf4P\x01e\xcc\xfa\xa8V\xd7\xf4\x94L\xea\xea\xdd><\xcbC\xc5+\xeb\xaf\x18\x88\xf9\xa6\xd4\xce";
let mut identity = SqrlS4Identity::from_binary(sqrlbinary);
println!("identity debug\n{:?}", identity);
println!("identity print\n{}", identity);
identity.type1_length = 125;
println!("identity.type1_length {}", identity.type1_length);
println!("{:?}", en_scrypt(b"", b"", 64, 1));
}
| main | identifier_name |
error.rs | use std::error::Error;
use std::fmt;
use hyper::client::Response;
use hyper::error::HttpError;
use rustc_serialize::json::Json;
/// An error returned by `Client` when an API call fails.
pub struct FleetError {
/// An HTTP status code returned by the fleet API.
pub code: Option<u16>,
/// A message describing the error. This message comes from fleet directly whenever fleet
/// provides a message.
pub message: Option<String>,
}
impl FleetError {
/// Constructs a new `FleetError` from a `hyper::error::HttpError`. Not intended for public
/// use.
pub fn from_hyper_error(error: &HttpError) -> FleetError {
FleetError {
code: None,
message: Some(error.description().to_string()),
}
}
/// Constructs a new `FleetError` from a `hyper::client::Response`. Not intended for public
/// use.
pub fn from_hyper_response(response: &mut Response) -> FleetError {
FleetError {
code: Some(response.status.to_u16()),
message: extract_message(response),
}
}
}
impl fmt::Display for FleetError {
fn | (&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(
f,
"{}: {}",
self.code.unwrap_or(0),
self.message.clone().unwrap_or("Unknown error".to_string()),
)
}
}
fn extract_message(response: &mut Response) -> Option<String> {
match Json::from_reader(response) {
Ok(json) => {
match json.find_path(&["error", "message"]) {
Some(message_json) => match message_json.as_string() {
Some(message) => {
if message.len() == 0 {
Some("Error in JSON response from Fleet was empty".to_string())
} else {
Some(message.to_string())
}
},
None => Some("Error in JSON response from Fleet was empty".to_string()),
},
None => Some("Error parsing JSON response from Fleet".to_string()),
}
},
Err(error) => Some(error.description().to_string()),
}
}
| fmt | identifier_name |
error.rs | use std::error::Error;
use std::fmt;
use hyper::client::Response;
use hyper::error::HttpError;
use rustc_serialize::json::Json;
/// An error returned by `Client` when an API call fails.
pub struct FleetError {
/// An HTTP status code returned by the fleet API.
pub code: Option<u16>,
/// A message describing the error. This message comes from fleet directly whenever fleet
/// provides a message.
pub message: Option<String>,
}
impl FleetError {
/// Constructs a new `FleetError` from a `hyper::error::HttpError`. Not intended for public
/// use.
pub fn from_hyper_error(error: &HttpError) -> FleetError {
FleetError {
code: None,
message: Some(error.description().to_string()),
}
}
/// Constructs a new `FleetError` from a `hyper::client::Response`. Not intended for public
/// use.
pub fn from_hyper_response(response: &mut Response) -> FleetError {
FleetError {
code: Some(response.status.to_u16()),
message: extract_message(response),
}
}
}
impl fmt::Display for FleetError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result |
}
fn extract_message(response: &mut Response) -> Option<String> {
match Json::from_reader(response) {
Ok(json) => {
match json.find_path(&["error", "message"]) {
Some(message_json) => match message_json.as_string() {
Some(message) => {
if message.len() == 0 {
Some("Error in JSON response from Fleet was empty".to_string())
} else {
Some(message.to_string())
}
},
None => Some("Error in JSON response from Fleet was empty".to_string()),
},
None => Some("Error parsing JSON response from Fleet".to_string()),
}
},
Err(error) => Some(error.description().to_string()),
}
}
| {
write!(
f,
"{}: {}",
self.code.unwrap_or(0),
self.message.clone().unwrap_or("Unknown error".to_string()),
)
} | identifier_body |
error.rs | use std::error::Error;
use std::fmt;
use hyper::client::Response;
use hyper::error::HttpError;
use rustc_serialize::json::Json;
/// An error returned by `Client` when an API call fails.
pub struct FleetError {
/// An HTTP status code returned by the fleet API.
pub code: Option<u16>,
/// A message describing the error. This message comes from fleet directly whenever fleet
/// provides a message.
pub message: Option<String>,
}
impl FleetError {
/// Constructs a new `FleetError` from a `hyper::error::HttpError`. Not intended for public
/// use.
pub fn from_hyper_error(error: &HttpError) -> FleetError {
FleetError { | }
/// Constructs a new `FleetError` from a `hyper::client::Response`. Not intended for public
/// use.
pub fn from_hyper_response(response: &mut Response) -> FleetError {
FleetError {
code: Some(response.status.to_u16()),
message: extract_message(response),
}
}
}
impl fmt::Display for FleetError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(
f,
"{}: {}",
self.code.unwrap_or(0),
self.message.clone().unwrap_or("Unknown error".to_string()),
)
}
}
fn extract_message(response: &mut Response) -> Option<String> {
match Json::from_reader(response) {
Ok(json) => {
match json.find_path(&["error", "message"]) {
Some(message_json) => match message_json.as_string() {
Some(message) => {
if message.len() == 0 {
Some("Error in JSON response from Fleet was empty".to_string())
} else {
Some(message.to_string())
}
},
None => Some("Error in JSON response from Fleet was empty".to_string()),
},
None => Some("Error parsing JSON response from Fleet".to_string()),
}
},
Err(error) => Some(error.description().to_string()),
}
} | code: None,
message: Some(error.description().to_string()),
} | random_line_split |
middleware.rs | use std::default::Default;
use std::sync::Arc;
use mysql::conn::MyOpts;
use mysql::conn::pool::MyPool;
use mysql::value::from_value;
use nickel::{Request, Response, Middleware, Continue, MiddlewareResult};
use typemap::Key;
use plugin::{Pluggable, Extensible};
pub struct MysqlMiddleware {
pub pool: Arc<MyPool>,
}
impl MysqlMiddleware {
pub fn new(db_name: &str, user: &str, pass: &str) -> MysqlMiddleware {
let options = MyOpts {
user: Some(user.into()),
pass: Some(pass.into()),
db_name: Some(db_name.into()),
..Default::default()
};
let pool = MyPool::new(options).unwrap();
MysqlMiddleware {
pool: Arc::new(pool),
}
} | }
impl Key for MysqlMiddleware { type Value = Arc<MyPool>; }
impl Middleware for MysqlMiddleware {
fn invoke<'res>(&self, request: &mut Request, response: Response<'res>) -> MiddlewareResult<'res> {
request.extensions_mut().insert::<MysqlMiddleware>(self.pool.clone());
Ok(Continue(response))
}
}
pub trait MysqlRequestExtensions {
fn db_connection(&self) -> Arc<MyPool>;
}
impl<'a, 'b, 'c> MysqlRequestExtensions for Request<'a, 'b, 'c> {
fn db_connection(&self) -> Arc<MyPool> {
self.extensions().get::<MysqlMiddleware>().unwrap().clone()
}
} | random_line_split |
|
middleware.rs | use std::default::Default;
use std::sync::Arc;
use mysql::conn::MyOpts;
use mysql::conn::pool::MyPool;
use mysql::value::from_value;
use nickel::{Request, Response, Middleware, Continue, MiddlewareResult};
use typemap::Key;
use plugin::{Pluggable, Extensible};
pub struct MysqlMiddleware {
pub pool: Arc<MyPool>,
}
impl MysqlMiddleware {
pub fn new(db_name: &str, user: &str, pass: &str) -> MysqlMiddleware {
let options = MyOpts {
user: Some(user.into()),
pass: Some(pass.into()),
db_name: Some(db_name.into()),
..Default::default()
};
let pool = MyPool::new(options).unwrap();
MysqlMiddleware {
pool: Arc::new(pool),
}
}
}
impl Key for MysqlMiddleware { type Value = Arc<MyPool>; }
impl Middleware for MysqlMiddleware {
fn | <'res>(&self, request: &mut Request, response: Response<'res>) -> MiddlewareResult<'res> {
request.extensions_mut().insert::<MysqlMiddleware>(self.pool.clone());
Ok(Continue(response))
}
}
pub trait MysqlRequestExtensions {
fn db_connection(&self) -> Arc<MyPool>;
}
impl<'a, 'b, 'c> MysqlRequestExtensions for Request<'a, 'b, 'c> {
fn db_connection(&self) -> Arc<MyPool> {
self.extensions().get::<MysqlMiddleware>().unwrap().clone()
}
}
| invoke | identifier_name |
relais_controller.rs | use errors::*;
use gtk;
use gtk::prelude::*;
use shift_register::*;
use std::sync::{Arc, Mutex};
pub fn all(button: &gtk::ToggleButton, relais: &Arc<Mutex<ShiftRegister>>) -> Result<()> {
let mut relais = relais.lock().unwrap();
match button.get_active() {
true => relais.all()?,
false => relais.reset()?,
}
Ok(())
}
pub fn random(button: &gtk::ToggleButton, relais: &Arc<Mutex<ShiftRegister>>) -> Result<()> {
let mut relais = relais.lock().unwrap();
match button.get_active() {
true => relais.test_random()?,
false => relais.reset()?,
}
Ok(())
}
pub fn one_after_one(button: &gtk::ToggleButton, relais: &Arc<Mutex<ShiftRegister>>) -> Result<()> |
pub fn set(button: &gtk::ToggleButton, relais: &Arc<Mutex<ShiftRegister>>, num: u64) -> Result<()> {
let mut relais = relais.lock().unwrap();
match button.get_active() {
true => relais.set(num)?,
false => relais.clear(num)?,
}
Ok(())
}
| {
let mut relais = relais.lock().unwrap();
match button.get_active() {
true => {
for i in 1..10 {
relais.set(i);
::std::thread::sleep(::std::time::Duration::from_millis(100));
}
},
false => relais.reset()?,
}
Ok(())
} | identifier_body |
relais_controller.rs | use errors::*;
use gtk;
use gtk::prelude::*;
use shift_register::*;
use std::sync::{Arc, Mutex};
pub fn all(button: &gtk::ToggleButton, relais: &Arc<Mutex<ShiftRegister>>) -> Result<()> {
let mut relais = relais.lock().unwrap();
match button.get_active() {
true => relais.all()?,
false => relais.reset()?,
}
Ok(())
}
pub fn | (button: &gtk::ToggleButton, relais: &Arc<Mutex<ShiftRegister>>) -> Result<()> {
let mut relais = relais.lock().unwrap();
match button.get_active() {
true => relais.test_random()?,
false => relais.reset()?,
}
Ok(())
}
pub fn one_after_one(button: &gtk::ToggleButton, relais: &Arc<Mutex<ShiftRegister>>) -> Result<()> {
let mut relais = relais.lock().unwrap();
match button.get_active() {
true => {
for i in 1..10 {
relais.set(i);
::std::thread::sleep(::std::time::Duration::from_millis(100));
}
},
false => relais.reset()?,
}
Ok(())
}
pub fn set(button: &gtk::ToggleButton, relais: &Arc<Mutex<ShiftRegister>>, num: u64) -> Result<()> {
let mut relais = relais.lock().unwrap();
match button.get_active() {
true => relais.set(num)?,
false => relais.clear(num)?,
}
Ok(())
}
| random | identifier_name |
relais_controller.rs | use errors::*;
use gtk;
use gtk::prelude::*;
use shift_register::*;
use std::sync::{Arc, Mutex};
pub fn all(button: &gtk::ToggleButton, relais: &Arc<Mutex<ShiftRegister>>) -> Result<()> {
let mut relais = relais.lock().unwrap();
match button.get_active() {
true => relais.all()?,
false => relais.reset()?,
}
Ok(())
}
pub fn random(button: &gtk::ToggleButton, relais: &Arc<Mutex<ShiftRegister>>) -> Result<()> {
let mut relais = relais.lock().unwrap();
match button.get_active() {
true => relais.test_random()?,
false => relais.reset()?,
}
Ok(())
}
| let mut relais = relais.lock().unwrap();
match button.get_active() {
true => {
for i in 1..10 {
relais.set(i);
::std::thread::sleep(::std::time::Duration::from_millis(100));
}
},
false => relais.reset()?,
}
Ok(())
}
pub fn set(button: &gtk::ToggleButton, relais: &Arc<Mutex<ShiftRegister>>, num: u64) -> Result<()> {
let mut relais = relais.lock().unwrap();
match button.get_active() {
true => relais.set(num)?,
false => relais.clear(num)?,
}
Ok(())
} | pub fn one_after_one(button: &gtk::ToggleButton, relais: &Arc<Mutex<ShiftRegister>>) -> Result<()> { | random_line_split
acoustic_wave.rs | extern crate arrayfire as af;
use af::*;
use std::f64::consts::*;
fn main() {
set_device(0);
info();
acoustic_wave_simulation();
}
fn normalise(a: &Array) -> Array |
fn acoustic_wave_simulation() {
// Speed of sound
let c = 0.1;
// Distance step
let dx = 0.5;
// Time step
let dt = 1.0;
// Grid size.
let nx = 1500;
let ny = 1500;
// Grid dimensions.
let dims = Dim4::new(&[nx, ny, 1, 1]);
// Pressure field
let mut p = constant::<f32>(0.0, dims);
// d(pressure)/dt field
let mut p_dot = p.clone();
// Laplacian (Del^2) convolution kernel.
let laplacian_values = [0.0f32, 1.0, 0.0,
1.0, -4.0, 1.0,
0.0, 1.0, 0.0];
let laplacian_kernel = Array::new(&laplacian_values, Dim4::new(&[3, 3, 1, 1])) / (dx * dx);
// Create a window to show the waves.
let mut win = Window::new(1000, 1000, "Waves".to_string());
// Hamming windowed pulse.
let pulse_time = 100.0f64;
let centre_freq = 0.05;
// Number of samples in pulse.
let pulse_n = (pulse_time/dt).floor() as u64;
let i = range::<f32>(Dim4::new(&[pulse_n, 1, 1, 1]), 0);
let t = i.clone() * dt;
let hamming_window = cos(&(i * (2.0 * PI / pulse_n as f64))) * -0.46 + 0.54;
let wave = sin(&(&t * centre_freq * 2.0 * PI));
let pulse = wave * hamming_window;
// Iteration count.
let mut it = 0;
while !win.is_closed() {
// Convolve with laplacian to get spatial second derivative.
let lap_p = convolve2(&p, &laplacian_kernel, ConvMode::DEFAULT, ConvDomain::SPATIAL);
// Calculate the updated pressure and d(pressure)/dt fields.
p_dot += lap_p * (c * dt);
p += &p_dot * dt;
if it < pulse_n {
// Location of the source.
let seqs = &[Seq::new(700.0, 800.0, 1.0), Seq::new(800.0, 800.0, 1.0)];
// Set the pressure there.
p = assign_seq(&p, seqs, &index(&pulse, &[Seq::new(it as f64, it as f64, 1.0)]));
}
// Draw the image.
win.set_colormap(af::ColorMap::BLUE);
win.draw_image(&normalise(&p), None);
it += 1;
}
}
| {
(a/(max_all(&abs(a)).0 as f32 * 2.0f32)) + 0.5f32
} | identifier_body |
acoustic_wave.rs | extern crate arrayfire as af;
use af::*;
use std::f64::consts::*;
fn main() {
set_device(0);
info();
acoustic_wave_simulation();
}
fn | (a: &Array) -> Array {
(a/(max_all(&abs(a)).0 as f32 * 2.0f32)) + 0.5f32
}
fn acoustic_wave_simulation() {
// Speed of sound
let c = 0.1;
// Distance step
let dx = 0.5;
// Time step
let dt = 1.0;
// Grid size.
let nx = 1500;
let ny = 1500;
// Grid dimensions.
let dims = Dim4::new(&[nx, ny, 1, 1]);
// Pressure field
let mut p = constant::<f32>(0.0, dims);
// d(pressure)/dt field
let mut p_dot = p.clone();
// Laplacian (Del^2) convolution kernel.
let laplacian_values = [0.0f32, 1.0, 0.0,
1.0, -4.0, 1.0,
0.0, 1.0, 0.0];
let laplacian_kernel = Array::new(&laplacian_values, Dim4::new(&[3, 3, 1, 1])) / (dx * dx);
// Create a window to show the waves.
let mut win = Window::new(1000, 1000, "Waves".to_string());
// Hamming windowed pulse.
let pulse_time = 100.0f64;
let centre_freq = 0.05;
// Number of samples in pulse.
let pulse_n = (pulse_time/dt).floor() as u64;
let i = range::<f32>(Dim4::new(&[pulse_n, 1, 1, 1]), 0);
let t = i.clone() * dt;
let hamming_window = cos(&(i * (2.0 * PI / pulse_n as f64))) * -0.46 + 0.54;
let wave = sin(&(&t * centre_freq * 2.0 * PI));
let pulse = wave * hamming_window;
// Iteration count.
let mut it = 0;
while !win.is_closed() {
// Convolve with laplacian to get spatial second derivative.
let lap_p = convolve2(&p, &laplacian_kernel, ConvMode::DEFAULT, ConvDomain::SPATIAL);
// Calculate the updated pressure and d(pressure)/dt fields.
p_dot += lap_p * (c * dt);
p += &p_dot * dt;
if it < pulse_n {
// Location of the source.
let seqs = &[Seq::new(700.0, 800.0, 1.0), Seq::new(800.0, 800.0, 1.0)];
// Set the pressure there.
p = assign_seq(&p, seqs, &index(&pulse, &[Seq::new(it as f64, it as f64, 1.0)]));
}
// Draw the image.
win.set_colormap(af::ColorMap::BLUE);
win.draw_image(&normalise(&p), None);
it += 1;
}
}
| normalise | identifier_name |
acoustic_wave.rs | extern crate arrayfire as af;
use af::*;
use std::f64::consts::*;
fn main() {
set_device(0);
info();
acoustic_wave_simulation();
}
fn normalise(a: &Array) -> Array {
(a/(max_all(&abs(a)).0 as f32 * 2.0f32)) + 0.5f32
}
fn acoustic_wave_simulation() {
// Speed of sound
let c = 0.1;
// Distance step
let dx = 0.5;
// Time step
let dt = 1.0;
// Grid size.
let nx = 1500;
let ny = 1500;
// Grid dimensions.
let dims = Dim4::new(&[nx, ny, 1, 1]);
// Pressure field
let mut p = constant::<f32>(0.0, dims);
// d(pressure)/dt field
let mut p_dot = p.clone();
// Laplacian (Del^2) convolution kernel.
let laplacian_values = [0.0f32, 1.0, 0.0,
1.0, -4.0, 1.0,
0.0, 1.0, 0.0];
let laplacian_kernel = Array::new(&laplacian_values, Dim4::new(&[3, 3, 1, 1])) / (dx * dx);
// Create a window to show the waves.
let mut win = Window::new(1000, 1000, "Waves".to_string());
// Hamming windowed pulse.
let pulse_time = 100.0f64;
let centre_freq = 0.05;
// Number of samples in pulse.
let pulse_n = (pulse_time/dt).floor() as u64;
let i = range::<f32>(Dim4::new(&[pulse_n, 1, 1, 1]), 0);
let t = i.clone() * dt;
let hamming_window = cos(&(i * (2.0 * PI / pulse_n as f64))) * -0.46 + 0.54;
let wave = sin(&(&t * centre_freq * 2.0 * PI));
let pulse = wave * hamming_window;
// Iteration count.
let mut it = 0;
while !win.is_closed() {
// Convolve with laplacian to get spatial second derivative.
let lap_p = convolve2(&p, &laplacian_kernel, ConvMode::DEFAULT, ConvDomain::SPATIAL);
// Calculate the updated pressure and d(pressure)/dt fields.
p_dot += lap_p * (c * dt);
p += &p_dot * dt;
if it < pulse_n |
// Draw the image.
win.set_colormap(af::ColorMap::BLUE);
win.draw_image(&normalise(&p), None);
it += 1;
}
}
| {
// Location of the source.
let seqs = &[Seq::new(700.0, 800.0, 1.0), Seq::new(800.0, 800.0, 1.0)];
// Set the pressure there.
p = assign_seq(&p, seqs, &index(&pulse, &[Seq::new(it as f64, it as f64, 1.0)]));
} | conditional_block |
acoustic_wave.rs | extern crate arrayfire as af;
use af::*;
use std::f64::consts::*;
fn main() {
set_device(0);
info(); |
fn normalise(a: &Array) -> Array {
(a/(max_all(&abs(a)).0 as f32 * 2.0f32)) + 0.5f32
}
fn acoustic_wave_simulation() {
// Speed of sound
let c = 0.1;
// Distance step
let dx = 0.5;
// Time step
let dt = 1.0;
// Grid size.
let nx = 1500;
let ny = 1500;
// Grid dimensions.
let dims = Dim4::new(&[nx, ny, 1, 1]);
// Pressure field
let mut p = constant::<f32>(0.0, dims);
// d(pressure)/dt field
let mut p_dot = p.clone();
// Laplacian (Del^2) convolution kernel.
let laplacian_values = [0.0f32, 1.0, 0.0,
1.0, -4.0, 1.0,
0.0, 1.0, 0.0];
let laplacian_kernel = Array::new(&laplacian_values, Dim4::new(&[3, 3, 1, 1])) / (dx * dx);
// Create a window to show the waves.
let mut win = Window::new(1000, 1000, "Waves".to_string());
// Hamming windowed pulse.
let pulse_time = 100.0f64;
let centre_freq = 0.05;
// Number of samples in pulse.
let pulse_n = (pulse_time/dt).floor() as u64;
let i = range::<f32>(Dim4::new(&[pulse_n, 1, 1, 1]), 0);
let t = i.clone() * dt;
let hamming_window = cos(&(i * (2.0 * PI / pulse_n as f64))) * -0.46 + 0.54;
let wave = sin(&(&t * centre_freq * 2.0 * PI));
let pulse = wave * hamming_window;
// Iteration count.
let mut it = 0;
while !win.is_closed() {
// Convolve with laplacian to get spatial second derivative.
let lap_p = convolve2(&p, &laplacian_kernel, ConvMode::DEFAULT, ConvDomain::SPATIAL);
// Calculate the updated pressure and d(pressure)/dt fields.
p_dot += lap_p * (c * dt);
p += &p_dot * dt;
if it < pulse_n {
// Location of the source.
let seqs = &[Seq::new(700.0, 800.0, 1.0), Seq::new(800.0, 800.0, 1.0)];
// Set the pressure there.
p = assign_seq(&p, seqs, &index(&pulse, &[Seq::new(it as f64, it as f64, 1.0)]));
}
// Draw the image.
win.set_colormap(af::ColorMap::BLUE);
win.draw_image(&normalise(&p), None);
it += 1;
}
} |
acoustic_wave_simulation();
} | random_line_split |
htmltablecaptionelement.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use dom::bindings::codegen::Bindings::HTMLTableCaptionElementBinding;
use dom::bindings::js::Root;
use dom::bindings::str::DOMString;
use dom::document::Document;
use dom::htmlelement::HTMLElement;
use dom::node::Node;
use string_cache::Atom;
#[dom_struct]
pub struct HTMLTableCaptionElement {
htmlelement: HTMLElement
}
impl HTMLTableCaptionElement {
fn new_inherited(local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> HTMLTableCaptionElement |
#[allow(unrooted_must_root)]
pub fn new(local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> Root<HTMLTableCaptionElement> {
Node::reflect_node(box HTMLTableCaptionElement::new_inherited(local_name, prefix, document),
document,
HTMLTableCaptionElementBinding::Wrap)
}
}
| {
HTMLTableCaptionElement {
htmlelement:
HTMLElement::new_inherited(local_name, prefix, document)
}
} | identifier_body |
htmltablecaptionelement.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use dom::bindings::codegen::Bindings::HTMLTableCaptionElementBinding;
use dom::bindings::js::Root;
use dom::bindings::str::DOMString;
use dom::document::Document;
use dom::htmlelement::HTMLElement;
use dom::node::Node;
use string_cache::Atom;
#[dom_struct]
pub struct HTMLTableCaptionElement {
htmlelement: HTMLElement
}
impl HTMLTableCaptionElement {
fn new_inherited(local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> HTMLTableCaptionElement {
HTMLTableCaptionElement {
htmlelement:
HTMLElement::new_inherited(local_name, prefix, document)
}
}
#[allow(unrooted_must_root)]
pub fn | (local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> Root<HTMLTableCaptionElement> {
Node::reflect_node(box HTMLTableCaptionElement::new_inherited(local_name, prefix, document),
document,
HTMLTableCaptionElementBinding::Wrap)
}
}
| new | identifier_name |
htmltablecaptionelement.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use dom::bindings::codegen::Bindings::HTMLTableCaptionElementBinding;
use dom::bindings::js::Root;
use dom::bindings::str::DOMString;
use dom::document::Document;
use dom::htmlelement::HTMLElement;
use dom::node::Node;
use string_cache::Atom;
#[dom_struct]
pub struct HTMLTableCaptionElement {
htmlelement: HTMLElement
}
impl HTMLTableCaptionElement {
fn new_inherited(local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> HTMLTableCaptionElement {
HTMLTableCaptionElement {
htmlelement:
HTMLElement::new_inherited(local_name, prefix, document)
}
}
#[allow(unrooted_must_root)] | document,
HTMLTableCaptionElementBinding::Wrap)
}
} | pub fn new(local_name: Atom,
prefix: Option<DOMString>,
document: &Document) -> Root<HTMLTableCaptionElement> {
Node::reflect_node(box HTMLTableCaptionElement::new_inherited(local_name, prefix, document), | random_line_split |
color.mako.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
<%namespace name="helpers" file="/helpers.mako.rs" />
<% data.new_style_struct("Color", inherited=True) %>
<% from data import to_rust_ident %>
${helpers.predefined_type(
"color",
"ColorPropertyValue",
"::cssparser::RGBA::new(0, 0, 0, 255)",
animation_value_type="AnimatedRGBA",
flags="APPLIES_TO_FIRST_LETTER APPLIES_TO_FIRST_LINE APPLIES_TO_PLACEHOLDER",
ignored_when_colors_disabled="True",
spec="https://drafts.csswg.org/css-color/#color"
)}
// FIXME(#15973): Add servo support for system colors
//
// FIXME(emilio): Move outside of mako.
% if product == "gecko":
pub mod system_colors {
<%
# These are actually parsed. See nsCSSProps::kColorKTable
system_colors = """activeborder activecaption appworkspace background buttonface
buttonhighlight buttonshadow buttontext captiontext graytext highlight
highlighttext inactiveborder inactivecaption inactivecaptiontext
infobackground infotext menu menutext scrollbar threeddarkshadow
threedface threedhighlight threedlightshadow threedshadow window
windowframe windowtext -moz-buttondefault -moz-buttonhoverface
-moz-buttonhovertext -moz-cellhighlight -moz-cellhighlighttext
-moz-eventreerow -moz-field -moz-fieldtext -moz-dialog -moz-dialogtext
-moz-dragtargetzone -moz-gtk-info-bar-text -moz-html-cellhighlight
-moz-html-cellhighlighttext -moz-mac-buttonactivetext
-moz-mac-chrome-active -moz-mac-chrome-inactive
-moz-mac-defaultbuttontext -moz-mac-focusring -moz-mac-menuselect
-moz-mac-menushadow -moz-mac-menutextdisable -moz-mac-menutextselect
-moz-mac-disabledtoolbartext -moz-mac-secondaryhighlight
-moz-mac-vibrancy-light -moz-mac-vibrancy-dark
-moz-mac-vibrant-titlebar-light -moz-mac-vibrant-titlebar-dark
-moz-mac-menupopup
-moz-mac-menuitem -moz-mac-active-menuitem -moz-mac-source-list
-moz-mac-source-list-selection -moz-mac-active-source-list-selection
-moz-mac-tooltip
-moz-menuhover -moz-menuhovertext -moz-menubartext -moz-menubarhovertext
-moz-oddtreerow -moz-win-mediatext -moz-win-communicationstext
-moz-win-accentcolor -moz-win-accentcolortext
-moz-nativehyperlinktext -moz-comboboxtext -moz-combobox""".split()
# These are not parsed but must be serialized
# They are only ever set directly by Gecko
extra_colors = """WindowBackground WindowForeground WidgetBackground WidgetForeground
WidgetSelectBackground WidgetSelectForeground Widget3DHighlight Widget3DShadow
TextBackground TextForeground TextSelectBackground TextSelectForeground
TextSelectForegroundCustom TextSelectBackgroundDisabled TextSelectBackgroundAttention
TextHighlightBackground TextHighlightForeground IMERawInputBackground
IMERawInputForeground IMERawInputUnderline IMESelectedRawTextBackground
IMESelectedRawTextForeground IMESelectedRawTextUnderline
IMEConvertedTextBackground IMEConvertedTextForeground IMEConvertedTextUnderline
IMESelectedConvertedTextBackground IMESelectedConvertedTextForeground
IMESelectedConvertedTextUnderline SpellCheckerUnderline""".split()
%>
use cssparser::Parser;
use gecko_bindings::bindings::Gecko_GetLookAndFeelSystemColor;
use gecko_bindings::structs::root::mozilla::LookAndFeel_ColorID;
use std::fmt::{self, Write};
use style_traits::{CssWriter, ToCss};
use values::computed::{Context, ToComputedValue};
pub type SystemColor = LookAndFeel_ColorID;
// It's hard to implement MallocSizeOf for LookAndFeel_ColorID because it
// is a bindgen type. So we implement it on the typedef instead.
malloc_size_of_is_0!(SystemColor);
impl ToCss for SystemColor { | let s = match *self {
% for color in system_colors + extra_colors:
LookAndFeel_ColorID::eColorID_${to_rust_ident(color)} => "${color}",
% endfor
LookAndFeel_ColorID::eColorID_LAST_COLOR => unreachable!(),
};
dest.write_str(s)
}
}
impl ToComputedValue for SystemColor {
type ComputedValue = u32; // nscolor
#[inline]
fn to_computed_value(&self, cx: &Context) -> Self::ComputedValue {
unsafe {
Gecko_GetLookAndFeelSystemColor(*self as i32,
cx.device().pres_context())
}
}
#[inline]
fn from_computed_value(_: &Self::ComputedValue) -> Self {
unreachable!()
}
}
impl SystemColor {
pub fn parse<'i, 't>(input: &mut Parser<'i, 't>,) -> Result<Self, ()> {
ascii_case_insensitive_phf_map! {
color_name -> SystemColor = {
% for color in system_colors:
"${color}" => LookAndFeel_ColorID::eColorID_${to_rust_ident(color)},
% endfor
}
}
let ident = input.expect_ident().map_err(|_| ())?;
color_name(ident).cloned().ok_or(())
}
}
}
% endif | fn to_css<W>(&self, dest: &mut CssWriter<W>) -> fmt::Result
where
W: Write,
{ | random_line_split |
cache.rs | use crate::error::CkError;
use serde_json::Value;
use std::collections::HashMap;
use std::io;
use std::path::{Path, PathBuf};
use fs_err as fs;
#[derive(Debug)]
pub struct Cache {
root: PathBuf,
files: HashMap<PathBuf, String>,
values: HashMap<PathBuf, Value>,
pub variables: HashMap<String, Value>,
last_path: Option<PathBuf>,
}
impl Cache {
/// Create a new cache, used to read files only once and otherwise store their contents.
pub fn new(doc_dir: &str) -> Cache {
Cache {
root: Path::new(doc_dir).to_owned(),
files: HashMap::new(),
values: HashMap::new(),
variables: HashMap::new(),
last_path: None,
}
}
fn resolve_path(&mut self, path: &String) -> PathBuf {
if path != "-" {
let resolve = self.root.join(path);
self.last_path = Some(resolve.clone());
resolve
} else |
}
fn read_file(&mut self, path: PathBuf) -> Result<String, io::Error> {
if let Some(f) = self.files.get(&path) {
return Ok(f.clone());
}
let file = fs::read_to_string(&path)?;
self.files.insert(path, file.clone());
Ok(file)
}
/// Get the text from a file. If called multiple times, the file will only be read once
pub fn get_file(&mut self, path: &String) -> Result<String, io::Error> {
let path = self.resolve_path(path);
self.read_file(path)
}
/// Parse the JSON from a file. If called multiple times, the file will only be read once.
pub fn get_value(&mut self, path: &String) -> Result<Value, CkError> {
let path = self.resolve_path(path);
if let Some(v) = self.values.get(&path) {
return Ok(v.clone());
}
let content = self.read_file(path.clone())?;
let val = serde_json::from_str::<Value>(&content)?;
self.values.insert(path, val.clone());
Ok(val)
}
}
| {
self.last_path
.as_ref()
// FIXME: Point to a line number
.expect("No last path set. Make sure to specify a full path before using `-`")
.clone()
} | conditional_block |
cache.rs | use crate::error::CkError;
use serde_json::Value;
use std::collections::HashMap;
use std::io;
use std::path::{Path, PathBuf};
use fs_err as fs;
#[derive(Debug)]
pub struct Cache {
root: PathBuf,
files: HashMap<PathBuf, String>,
values: HashMap<PathBuf, Value>,
pub variables: HashMap<String, Value>,
last_path: Option<PathBuf>,
}
impl Cache {
/// Create a new cache, used to read files only once and otherwise store their contents.
pub fn new(doc_dir: &str) -> Cache {
Cache {
root: Path::new(doc_dir).to_owned(),
files: HashMap::new(),
values: HashMap::new(),
variables: HashMap::new(),
last_path: None,
}
}
fn resolve_path(&mut self, path: &String) -> PathBuf {
if path != "-" {
let resolve = self.root.join(path);
self.last_path = Some(resolve.clone());
resolve
} else {
self.last_path
.as_ref()
// FIXME: Point to a line number
.expect("No last path set. Make sure to specify a full path before using `-`")
.clone()
}
}
fn read_file(&mut self, path: PathBuf) -> Result<String, io::Error> {
if let Some(f) = self.files.get(&path) {
return Ok(f.clone());
}
let file = fs::read_to_string(&path)?;
self.files.insert(path, file.clone());
Ok(file)
}
/// Get the text from a file. If called multiple times, the file will only be read once
pub fn get_file(&mut self, path: &String) -> Result<String, io::Error> |
/// Parse the JSON from a file. If called multiple times, the file will only be read once.
pub fn get_value(&mut self, path: &String) -> Result<Value, CkError> {
let path = self.resolve_path(path);
if let Some(v) = self.values.get(&path) {
return Ok(v.clone());
}
let content = self.read_file(path.clone())?;
let val = serde_json::from_str::<Value>(&content)?;
self.values.insert(path, val.clone());
Ok(val)
}
}
| {
let path = self.resolve_path(path);
self.read_file(path)
} | identifier_body |
cache.rs | use crate::error::CkError;
use serde_json::Value;
use std::collections::HashMap;
use std::io;
use std::path::{Path, PathBuf};
use fs_err as fs;
#[derive(Debug)]
pub struct Cache {
root: PathBuf,
files: HashMap<PathBuf, String>,
values: HashMap<PathBuf, Value>,
pub variables: HashMap<String, Value>,
last_path: Option<PathBuf>,
}
impl Cache {
/// Create a new cache, used to read files only once and otherwise store their contents.
pub fn new(doc_dir: &str) -> Cache {
Cache {
root: Path::new(doc_dir).to_owned(),
files: HashMap::new(),
values: HashMap::new(),
variables: HashMap::new(),
last_path: None,
}
}
fn resolve_path(&mut self, path: &String) -> PathBuf {
if path != "-" {
let resolve = self.root.join(path);
self.last_path = Some(resolve.clone());
resolve
} else {
self.last_path | .clone()
}
}
fn read_file(&mut self, path: PathBuf) -> Result<String, io::Error> {
if let Some(f) = self.files.get(&path) {
return Ok(f.clone());
}
let file = fs::read_to_string(&path)?;
self.files.insert(path, file.clone());
Ok(file)
}
/// Get the text from a file. If called multiple times, the file will only be read once
pub fn get_file(&mut self, path: &String) -> Result<String, io::Error> {
let path = self.resolve_path(path);
self.read_file(path)
}
/// Parse the JSON from a file. If called multiple times, the file will only be read once.
pub fn get_value(&mut self, path: &String) -> Result<Value, CkError> {
let path = self.resolve_path(path);
if let Some(v) = self.values.get(&path) {
return Ok(v.clone());
}
let content = self.read_file(path.clone())?;
let val = serde_json::from_str::<Value>(&content)?;
self.values.insert(path, val.clone());
Ok(val)
}
} | .as_ref()
// FIXME: Point to a line number
.expect("No last path set. Make sure to specify a full path before using `-`") | random_line_split |
cache.rs | use crate::error::CkError;
use serde_json::Value;
use std::collections::HashMap;
use std::io;
use std::path::{Path, PathBuf};
use fs_err as fs;
#[derive(Debug)]
pub struct Cache {
root: PathBuf,
files: HashMap<PathBuf, String>,
values: HashMap<PathBuf, Value>,
pub variables: HashMap<String, Value>,
last_path: Option<PathBuf>,
}
impl Cache {
/// Create a new cache, used to read files only once and otherwise store their contents.
pub fn new(doc_dir: &str) -> Cache {
Cache {
root: Path::new(doc_dir).to_owned(),
files: HashMap::new(),
values: HashMap::new(),
variables: HashMap::new(),
last_path: None,
}
}
fn resolve_path(&mut self, path: &String) -> PathBuf {
if path != "-" {
let resolve = self.root.join(path);
self.last_path = Some(resolve.clone());
resolve
} else {
self.last_path
.as_ref()
// FIXME: Point to a line number
.expect("No last path set. Make sure to specify a full path before using `-`")
.clone()
}
}
fn | (&mut self, path: PathBuf) -> Result<String, io::Error> {
if let Some(f) = self.files.get(&path) {
return Ok(f.clone());
}
let file = fs::read_to_string(&path)?;
self.files.insert(path, file.clone());
Ok(file)
}
/// Get the text from a file. If called multiple times, the file will only be read once
pub fn get_file(&mut self, path: &String) -> Result<String, io::Error> {
let path = self.resolve_path(path);
self.read_file(path)
}
/// Parse the JSON from a file. If called multiple times, the file will only be read once.
pub fn get_value(&mut self, path: &String) -> Result<Value, CkError> {
let path = self.resolve_path(path);
if let Some(v) = self.values.get(&path) {
return Ok(v.clone());
}
let content = self.read_file(path.clone())?;
let val = serde_json::from_str::<Value>(&content)?;
self.values.insert(path, val.clone());
Ok(val)
}
}
| read_file | identifier_name |
trie_node.rs | use {TrieNode, KeyValue, NibbleVec, BRANCH_FACTOR};
use keys::*;
macro_rules! no_children {
() => ([
None, None, None, None,
None, None, None, None,
None, None, None, None,
None, None, None, None
])
}
impl<K, V> TrieNode<K, V>
where K: TrieKey
{
/// Create a value-less, child-less TrieNode.
pub fn new() -> TrieNode<K, V> {
TrieNode {
key: NibbleVec::new(),
key_value: None,
children: no_children![],
child_count: 0,
}
}
/// Create a TrieNode with no children.
pub fn with_key_value(key_fragments: NibbleVec, key: K, value: V) -> TrieNode<K, V> {
TrieNode {
key: key_fragments,
key_value: Some(Box::new(KeyValue {
key: key,
value: value,
})),
children: no_children![],
child_count: 0,
}
}
/// Get the key stored at this node, if any.
pub fn key(&self) -> Option<&K> {
self.key_value.as_ref().map(|kv| &kv.key)
}
/// Get the value stored at this node, if any.
pub fn value(&self) -> Option<&V> {
self.key_value.as_ref().map(|kv| &kv.value)
}
/// Get a mutable reference to the value stored at this node, if any.
pub fn value_mut(&mut self) -> Option<&mut V> {
self.key_value.as_mut().map(|kv| &mut kv.value)
}
/// Get the value whilst checking a key match.
pub fn value_checked(&self, key: &K) -> Option<&V> {
self.key_value.as_ref().map(|kv| {
check_keys(&kv.key, key);
&kv.value
})
}
/// Get a mutable value whilst checking a key match.
pub fn value_checked_mut(&mut self, key: &K) -> Option<&mut V> {
self.key_value.as_mut().map(|kv| {
check_keys(&kv.key, key);
&mut kv.value
})
}
/// Compute the number of keys and values in this node's subtrie.
pub fn compute_size(&self) -> usize {
let mut size = if self.key_value.is_some() { 1 } else { 0 };
for child in &self.children {
if let &Some(ref child) = child {
// TODO: could unroll this recursion
size += child.compute_size();
}
}
size
}
/// Add a child at the given index, given that none exists there already.
pub fn add_child(&mut self, idx: usize, node: Box<TrieNode<K, V>>) {
debug_assert!(self.children[idx].is_none());
self.child_count += 1;
self.children[idx] = Some(node);
}
/// Remove a child at the given index, if it exists.
pub fn take_child(&mut self, idx: usize) -> Option<Box<TrieNode<K, V>>> {
self.children[idx].take().map(|node| {
self.child_count -= 1;
node
})
}
/// Helper function for removing the single child of a node.
pub fn take_only_child(&mut self) -> Box<TrieNode<K, V>> {
debug_assert!(self.child_count == 1);
for i in 0..BRANCH_FACTOR {
if let Some(child) = self.take_child(i) {
return child;
}
}
unreachable!("node with child_count 1 has no actual children");
}
/// Set the key and value of a node, given that it currently lacks one.
pub fn add_key_value(&mut self, key: K, value: V) {
debug_assert!(self.key_value.is_none());
self.key_value = Some(Box::new(KeyValue {
key: key,
value: value,
}));
}
/// Move the value out of a node, whilst checking that its key is as expected.
/// Can panic (see check_keys).
pub fn take_value(&mut self, key: &K) -> Option<V> {
self.key_value.take().map(|kv| {
check_keys(&kv.key, key);
kv.value
})
}
/// Replace a value, returning the previous value if there was one.
pub fn replace_value(&mut self, key: K, value: V) -> Option<V> {
// TODO: optimise this?
let previous = self.take_value(&key);
self.add_key_value(key, value);
previous
}
/// Get a reference to this node if it has a value.
pub fn as_value_node(&self) -> Option<&TrieNode<K, V>> {
self.key_value.as_ref().map(|_| self)
}
/// Split a node at a given index in its key, transforming it into a prefix node of its
/// previous self.
pub fn split(&mut self, idx: usize) {
// Extract all the parts of the suffix node, starting with the key.
let key = self.key.split(idx);
// Key-value.
let key_value = self.key_value.take();
// Children.
let mut children = no_children![];
for (i, child) in self.children.iter_mut().enumerate() {
if child.is_some() {
children[i] = child.take();
}
}
// Child count.
let child_count = self.child_count;
self.child_count = 1;
// Insert the collected items below what is now an empty prefix node.
let bucket = key.get(0) as usize;
self.children[bucket] = Some(Box::new(TrieNode {
key: key,
key_value: key_value,
children: children,
child_count: child_count,
}));
}
/// Check the integrity of a trie subtree (quite costly).
/// Return true and the size of the subtree if all checks are successful,
/// or false and a junk value if any test fails.
pub fn check_integrity_recursive(&self, prefix: &NibbleVec) -> (bool, usize) {
let mut sub_tree_size = 0;
let is_root = prefix.len() == 0;
// Check that no value-less, non-root nodes have only 1 child.
if !is_root && self.child_count == 1 && self.key_value.is_none() {
println!("Value-less node with a single child.");
return (false, sub_tree_size);
}
// Check that all non-root key vectors have length > 0.
if !is_root && self.key.len() == 0 {
println!("Key length is 0 at non-root node.");
return (false, sub_tree_size);
}
// Check that the child count matches the actual number of children.
let child_count = self.children.iter().fold(0, |acc, e| acc + (e.is_some() as usize));
if child_count != self.child_count {
println!("Child count error, recorded: {}, actual: {}",
self.child_count,
child_count);
return (false, sub_tree_size);
}
// Compute the key fragments for this node, according to the trie.
let trie_key = prefix.clone().join(&self.key);
// Account for this node in the size check, and check its key.
match self.key_value {
Some(ref kv) => {
sub_tree_size += 1;
let actual_key = kv.key.encode();
if trie_key != actual_key {
return (false, sub_tree_size);
}
}
None => (),
}
// Recursively check children.
for i in 0..BRANCH_FACTOR {
if let Some(ref child) = self.children[i] |
}
(true, sub_tree_size)
}
}
| {
match child.check_integrity_recursive(&trie_key) {
(false, _) => return (false, sub_tree_size),
(true, child_size) => sub_tree_size += child_size,
}
} | conditional_block |
trie_node.rs | use {TrieNode, KeyValue, NibbleVec, BRANCH_FACTOR};
use keys::*;
macro_rules! no_children {
() => ([
None, None, None, None,
None, None, None, None,
None, None, None, None,
None, None, None, None
])
}
impl<K, V> TrieNode<K, V>
where K: TrieKey
{
/// Create a value-less, child-less TrieNode.
pub fn | () -> TrieNode<K, V> {
TrieNode {
key: NibbleVec::new(),
key_value: None,
children: no_children![],
child_count: 0,
}
}
/// Create a TrieNode with no children.
pub fn with_key_value(key_fragments: NibbleVec, key: K, value: V) -> TrieNode<K, V> {
TrieNode {
key: key_fragments,
key_value: Some(Box::new(KeyValue {
key: key,
value: value,
})),
children: no_children![],
child_count: 0,
}
}
/// Get the key stored at this node, if any.
pub fn key(&self) -> Option<&K> {
self.key_value.as_ref().map(|kv| &kv.key)
}
/// Get the value stored at this node, if any.
pub fn value(&self) -> Option<&V> {
self.key_value.as_ref().map(|kv| &kv.value)
}
/// Get a mutable reference to the value stored at this node, if any.
pub fn value_mut(&mut self) -> Option<&mut V> {
self.key_value.as_mut().map(|kv| &mut kv.value)
}
/// Get the value whilst checking a key match.
pub fn value_checked(&self, key: &K) -> Option<&V> {
self.key_value.as_ref().map(|kv| {
check_keys(&kv.key, key);
&kv.value
})
}
/// Get a mutable value whilst checking a key match.
pub fn value_checked_mut(&mut self, key: &K) -> Option<&mut V> {
self.key_value.as_mut().map(|kv| {
check_keys(&kv.key, key);
&mut kv.value
})
}
/// Compute the number of keys and values in this node's subtrie.
pub fn compute_size(&self) -> usize {
let mut size = if self.key_value.is_some() { 1 } else { 0 };
for child in &self.children {
if let &Some(ref child) = child {
// TODO: could unroll this recursion
size += child.compute_size();
}
}
size
}
/// Add a child at the given index, given that none exists there already.
pub fn add_child(&mut self, idx: usize, node: Box<TrieNode<K, V>>) {
debug_assert!(self.children[idx].is_none());
self.child_count += 1;
self.children[idx] = Some(node);
}
/// Remove a child at the given index, if it exists.
pub fn take_child(&mut self, idx: usize) -> Option<Box<TrieNode<K, V>>> {
self.children[idx].take().map(|node| {
self.child_count -= 1;
node
})
}
/// Helper function for removing the single child of a node.
pub fn take_only_child(&mut self) -> Box<TrieNode<K, V>> {
debug_assert!(self.child_count == 1);
for i in 0..BRANCH_FACTOR {
if let Some(child) = self.take_child(i) {
return child;
}
}
unreachable!("node with child_count 1 has no actual children");
}
/// Set the key and value of a node, given that it currently lacks one.
pub fn add_key_value(&mut self, key: K, value: V) {
debug_assert!(self.key_value.is_none());
self.key_value = Some(Box::new(KeyValue {
key: key,
value: value,
}));
}
/// Move the value out of a node, whilst checking that its key is as expected.
/// Can panic (see check_keys).
pub fn take_value(&mut self, key: &K) -> Option<V> {
self.key_value.take().map(|kv| {
check_keys(&kv.key, key);
kv.value
})
}
/// Replace a value, returning the previous value if there was one.
pub fn replace_value(&mut self, key: K, value: V) -> Option<V> {
// TODO: optimise this?
let previous = self.take_value(&key);
self.add_key_value(key, value);
previous
}
/// Get a reference to this node if it has a value.
pub fn as_value_node(&self) -> Option<&TrieNode<K, V>> {
self.key_value.as_ref().map(|_| self)
}
/// Split a node at a given index in its key, transforming it into a prefix node of its
/// previous self.
pub fn split(&mut self, idx: usize) {
// Extract all the parts of the suffix node, starting with the key.
let key = self.key.split(idx);
// Key-value.
let key_value = self.key_value.take();
// Children.
let mut children = no_children![];
for (i, child) in self.children.iter_mut().enumerate() {
if child.is_some() {
children[i] = child.take();
}
}
// Child count.
let child_count = self.child_count;
self.child_count = 1;
// Insert the collected items below what is now an empty prefix node.
let bucket = key.get(0) as usize;
self.children[bucket] = Some(Box::new(TrieNode {
key: key,
key_value: key_value,
children: children,
child_count: child_count,
}));
}
/// Check the integrity of a trie subtree (quite costly).
/// Return true and the size of the subtree if all checks are successful,
/// or false and a junk value if any test fails.
pub fn check_integrity_recursive(&self, prefix: &NibbleVec) -> (bool, usize) {
let mut sub_tree_size = 0;
let is_root = prefix.len() == 0;
// Check that no value-less, non-root nodes have only 1 child.
if !is_root && self.child_count == 1 && self.key_value.is_none() {
println!("Value-less node with a single child.");
return (false, sub_tree_size);
}
// Check that all non-root key vectors have length > 0.
if !is_root && self.key.len() == 0 {
println!("Key length is 0 at non-root node.");
return (false, sub_tree_size);
}
// Check that the child count matches the actual number of children.
let child_count = self.children.iter().fold(0, |acc, e| acc + (e.is_some() as usize));
if child_count != self.child_count {
println!("Child count error, recorded: {}, actual: {}",
self.child_count,
child_count);
return (false, sub_tree_size);
}
// Compute the key fragments for this node, according to the trie.
let trie_key = prefix.clone().join(&self.key);
// Account for this node in the size check, and check its key.
match self.key_value {
Some(ref kv) => {
sub_tree_size += 1;
let actual_key = kv.key.encode();
if trie_key != actual_key {
return (false, sub_tree_size);
}
}
None => (),
}
// Recursively check children.
for i in 0..BRANCH_FACTOR {
if let Some(ref child) = self.children[i] {
match child.check_integrity_recursive(&trie_key) {
(false, _) => return (false, sub_tree_size),
(true, child_size) => sub_tree_size += child_size,
}
}
}
(true, sub_tree_size)
}
}
| new | identifier_name |
trie_node.rs | use {TrieNode, KeyValue, NibbleVec, BRANCH_FACTOR};
use keys::*;
macro_rules! no_children {
() => ([
None, None, None, None,
None, None, None, None,
None, None, None, None,
None, None, None, None
])
}
impl<K, V> TrieNode<K, V>
where K: TrieKey
{
/// Create a value-less, child-less TrieNode.
pub fn new() -> TrieNode<K, V> {
TrieNode {
key: NibbleVec::new(),
key_value: None,
children: no_children![],
child_count: 0,
}
}
/// Create a TrieNode with no children.
pub fn with_key_value(key_fragments: NibbleVec, key: K, value: V) -> TrieNode<K, V> {
TrieNode {
key: key_fragments,
key_value: Some(Box::new(KeyValue {
key: key,
value: value,
})),
children: no_children![],
child_count: 0,
}
}
/// Get the key stored at this node, if any.
pub fn key(&self) -> Option<&K> {
self.key_value.as_ref().map(|kv| &kv.key)
}
/// Get the value stored at this node, if any.
pub fn value(&self) -> Option<&V> {
self.key_value.as_ref().map(|kv| &kv.value)
}
/// Get a mutable reference to the value stored at this node, if any.
pub fn value_mut(&mut self) -> Option<&mut V> {
self.key_value.as_mut().map(|kv| &mut kv.value)
}
/// Get the value whilst checking a key match.
pub fn value_checked(&self, key: &K) -> Option<&V> {
self.key_value.as_ref().map(|kv| {
check_keys(&kv.key, key);
&kv.value
})
}
/// Get a mutable value whilst checking a key match.
pub fn value_checked_mut(&mut self, key: &K) -> Option<&mut V> {
self.key_value.as_mut().map(|kv| {
check_keys(&kv.key, key);
&mut kv.value |
/// Compute the number of keys and values in this node's subtrie.
pub fn compute_size(&self) -> usize {
let mut size = if self.key_value.is_some() { 1 } else { 0 };
for child in &self.children {
if let &Some(ref child) = child {
// TODO: could unroll this recursion
size += child.compute_size();
}
}
size
}
/// Add a child at the given index, given that none exists there already.
pub fn add_child(&mut self, idx: usize, node: Box<TrieNode<K, V>>) {
debug_assert!(self.children[idx].is_none());
self.child_count += 1;
self.children[idx] = Some(node);
}
/// Remove a child at the given index, if it exists.
pub fn take_child(&mut self, idx: usize) -> Option<Box<TrieNode<K, V>>> {
self.children[idx].take().map(|node| {
self.child_count -= 1;
node
})
}
/// Helper function for removing the single child of a node.
pub fn take_only_child(&mut self) -> Box<TrieNode<K, V>> {
debug_assert!(self.child_count == 1);
for i in 0..BRANCH_FACTOR {
if let Some(child) = self.take_child(i) {
return child;
}
}
unreachable!("node with child_count 1 has no actual children");
}
/// Set the key and value of a node, given that it currently lacks one.
pub fn add_key_value(&mut self, key: K, value: V) {
debug_assert!(self.key_value.is_none());
self.key_value = Some(Box::new(KeyValue {
key: key,
value: value,
}));
}
/// Move the value out of a node, whilst checking that its key is as expected.
/// Can panic (see check_keys).
pub fn take_value(&mut self, key: &K) -> Option<V> {
self.key_value.take().map(|kv| {
check_keys(&kv.key, key);
kv.value
})
}
/// Replace a value, returning the previous value if there was one.
pub fn replace_value(&mut self, key: K, value: V) -> Option<V> {
// TODO: optimise this?
let previous = self.take_value(&key);
self.add_key_value(key, value);
previous
}
/// Get a reference to this node if it has a value.
pub fn as_value_node(&self) -> Option<&TrieNode<K, V>> {
self.key_value.as_ref().map(|_| self)
}
/// Split a node at a given index in its key, transforming it into a prefix node of its
/// previous self.
pub fn split(&mut self, idx: usize) {
// Extract all the parts of the suffix node, starting with the key.
let key = self.key.split(idx);
// Key-value.
let key_value = self.key_value.take();
// Children.
let mut children = no_children![];
for (i, child) in self.children.iter_mut().enumerate() {
if child.is_some() {
children[i] = child.take();
}
}
// Child count.
let child_count = self.child_count;
self.child_count = 1;
// Insert the collected items below what is now an empty prefix node.
let bucket = key.get(0) as usize;
self.children[bucket] = Some(Box::new(TrieNode {
key: key,
key_value: key_value,
children: children,
child_count: child_count,
}));
}
/// Check the integrity of a trie subtree (quite costly).
/// Return true and the size of the subtree if all checks are successful,
/// or false and a junk value if any test fails.
pub fn check_integrity_recursive(&self, prefix: &NibbleVec) -> (bool, usize) {
let mut sub_tree_size = 0;
let is_root = prefix.len() == 0;
// Check that no value-less, non-root nodes have only 1 child.
if !is_root && self.child_count == 1 && self.key_value.is_none() {
println!("Value-less node with a single child.");
return (false, sub_tree_size);
}
// Check that all non-root key vectors have length > 0.
if !is_root && self.key.len() == 0 {
println!("Key length is 0 at non-root node.");
return (false, sub_tree_size);
}
// Check that the child count matches the actual number of children.
let child_count = self.children.iter().fold(0, |acc, e| acc + (e.is_some() as usize));
if child_count != self.child_count {
println!("Child count error, recorded: {}, actual: {}",
self.child_count,
child_count);
return (false, sub_tree_size);
}
// Compute the key fragments for this node, according to the trie.
let trie_key = prefix.clone().join(&self.key);
// Account for this node in the size check, and check its key.
match self.key_value {
Some(ref kv) => {
sub_tree_size += 1;
let actual_key = kv.key.encode();
if trie_key != actual_key {
return (false, sub_tree_size);
}
}
None => (),
}
// Recursively check children.
for i in 0..BRANCH_FACTOR {
if let Some(ref child) = self.children[i] {
match child.check_integrity_recursive(&trie_key) {
(false, _) => return (false, sub_tree_size),
(true, child_size) => sub_tree_size += child_size,
}
}
}
(true, sub_tree_size)
}
} | })
} | random_line_split |
trie_node.rs | use {TrieNode, KeyValue, NibbleVec, BRANCH_FACTOR};
use keys::*;
macro_rules! no_children {
() => ([
None, None, None, None,
None, None, None, None,
None, None, None, None,
None, None, None, None
])
}
impl<K, V> TrieNode<K, V>
where K: TrieKey
{
/// Create a value-less, child-less TrieNode.
pub fn new() -> TrieNode<K, V> {
TrieNode {
key: NibbleVec::new(),
key_value: None,
children: no_children![],
child_count: 0,
}
}
/// Create a TrieNode with no children.
pub fn with_key_value(key_fragments: NibbleVec, key: K, value: V) -> TrieNode<K, V> {
TrieNode {
key: key_fragments,
key_value: Some(Box::new(KeyValue {
key: key,
value: value,
})),
children: no_children![],
child_count: 0,
}
}
/// Get the key stored at this node, if any.
pub fn key(&self) -> Option<&K> {
self.key_value.as_ref().map(|kv| &kv.key)
}
/// Get the value stored at this node, if any.
pub fn value(&self) -> Option<&V> {
self.key_value.as_ref().map(|kv| &kv.value)
}
/// Get a mutable reference to the value stored at this node, if any.
pub fn value_mut(&mut self) -> Option<&mut V> {
self.key_value.as_mut().map(|kv| &mut kv.value)
}
/// Get the value whilst checking a key match.
pub fn value_checked(&self, key: &K) -> Option<&V> |
/// Get a mutable value whilst checking a key match.
pub fn value_checked_mut(&mut self, key: &K) -> Option<&mut V> {
self.key_value.as_mut().map(|kv| {
check_keys(&kv.key, key);
&mut kv.value
})
}
/// Compute the number of keys and values in this node's subtrie.
pub fn compute_size(&self) -> usize {
let mut size = if self.key_value.is_some() { 1 } else { 0 };
for child in &self.children {
if let &Some(ref child) = child {
// TODO: could unroll this recursion
size += child.compute_size();
}
}
size
}
/// Add a child at the given index, given that none exists there already.
pub fn add_child(&mut self, idx: usize, node: Box<TrieNode<K, V>>) {
debug_assert!(self.children[idx].is_none());
self.child_count += 1;
self.children[idx] = Some(node);
}
/// Remove a child at the given index, if it exists.
pub fn take_child(&mut self, idx: usize) -> Option<Box<TrieNode<K, V>>> {
self.children[idx].take().map(|node| {
self.child_count -= 1;
node
})
}
/// Helper function for removing the single child of a node.
pub fn take_only_child(&mut self) -> Box<TrieNode<K, V>> {
debug_assert!(self.child_count == 1);
for i in 0..BRANCH_FACTOR {
if let Some(child) = self.take_child(i) {
return child;
}
}
unreachable!("node with child_count 1 has no actual children");
}
/// Set the key and value of a node, given that it currently lacks one.
pub fn add_key_value(&mut self, key: K, value: V) {
debug_assert!(self.key_value.is_none());
self.key_value = Some(Box::new(KeyValue {
key: key,
value: value,
}));
}
/// Move the value out of a node, whilst checking that its key is as expected.
/// Can panic (see check_keys).
pub fn take_value(&mut self, key: &K) -> Option<V> {
self.key_value.take().map(|kv| {
check_keys(&kv.key, key);
kv.value
})
}
/// Replace a value, returning the previous value if there was one.
pub fn replace_value(&mut self, key: K, value: V) -> Option<V> {
// TODO: optimise this?
let previous = self.take_value(&key);
self.add_key_value(key, value);
previous
}
/// Get a reference to this node if it has a value.
pub fn as_value_node(&self) -> Option<&TrieNode<K, V>> {
self.key_value.as_ref().map(|_| self)
}
/// Split a node at a given index in its key, transforming it into a prefix node of its
/// previous self.
pub fn split(&mut self, idx: usize) {
// Extract all the parts of the suffix node, starting with the key.
let key = self.key.split(idx);
// Key-value.
let key_value = self.key_value.take();
// Children.
let mut children = no_children![];
for (i, child) in self.children.iter_mut().enumerate() {
if child.is_some() {
children[i] = child.take();
}
}
// Child count.
let child_count = self.child_count;
self.child_count = 1;
// Insert the collected items below what is now an empty prefix node.
let bucket = key.get(0) as usize;
self.children[bucket] = Some(Box::new(TrieNode {
key: key,
key_value: key_value,
children: children,
child_count: child_count,
}));
}
/// Check the integrity of a trie subtree (quite costly).
/// Return true and the size of the subtree if all checks are successful,
/// or false and a junk value if any test fails.
pub fn check_integrity_recursive(&self, prefix: &NibbleVec) -> (bool, usize) {
let mut sub_tree_size = 0;
let is_root = prefix.len() == 0;
// Check that no value-less, non-root nodes have only 1 child.
if !is_root && self.child_count == 1 && self.key_value.is_none() {
println!("Value-less node with a single child.");
return (false, sub_tree_size);
}
// Check that all non-root key vectors have length > 0.
if !is_root && self.key.len() == 0 {
println!("Key length is 0 at non-root node.");
return (false, sub_tree_size);
}
// Check that the child count matches the actual number of children.
let child_count = self.children.iter().fold(0, |acc, e| acc + (e.is_some() as usize));
if child_count != self.child_count {
println!("Child count error, recorded: {}, actual: {}",
self.child_count,
child_count);
return (false, sub_tree_size);
}
// Compute the key fragments for this node, according to the trie.
let trie_key = prefix.clone().join(&self.key);
// Account for this node in the size check, and check its key.
match self.key_value {
Some(ref kv) => {
sub_tree_size += 1;
let actual_key = kv.key.encode();
if trie_key != actual_key {
return (false, sub_tree_size);
}
}
None => (),
}
// Recursively check children.
for i in 0..BRANCH_FACTOR {
if let Some(ref child) = self.children[i] {
match child.check_integrity_recursive(&trie_key) {
(false, _) => return (false, sub_tree_size),
(true, child_size) => sub_tree_size += child_size,
}
}
}
(true, sub_tree_size)
}
}
| {
self.key_value.as_ref().map(|kv| {
check_keys(&kv.key, key);
&kv.value
})
} | identifier_body |
table.rs | use crate::rmeta::*;
use rustc_index::vec::Idx;
use rustc_serialize::opaque::Encoder;
use rustc_serialize::Encoder as _;
use std::convert::TryInto;
use std::marker::PhantomData;
use std::num::NonZeroUsize;
use tracing::debug;
/// Helper trait, for encoding to, and decoding from, a fixed number of bytes.
/// Used mainly for Lazy positions and lengths.
/// Unchecked invariant: `Self::default()` should encode as `[0; BYTE_LEN]`,
/// but this has no impact on safety.
pub(super) trait FixedSizeEncoding: Default {
const BYTE_LEN: usize;
// FIXME(eddyb) convert to and from `[u8; Self::BYTE_LEN]` instead,
// once that starts being allowed by the compiler (i.e. lazy normalization).
fn from_bytes(b: &[u8]) -> Self;
fn write_to_bytes(self, b: &mut [u8]);
// FIXME(eddyb) make these generic functions, or at least defaults here.
// (same problem as above, needs `[u8; Self::BYTE_LEN]`)
// For now, a macro (`fixed_size_encoding_byte_len_and_defaults`) is used.
/// Read a `Self` value (encoded as `Self::BYTE_LEN` bytes),
/// from `&b[i * Self::BYTE_LEN..]`, returning `None` if `i`
/// is not in bounds, or `Some(Self::from_bytes(...))` otherwise.
fn maybe_read_from_bytes_at(b: &[u8], i: usize) -> Option<Self>;
/// Write a `Self` value (encoded as `Self::BYTE_LEN` bytes),
/// at `&mut b[i * Self::BYTE_LEN..]`, using `Self::write_to_bytes`.
fn write_to_bytes_at(self, b: &mut [u8], i: usize);
}
// HACK(eddyb) this shouldn't be needed (see comments on the methods above).
macro_rules! fixed_size_encoding_byte_len_and_defaults {
($byte_len:expr) => {
const BYTE_LEN: usize = $byte_len;
fn maybe_read_from_bytes_at(b: &[u8], i: usize) -> Option<Self> {
const BYTE_LEN: usize = $byte_len;
// HACK(eddyb) ideally this would be done with fully safe code,
// but slicing `[u8]` with `i * N..` is optimized worse, due to the
// possibility of `i * N` overflowing, than indexing `[[u8; N]]`.
let b = unsafe {
std::slice::from_raw_parts(b.as_ptr() as *const [u8; BYTE_LEN], b.len() / BYTE_LEN)
};
b.get(i).map(|b| FixedSizeEncoding::from_bytes(b))
}
fn write_to_bytes_at(self, b: &mut [u8], i: usize) {
const BYTE_LEN: usize = $byte_len;
// HACK(eddyb) ideally this would be done with fully safe code,
// see similar comment in `read_from_bytes_at` for why it can't yet.
let b = unsafe {
std::slice::from_raw_parts_mut(
b.as_mut_ptr() as *mut [u8; BYTE_LEN],
b.len() / BYTE_LEN,
)
};
self.write_to_bytes(&mut b[i]);
}
};
}
impl FixedSizeEncoding for u32 {
fixed_size_encoding_byte_len_and_defaults!(4);
fn from_bytes(b: &[u8]) -> Self {
let mut bytes = [0; Self::BYTE_LEN];
bytes.copy_from_slice(&b[..Self::BYTE_LEN]);
Self::from_le_bytes(bytes)
}
fn write_to_bytes(self, b: &mut [u8]) {
b[..Self::BYTE_LEN].copy_from_slice(&self.to_le_bytes());
}
}
// NOTE(eddyb) there could be an impl for `usize`, which would enable a more
// generic `Lazy<T>` impl, but in the general case we might not need / want to
// fit every `usize` in `u32`.
impl<T> FixedSizeEncoding for Option<Lazy<T>> {
fixed_size_encoding_byte_len_and_defaults!(u32::BYTE_LEN);
fn from_bytes(b: &[u8]) -> Self {
Some(Lazy::from_position(NonZeroUsize::new(u32::from_bytes(b) as usize)?))
}
fn write_to_bytes(self, b: &mut [u8]) {
let position = self.map_or(0, |lazy| lazy.position.get());
let position: u32 = position.try_into().unwrap();
position.write_to_bytes(b)
}
}
impl<T> FixedSizeEncoding for Option<Lazy<[T]>> {
fixed_size_encoding_byte_len_and_defaults!(u32::BYTE_LEN * 2);
fn from_bytes(b: &[u8]) -> Self |
fn write_to_bytes(self, b: &mut [u8]) {
self.map(|lazy| Lazy::<T>::from_position(lazy.position)).write_to_bytes(b);
let len = self.map_or(0, |lazy| lazy.meta);
let len: u32 = len.try_into().unwrap();
len.write_to_bytes(&mut b[u32::BYTE_LEN..]);
}
}
/// Random-access table (i.e. offering constant-time `get`/`set`), similar to
/// `Vec<Option<T>>`, but without requiring encoding or decoding all the values
/// eagerly and in-order.
/// A total of `(max_idx + 1) * <Option<T> as FixedSizeEncoding>::BYTE_LEN` bytes
/// are used for a table, where `max_idx` is the largest index passed to
/// `TableBuilder::set`.
pub(super) struct Table<I: Idx, T>
where
Option<T>: FixedSizeEncoding,
{
_marker: PhantomData<(fn(&I), T)>,
// NOTE(eddyb) this makes `Table` not implement `Sized`, but no
// value of `Table` is ever created (it's always behind `Lazy`).
_bytes: [u8],
}
/// Helper for constructing a table's serialization (also see `Table`).
pub(super) struct TableBuilder<I: Idx, T>
where
Option<T>: FixedSizeEncoding,
{
// FIXME(eddyb) use `IndexVec<I, [u8; <Option<T>>::BYTE_LEN]>` instead of
// `Vec<u8>`, once that starts working (i.e. lazy normalization).
// Then again, that has the downside of not allowing `TableBuilder::encode` to
// obtain a `&[u8]` entirely in safe code, for writing the bytes out.
bytes: Vec<u8>,
_marker: PhantomData<(fn(&I), T)>,
}
impl<I: Idx, T> Default for TableBuilder<I, T>
where
Option<T>: FixedSizeEncoding,
{
fn default() -> Self {
TableBuilder { bytes: vec![], _marker: PhantomData }
}
}
impl<I: Idx, T> TableBuilder<I, T>
where
Option<T>: FixedSizeEncoding,
{
pub(crate) fn set(&mut self, i: I, value: T) {
// FIXME(eddyb) investigate more compact encodings for sparse tables.
// On the PR @michaelwoerister mentioned:
// > Space requirements could perhaps be optimized by using the HAMT `popcnt`
// > trick (i.e. divide things into buckets of 32 or 64 items and then
// > store bit-masks of which item in each bucket is actually serialized).
let i = i.index();
let needed = (i + 1) * <Option<T>>::BYTE_LEN;
if self.bytes.len() < needed {
self.bytes.resize(needed, 0);
}
Some(value).write_to_bytes_at(&mut self.bytes, i);
}
pub(crate) fn encode(&self, buf: &mut Encoder) -> Lazy<Table<I, T>> {
let pos = buf.position();
buf.emit_raw_bytes(&self.bytes).unwrap();
Lazy::from_position_and_meta(NonZeroUsize::new(pos as usize).unwrap(), self.bytes.len())
}
}
impl<I: Idx, T> LazyMeta for Table<I, T>
where
Option<T>: FixedSizeEncoding,
{
type Meta = usize;
fn min_size(len: usize) -> usize {
len
}
}
impl<I: Idx, T> Lazy<Table<I, T>>
where
Option<T>: FixedSizeEncoding,
{
/// Given the metadata, extract out the value at a particular index (if any).
#[inline(never)]
pub(super) fn get<'a, 'tcx, M: Metadata<'a, 'tcx>>(&self, metadata: M, i: I) -> Option<T> {
debug!("Table::lookup: index={:?} len={:?}", i, self.meta);
let start = self.position.get();
let bytes = &metadata.raw_bytes()[start..start + self.meta];
<Option<T>>::maybe_read_from_bytes_at(bytes, i.index())?
}
/// Size of the table in entries, including possible gaps.
pub(super) fn size(&self) -> usize {
self.meta / <Option<T>>::BYTE_LEN
}
}
| {
Some(Lazy::from_position_and_meta(
<Option<Lazy<T>>>::from_bytes(b)?.position,
u32::from_bytes(&b[u32::BYTE_LEN..]) as usize,
))
} | identifier_body |
table.rs | use crate::rmeta::*;
use rustc_index::vec::Idx;
use rustc_serialize::opaque::Encoder;
use rustc_serialize::Encoder as _;
use std::convert::TryInto;
use std::marker::PhantomData;
use std::num::NonZeroUsize;
use tracing::debug;
/// Helper trait, for encoding to, and decoding from, a fixed number of bytes.
/// Used mainly for Lazy positions and lengths.
/// Unchecked invariant: `Self::default()` should encode as `[0; BYTE_LEN]`,
/// but this has no impact on safety.
pub(super) trait FixedSizeEncoding: Default {
const BYTE_LEN: usize;
// FIXME(eddyb) convert to and from `[u8; Self::BYTE_LEN]` instead,
// once that starts being allowed by the compiler (i.e. lazy normalization).
fn from_bytes(b: &[u8]) -> Self;
fn write_to_bytes(self, b: &mut [u8]);
// FIXME(eddyb) make these generic functions, or at least defaults here.
// (same problem as above, needs `[u8; Self::BYTE_LEN]`)
// For now, a macro (`fixed_size_encoding_byte_len_and_defaults`) is used.
/// Read a `Self` value (encoded as `Self::BYTE_LEN` bytes),
/// from `&b[i * Self::BYTE_LEN..]`, returning `None` if `i`
/// is not in bounds, or `Some(Self::from_bytes(...))` otherwise.
fn maybe_read_from_bytes_at(b: &[u8], i: usize) -> Option<Self>;
/// Write a `Self` value (encoded as `Self::BYTE_LEN` bytes),
/// at `&mut b[i * Self::BYTE_LEN..]`, using `Self::write_to_bytes`.
fn write_to_bytes_at(self, b: &mut [u8], i: usize);
}
// HACK(eddyb) this shouldn't be needed (see comments on the methods above).
macro_rules! fixed_size_encoding_byte_len_and_defaults {
($byte_len:expr) => {
const BYTE_LEN: usize = $byte_len;
fn maybe_read_from_bytes_at(b: &[u8], i: usize) -> Option<Self> {
const BYTE_LEN: usize = $byte_len;
// HACK(eddyb) ideally this would be done with fully safe code,
// but slicing `[u8]` with `i * N..` is optimized worse, due to the
// possibility of `i * N` overflowing, than indexing `[[u8; N]]`.
let b = unsafe {
std::slice::from_raw_parts(b.as_ptr() as *const [u8; BYTE_LEN], b.len() / BYTE_LEN)
};
b.get(i).map(|b| FixedSizeEncoding::from_bytes(b))
}
fn write_to_bytes_at(self, b: &mut [u8], i: usize) {
const BYTE_LEN: usize = $byte_len;
// HACK(eddyb) ideally this would be done with fully safe code,
// see similar comment in `read_from_bytes_at` for why it can't yet.
let b = unsafe {
std::slice::from_raw_parts_mut(
b.as_mut_ptr() as *mut [u8; BYTE_LEN],
b.len() / BYTE_LEN,
)
};
self.write_to_bytes(&mut b[i]);
}
};
}
impl FixedSizeEncoding for u32 {
fixed_size_encoding_byte_len_and_defaults!(4);
fn from_bytes(b: &[u8]) -> Self {
let mut bytes = [0; Self::BYTE_LEN];
bytes.copy_from_slice(&b[..Self::BYTE_LEN]);
Self::from_le_bytes(bytes)
}
fn write_to_bytes(self, b: &mut [u8]) {
b[..Self::BYTE_LEN].copy_from_slice(&self.to_le_bytes());
}
}
// NOTE(eddyb) there could be an impl for `usize`, which would enable a more
// generic `Lazy<T>` impl, but in the general case we might not need / want to
// fit every `usize` in `u32`.
impl<T> FixedSizeEncoding for Option<Lazy<T>> {
fixed_size_encoding_byte_len_and_defaults!(u32::BYTE_LEN);
fn | (b: &[u8]) -> Self {
Some(Lazy::from_position(NonZeroUsize::new(u32::from_bytes(b) as usize)?))
}
fn write_to_bytes(self, b: &mut [u8]) {
let position = self.map_or(0, |lazy| lazy.position.get());
let position: u32 = position.try_into().unwrap();
position.write_to_bytes(b)
}
}
impl<T> FixedSizeEncoding for Option<Lazy<[T]>> {
fixed_size_encoding_byte_len_and_defaults!(u32::BYTE_LEN * 2);
fn from_bytes(b: &[u8]) -> Self {
Some(Lazy::from_position_and_meta(
<Option<Lazy<T>>>::from_bytes(b)?.position,
u32::from_bytes(&b[u32::BYTE_LEN..]) as usize,
))
}
fn write_to_bytes(self, b: &mut [u8]) {
self.map(|lazy| Lazy::<T>::from_position(lazy.position)).write_to_bytes(b);
let len = self.map_or(0, |lazy| lazy.meta);
let len: u32 = len.try_into().unwrap();
len.write_to_bytes(&mut b[u32::BYTE_LEN..]);
}
}
/// Random-access table (i.e. offering constant-time `get`/`set`), similar to
/// `Vec<Option<T>>`, but without requiring encoding or decoding all the values
/// eagerly and in-order.
/// A total of `(max_idx + 1) * <Option<T> as FixedSizeEncoding>::BYTE_LEN` bytes
/// are used for a table, where `max_idx` is the largest index passed to
/// `TableBuilder::set`.
pub(super) struct Table<I: Idx, T>
where
Option<T>: FixedSizeEncoding,
{
_marker: PhantomData<(fn(&I), T)>,
// NOTE(eddyb) this makes `Table` not implement `Sized`, but no
// value of `Table` is ever created (it's always behind `Lazy`).
_bytes: [u8],
}
/// Helper for constructing a table's serialization (also see `Table`).
pub(super) struct TableBuilder<I: Idx, T>
where
Option<T>: FixedSizeEncoding,
{
// FIXME(eddyb) use `IndexVec<I, [u8; <Option<T>>::BYTE_LEN]>` instead of
// `Vec<u8>`, once that starts working (i.e. lazy normalization).
// Then again, that has the downside of not allowing `TableBuilder::encode` to
// obtain a `&[u8]` entirely in safe code, for writing the bytes out.
bytes: Vec<u8>,
_marker: PhantomData<(fn(&I), T)>,
}
impl<I: Idx, T> Default for TableBuilder<I, T>
where
Option<T>: FixedSizeEncoding,
{
fn default() -> Self {
TableBuilder { bytes: vec![], _marker: PhantomData }
}
}
impl<I: Idx, T> TableBuilder<I, T>
where
Option<T>: FixedSizeEncoding,
{
pub(crate) fn set(&mut self, i: I, value: T) {
// FIXME(eddyb) investigate more compact encodings for sparse tables.
// On the PR @michaelwoerister mentioned:
// > Space requirements could perhaps be optimized by using the HAMT `popcnt`
// > trick (i.e. divide things into buckets of 32 or 64 items and then
// > store bit-masks of which item in each bucket is actually serialized).
let i = i.index();
let needed = (i + 1) * <Option<T>>::BYTE_LEN;
if self.bytes.len() < needed {
self.bytes.resize(needed, 0);
}
Some(value).write_to_bytes_at(&mut self.bytes, i);
}
pub(crate) fn encode(&self, buf: &mut Encoder) -> Lazy<Table<I, T>> {
let pos = buf.position();
buf.emit_raw_bytes(&self.bytes).unwrap();
Lazy::from_position_and_meta(NonZeroUsize::new(pos as usize).unwrap(), self.bytes.len())
}
}
impl<I: Idx, T> LazyMeta for Table<I, T>
where
Option<T>: FixedSizeEncoding,
{
type Meta = usize;
fn min_size(len: usize) -> usize {
len
}
}
impl<I: Idx, T> Lazy<Table<I, T>>
where
Option<T>: FixedSizeEncoding,
{
/// Given the metadata, extract out the value at a particular index (if any).
#[inline(never)]
pub(super) fn get<'a, 'tcx, M: Metadata<'a, 'tcx>>(&self, metadata: M, i: I) -> Option<T> {
debug!("Table::lookup: index={:?} len={:?}", i, self.meta);
let start = self.position.get();
let bytes = &metadata.raw_bytes()[start..start + self.meta];
<Option<T>>::maybe_read_from_bytes_at(bytes, i.index())?
}
/// Size of the table in entries, including possible gaps.
pub(super) fn size(&self) -> usize {
self.meta / <Option<T>>::BYTE_LEN
}
}
| from_bytes | identifier_name |
table.rs | use crate::rmeta::*;
use rustc_index::vec::Idx;
use rustc_serialize::opaque::Encoder;
use rustc_serialize::Encoder as _;
use std::convert::TryInto;
use std::marker::PhantomData;
use std::num::NonZeroUsize;
use tracing::debug;
/// Helper trait, for encoding to, and decoding from, a fixed number of bytes.
/// Used mainly for Lazy positions and lengths.
/// Unchecked invariant: `Self::default()` should encode as `[0; BYTE_LEN]`,
/// but this has no impact on safety.
pub(super) trait FixedSizeEncoding: Default {
const BYTE_LEN: usize;
// FIXME(eddyb) convert to and from `[u8; Self::BYTE_LEN]` instead,
// once that starts being allowed by the compiler (i.e. lazy normalization).
fn from_bytes(b: &[u8]) -> Self;
fn write_to_bytes(self, b: &mut [u8]);
// FIXME(eddyb) make these generic functions, or at least defaults here.
// (same problem as above, needs `[u8; Self::BYTE_LEN]`)
// For now, a macro (`fixed_size_encoding_byte_len_and_defaults`) is used.
/// Read a `Self` value (encoded as `Self::BYTE_LEN` bytes),
/// from `&b[i * Self::BYTE_LEN..]`, returning `None` if `i`
/// is not in bounds, or `Some(Self::from_bytes(...))` otherwise.
fn maybe_read_from_bytes_at(b: &[u8], i: usize) -> Option<Self>;
/// Write a `Self` value (encoded as `Self::BYTE_LEN` bytes),
/// at `&mut b[i * Self::BYTE_LEN..]`, using `Self::write_to_bytes`.
fn write_to_bytes_at(self, b: &mut [u8], i: usize);
}
// HACK(eddyb) this shouldn't be needed (see comments on the methods above).
macro_rules! fixed_size_encoding_byte_len_and_defaults {
($byte_len:expr) => {
const BYTE_LEN: usize = $byte_len;
fn maybe_read_from_bytes_at(b: &[u8], i: usize) -> Option<Self> {
const BYTE_LEN: usize = $byte_len;
// HACK(eddyb) ideally this would be done with fully safe code,
// but slicing `[u8]` with `i * N..` is optimized worse, due to the
// possibility of `i * N` overflowing, than indexing `[[u8; N]]`.
let b = unsafe {
std::slice::from_raw_parts(b.as_ptr() as *const [u8; BYTE_LEN], b.len() / BYTE_LEN)
};
b.get(i).map(|b| FixedSizeEncoding::from_bytes(b))
}
fn write_to_bytes_at(self, b: &mut [u8], i: usize) {
const BYTE_LEN: usize = $byte_len;
// HACK(eddyb) ideally this would be done with fully safe code,
// see similar comment in `read_from_bytes_at` for why it can't yet.
let b = unsafe {
std::slice::from_raw_parts_mut(
b.as_mut_ptr() as *mut [u8; BYTE_LEN],
b.len() / BYTE_LEN,
)
};
self.write_to_bytes(&mut b[i]);
}
};
}
impl FixedSizeEncoding for u32 {
fixed_size_encoding_byte_len_and_defaults!(4);
fn from_bytes(b: &[u8]) -> Self {
let mut bytes = [0; Self::BYTE_LEN];
bytes.copy_from_slice(&b[..Self::BYTE_LEN]);
Self::from_le_bytes(bytes)
}
fn write_to_bytes(self, b: &mut [u8]) {
b[..Self::BYTE_LEN].copy_from_slice(&self.to_le_bytes());
}
}
// NOTE(eddyb) there could be an impl for `usize`, which would enable a more
// generic `Lazy<T>` impl, but in the general case we might not need / want to
// fit every `usize` in `u32`.
impl<T> FixedSizeEncoding for Option<Lazy<T>> {
fixed_size_encoding_byte_len_and_defaults!(u32::BYTE_LEN);
fn from_bytes(b: &[u8]) -> Self {
Some(Lazy::from_position(NonZeroUsize::new(u32::from_bytes(b) as usize)?))
}
fn write_to_bytes(self, b: &mut [u8]) {
let position = self.map_or(0, |lazy| lazy.position.get());
let position: u32 = position.try_into().unwrap();
position.write_to_bytes(b)
}
}
impl<T> FixedSizeEncoding for Option<Lazy<[T]>> {
fixed_size_encoding_byte_len_and_defaults!(u32::BYTE_LEN * 2);
fn from_bytes(b: &[u8]) -> Self {
Some(Lazy::from_position_and_meta(
<Option<Lazy<T>>>::from_bytes(b)?.position,
u32::from_bytes(&b[u32::BYTE_LEN..]) as usize,
))
}
fn write_to_bytes(self, b: &mut [u8]) {
self.map(|lazy| Lazy::<T>::from_position(lazy.position)).write_to_bytes(b);
let len = self.map_or(0, |lazy| lazy.meta);
let len: u32 = len.try_into().unwrap();
len.write_to_bytes(&mut b[u32::BYTE_LEN..]);
}
}
/// Random-access table (i.e. offering constant-time `get`/`set`), similar to
/// `Vec<Option<T>>`, but without requiring encoding or decoding all the values
/// eagerly and in-order.
/// A total of `(max_idx + 1) * <Option<T> as FixedSizeEncoding>::BYTE_LEN` bytes
/// are used for a table, where `max_idx` is the largest index passed to
/// `TableBuilder::set`.
pub(super) struct Table<I: Idx, T>
where
Option<T>: FixedSizeEncoding,
{
_marker: PhantomData<(fn(&I), T)>,
// NOTE(eddyb) this makes `Table` not implement `Sized`, but no
// value of `Table` is ever created (it's always behind `Lazy`).
_bytes: [u8],
}
/// Helper for constructing a table's serialization (also see `Table`).
pub(super) struct TableBuilder<I: Idx, T>
where
Option<T>: FixedSizeEncoding,
{
// FIXME(eddyb) use `IndexVec<I, [u8; <Option<T>>::BYTE_LEN]>` instead of
// `Vec<u8>`, once that starts working (i.e. lazy normalization).
// Then again, that has the downside of not allowing `TableBuilder::encode` to
// obtain a `&[u8]` entirely in safe code, for writing the bytes out.
bytes: Vec<u8>,
_marker: PhantomData<(fn(&I), T)>,
}
impl<I: Idx, T> Default for TableBuilder<I, T>
where
Option<T>: FixedSizeEncoding,
{
fn default() -> Self {
TableBuilder { bytes: vec![], _marker: PhantomData }
}
}
impl<I: Idx, T> TableBuilder<I, T>
where
Option<T>: FixedSizeEncoding,
{
pub(crate) fn set(&mut self, i: I, value: T) {
// FIXME(eddyb) investigate more compact encodings for sparse tables.
// On the PR @michaelwoerister mentioned:
// > Space requirements could perhaps be optimized by using the HAMT `popcnt`
// > trick (i.e. divide things into buckets of 32 or 64 items and then
// > store bit-masks of which item in each bucket is actually serialized).
let i = i.index();
let needed = (i + 1) * <Option<T>>::BYTE_LEN;
if self.bytes.len() < needed |
Some(value).write_to_bytes_at(&mut self.bytes, i);
}
pub(crate) fn encode(&self, buf: &mut Encoder) -> Lazy<Table<I, T>> {
let pos = buf.position();
buf.emit_raw_bytes(&self.bytes).unwrap();
Lazy::from_position_and_meta(NonZeroUsize::new(pos as usize).unwrap(), self.bytes.len())
}
}
impl<I: Idx, T> LazyMeta for Table<I, T>
where
Option<T>: FixedSizeEncoding,
{
type Meta = usize;
fn min_size(len: usize) -> usize {
len
}
}
impl<I: Idx, T> Lazy<Table<I, T>>
where
Option<T>: FixedSizeEncoding,
{
/// Given the metadata, extract out the value at a particular index (if any).
#[inline(never)]
pub(super) fn get<'a, 'tcx, M: Metadata<'a, 'tcx>>(&self, metadata: M, i: I) -> Option<T> {
debug!("Table::lookup: index={:?} len={:?}", i, self.meta);
let start = self.position.get();
let bytes = &metadata.raw_bytes()[start..start + self.meta];
<Option<T>>::maybe_read_from_bytes_at(bytes, i.index())?
}
/// Size of the table in entries, including possible gaps.
pub(super) fn size(&self) -> usize {
self.meta / <Option<T>>::BYTE_LEN
}
}
| {
self.bytes.resize(needed, 0);
} | conditional_block |
table.rs | use crate::rmeta::*;
use rustc_index::vec::Idx;
use rustc_serialize::opaque::Encoder;
use rustc_serialize::Encoder as _;
use std::convert::TryInto;
use std::marker::PhantomData;
use std::num::NonZeroUsize;
use tracing::debug;
/// Helper trait, for encoding to, and decoding from, a fixed number of bytes.
/// Used mainly for Lazy positions and lengths.
/// Unchecked invariant: `Self::default()` should encode as `[0; BYTE_LEN]`,
/// but this has no impact on safety.
pub(super) trait FixedSizeEncoding: Default {
const BYTE_LEN: usize;
// FIXME(eddyb) convert to and from `[u8; Self::BYTE_LEN]` instead,
// once that starts being allowed by the compiler (i.e. lazy normalization).
fn from_bytes(b: &[u8]) -> Self;
fn write_to_bytes(self, b: &mut [u8]);
// FIXME(eddyb) make these generic functions, or at least defaults here.
// (same problem as above, needs `[u8; Self::BYTE_LEN]`)
// For now, a macro (`fixed_size_encoding_byte_len_and_defaults`) is used.
/// Read a `Self` value (encoded as `Self::BYTE_LEN` bytes),
/// from `&b[i * Self::BYTE_LEN..]`, returning `None` if `i`
/// is not in bounds, or `Some(Self::from_bytes(...))` otherwise.
fn maybe_read_from_bytes_at(b: &[u8], i: usize) -> Option<Self>;
/// Write a `Self` value (encoded as `Self::BYTE_LEN` bytes), |
// HACK(eddyb) this shouldn't be needed (see comments on the methods above).
macro_rules! fixed_size_encoding_byte_len_and_defaults {
($byte_len:expr) => {
const BYTE_LEN: usize = $byte_len;
fn maybe_read_from_bytes_at(b: &[u8], i: usize) -> Option<Self> {
const BYTE_LEN: usize = $byte_len;
// HACK(eddyb) ideally this would be done with fully safe code,
// but slicing `[u8]` with `i * N..` is optimized worse, due to the
// possibility of `i * N` overflowing, than indexing `[[u8; N]]`.
let b = unsafe {
std::slice::from_raw_parts(b.as_ptr() as *const [u8; BYTE_LEN], b.len() / BYTE_LEN)
};
b.get(i).map(|b| FixedSizeEncoding::from_bytes(b))
}
fn write_to_bytes_at(self, b: &mut [u8], i: usize) {
const BYTE_LEN: usize = $byte_len;
// HACK(eddyb) ideally this would be done with fully safe code,
// see similar comment in `read_from_bytes_at` for why it can't yet.
let b = unsafe {
std::slice::from_raw_parts_mut(
b.as_mut_ptr() as *mut [u8; BYTE_LEN],
b.len() / BYTE_LEN,
)
};
self.write_to_bytes(&mut b[i]);
}
};
}
impl FixedSizeEncoding for u32 {
fixed_size_encoding_byte_len_and_defaults!(4);
fn from_bytes(b: &[u8]) -> Self {
let mut bytes = [0; Self::BYTE_LEN];
bytes.copy_from_slice(&b[..Self::BYTE_LEN]);
Self::from_le_bytes(bytes)
}
fn write_to_bytes(self, b: &mut [u8]) {
b[..Self::BYTE_LEN].copy_from_slice(&self.to_le_bytes());
}
}
// NOTE(eddyb) there could be an impl for `usize`, which would enable a more
// generic `Lazy<T>` impl, but in the general case we might not need / want to
// fit every `usize` in `u32`.
impl<T> FixedSizeEncoding for Option<Lazy<T>> {
fixed_size_encoding_byte_len_and_defaults!(u32::BYTE_LEN);
fn from_bytes(b: &[u8]) -> Self {
Some(Lazy::from_position(NonZeroUsize::new(u32::from_bytes(b) as usize)?))
}
fn write_to_bytes(self, b: &mut [u8]) {
let position = self.map_or(0, |lazy| lazy.position.get());
let position: u32 = position.try_into().unwrap();
position.write_to_bytes(b)
}
}
impl<T> FixedSizeEncoding for Option<Lazy<[T]>> {
fixed_size_encoding_byte_len_and_defaults!(u32::BYTE_LEN * 2);
fn from_bytes(b: &[u8]) -> Self {
Some(Lazy::from_position_and_meta(
<Option<Lazy<T>>>::from_bytes(b)?.position,
u32::from_bytes(&b[u32::BYTE_LEN..]) as usize,
))
}
fn write_to_bytes(self, b: &mut [u8]) {
self.map(|lazy| Lazy::<T>::from_position(lazy.position)).write_to_bytes(b);
let len = self.map_or(0, |lazy| lazy.meta);
let len: u32 = len.try_into().unwrap();
len.write_to_bytes(&mut b[u32::BYTE_LEN..]);
}
}
/// Random-access table (i.e. offering constant-time `get`/`set`), similar to
/// `Vec<Option<T>>`, but without requiring encoding or decoding all the values
/// eagerly and in-order.
/// A total of `(max_idx + 1) * <Option<T> as FixedSizeEncoding>::BYTE_LEN` bytes
/// are used for a table, where `max_idx` is the largest index passed to
/// `TableBuilder::set`.
pub(super) struct Table<I: Idx, T>
where
Option<T>: FixedSizeEncoding,
{
_marker: PhantomData<(fn(&I), T)>,
// NOTE(eddyb) this makes `Table` not implement `Sized`, but no
// value of `Table` is ever created (it's always behind `Lazy`).
_bytes: [u8],
}
/// Helper for constructing a table's serialization (also see `Table`).
pub(super) struct TableBuilder<I: Idx, T>
where
Option<T>: FixedSizeEncoding,
{
// FIXME(eddyb) use `IndexVec<I, [u8; <Option<T>>::BYTE_LEN]>` instead of
// `Vec<u8>`, once that starts working (i.e. lazy normalization).
// Then again, that has the downside of not allowing `TableBuilder::encode` to
// obtain a `&[u8]` entirely in safe code, for writing the bytes out.
bytes: Vec<u8>,
_marker: PhantomData<(fn(&I), T)>,
}
impl<I: Idx, T> Default for TableBuilder<I, T>
where
Option<T>: FixedSizeEncoding,
{
fn default() -> Self {
TableBuilder { bytes: vec![], _marker: PhantomData }
}
}
impl<I: Idx, T> TableBuilder<I, T>
where
Option<T>: FixedSizeEncoding,
{
pub(crate) fn set(&mut self, i: I, value: T) {
// FIXME(eddyb) investigate more compact encodings for sparse tables.
// On the PR @michaelwoerister mentioned:
// > Space requirements could perhaps be optimized by using the HAMT `popcnt`
// > trick (i.e. divide things into buckets of 32 or 64 items and then
// > store bit-masks of which item in each bucket is actually serialized).
let i = i.index();
let needed = (i + 1) * <Option<T>>::BYTE_LEN;
if self.bytes.len() < needed {
self.bytes.resize(needed, 0);
}
Some(value).write_to_bytes_at(&mut self.bytes, i);
}
pub(crate) fn encode(&self, buf: &mut Encoder) -> Lazy<Table<I, T>> {
let pos = buf.position();
buf.emit_raw_bytes(&self.bytes).unwrap();
Lazy::from_position_and_meta(NonZeroUsize::new(pos as usize).unwrap(), self.bytes.len())
}
}
impl<I: Idx, T> LazyMeta for Table<I, T>
where
Option<T>: FixedSizeEncoding,
{
type Meta = usize;
fn min_size(len: usize) -> usize {
len
}
}
impl<I: Idx, T> Lazy<Table<I, T>>
where
Option<T>: FixedSizeEncoding,
{
/// Given the metadata, extract out the value at a particular index (if any).
#[inline(never)]
pub(super) fn get<'a, 'tcx, M: Metadata<'a, 'tcx>>(&self, metadata: M, i: I) -> Option<T> {
debug!("Table::lookup: index={:?} len={:?}", i, self.meta);
let start = self.position.get();
let bytes = &metadata.raw_bytes()[start..start + self.meta];
<Option<T>>::maybe_read_from_bytes_at(bytes, i.index())?
}
/// Size of the table in entries, including possible gaps.
pub(super) fn size(&self) -> usize {
self.meta / <Option<T>>::BYTE_LEN
}
} | /// at `&mut b[i * Self::BYTE_LEN..]`, using `Self::write_to_bytes`.
fn write_to_bytes_at(self, b: &mut [u8], i: usize);
} | random_line_split |
zhttpto.rs | //
// zhttpto.rs
//
// Starting code for PS1
// Running on Rust 0.9
//
// Note that this code has serious security risks! You should not run it
// on any system with access to sensitive files.
//
// University of Virginia - cs4414 Spring 2014
// Weilin Xu and David Evans
// Version 0.3
#[feature(globs)];
use std::io::*;
use std::os;
use std::io::net::ip::{SocketAddr};
use std::{str};
static IP: &'static str = "127.0.0.1";
static PORT: int = 4414;
static mut visitor_count: uint = 0;
fn | () {
let addr = from_str::<SocketAddr>(format!("{:s}:{:d}", IP, PORT)).unwrap();
let mut acceptor = net::tcp::TcpListener::bind(addr).listen();
println(format!("Listening on [{:s}] ...", addr.to_str()));
for stream in acceptor.incoming() {
// Spawn a task to handle the connection
do spawn {
let mut stream = stream;
match stream {
Some(ref mut s) => {
match s.peer_name() {
Some(pn) => {println(format!("Received connection from: [{:s}]", pn.to_str()));},
None => ()
}
},
None => ()
}
let mut buf = [0, ..500];
stream.read(buf);
let request_str = str::from_utf8(buf);
let split_str: ~[&str] = request_str.split(' ').collect();
let path = os::getcwd();
let mut path_str: ~str;
if split_str[0] == "GET" && split_str[1] != "" {
path_str =
match path.as_str() {
Some(string) => string+split_str[1],
None => ~"/"
};
let cwdpath = Path::new(path_str.clone());
let fix = path_str.slice(path_str.len()-5, path_str.len()).to_owned();
if split_str[1] == "/" {
println(format!("Received request :\n{:s}", request_str));
let response: ~str =
~"HTTP/1.1 200 OK\r\nContent-Type: text/html; charset=UTF-8\r\n\r\n
<doctype!html><html><head><title>Hello, Rust!</title>
<style>body { background-color: #111; color: #FFEEAA }
h1 { font-size:2cm; text-align: center; color: black; text-shadow: 0 0 4mm red}
h2 { font-size:2cm; text-align: center; color: black; text-shadow: 0 0 4mm green}
</style></head>
<body>
<h1>Greetings, Krusty!</h1>
</body></html>\r\n";
stream.write(response.as_bytes());
println!("Connection terminates.");
}
else if cwdpath.is_file() && fix == ~".html" {
println!("File requested: {:s}", path_str);
let mut file = buffered::BufferedReader::new(File::open(&cwdpath));
let fl_arr: ~[~str] = file.lines().collect();
let mut fr = ~"HTTP/1.1 200 OK\r\nContent-Type: text/html; charset=UTF-8\r\n\r\n";
for line in fl_arr.iter() {
fr = fr + line.to_owned() + "\r\n";
}
stream.write(fr.as_bytes());
}
else {
println!("Error reading file. Received request :\n{:s}", request_str);
let fr = ~"HTTP/1.1 418 I'M A TEAPOT\r\nContent-Type: text/html; charset=UTF-8\r\n\r\nI'm a teapot";
stream.write(fr.as_bytes());
println!("End of failed request.");
}
}
unsafe {
visitor_count+=1;
println!("Request count: {:u}\n", visitor_count);
}
}
}
}
| main | identifier_name |
zhttpto.rs | //
// zhttpto.rs
//
// Starting code for PS1
// Running on Rust 0.9
//
// Note that this code has serious security risks! You should not run it
// on any system with access to sensitive files.
//
// University of Virginia - cs4414 Spring 2014
// Weilin Xu and David Evans
// Version 0.3
#[feature(globs)];
use std::io::*;
use std::os;
use std::io::net::ip::{SocketAddr};
use std::{str};
static IP: &'static str = "127.0.0.1";
static PORT: int = 4414;
static mut visitor_count: uint = 0;
fn main() {
let addr = from_str::<SocketAddr>(format!("{:s}:{:d}", IP, PORT)).unwrap();
let mut acceptor = net::tcp::TcpListener::bind(addr).listen();
println(format!("Listening on [{:s}] ...", addr.to_str()));
for stream in acceptor.incoming() {
// Spawn a task to handle the connection
do spawn {
let mut stream = stream;
match stream {
Some(ref mut s) => {
match s.peer_name() {
Some(pn) => {println(format!("Received connection from: [{:s}]", pn.to_str()));},
None => ()
}
},
None => ()
}
let mut buf = [0, ..500];
stream.read(buf);
let request_str = str::from_utf8(buf);
let split_str: ~[&str] = request_str.split(' ').collect();
let path = os::getcwd();
let mut path_str: ~str;
if split_str[0] == "GET" && split_str[1] != "" {
path_str =
match path.as_str() {
Some(string) => string+split_str[1],
None => ~"/"
};
let cwdpath = Path::new(path_str.clone());
let fix = path_str.slice(path_str.len()-5, path_str.len()).to_owned();
if split_str[1] == "/" {
println(format!("Received request :\n{:s}", request_str));
let response: ~str =
~"HTTP/1.1 200 OK\r\nContent-Type: text/html; charset=UTF-8\r\n\r\n
<doctype!html><html><head><title>Hello, Rust!</title>
<style>body { background-color: #111; color: #FFEEAA }
h1 { font-size:2cm; text-align: center; color: black; text-shadow: 0 0 4mm red}
h2 { font-size:2cm; text-align: center; color: black; text-shadow: 0 0 4mm green}
</style></head>
<body>
<h1>Greetings, Krusty!</h1>
</body></html>\r\n";
stream.write(response.as_bytes());
println!("Connection terminates.");
}
else if cwdpath.is_file() && fix == ~".html" |
else {
println!("Error reading file. Received request :\n{:s}", request_str);
let fr = ~"HTTP/1.1 418 I'M A TEAPOT\r\nContent-Type: text/html; charset=UTF-8\r\n\r\nI'm a teapot";
stream.write(fr.as_bytes());
println!("End of failed request.");
}
}
unsafe {
visitor_count+=1;
println!("Request count: {:u}\n", visitor_count);
}
}
}
}
| {
println!("File requested: {:s}", path_str);
let mut file = buffered::BufferedReader::new(File::open(&cwdpath));
let fl_arr: ~[~str] = file.lines().collect();
let mut fr = ~"HTTP/1.1 200 OK\r\nContent-Type: text/html; charset=UTF-8\r\n\r\n";
for line in fl_arr.iter() {
fr = fr + line.to_owned() + "\r\n";
}
stream.write(fr.as_bytes());
} | conditional_block |
zhttpto.rs | //
// zhttpto.rs
//
// Starting code for PS1
// Running on Rust 0.9
//
// Note that this code has serious security risks! You should not run it
// on any system with access to sensitive files.
//
// University of Virginia - cs4414 Spring 2014
// Weilin Xu and David Evans
// Version 0.3
#[feature(globs)];
use std::io::*;
use std::os;
use std::io::net::ip::{SocketAddr};
use std::{str};
static IP: &'static str = "127.0.0.1";
static PORT: int = 4414;
static mut visitor_count: uint = 0;
fn main() | let mut buf = [0, ..500];
stream.read(buf);
let request_str = str::from_utf8(buf);
let split_str: ~[&str] = request_str.split(' ').collect();
let path = os::getcwd();
let mut path_str: ~str;
if split_str[0] == "GET" && split_str[1] != "" {
path_str =
match path.as_str() {
Some(string) => string+split_str[1],
None => ~"/"
};
let cwdpath = Path::new(path_str.clone());
let fix = path_str.slice(path_str.len()-5, path_str.len()).to_owned();
if split_str[1] == "/" {
println(format!("Received request :\n{:s}", request_str));
let response: ~str =
~"HTTP/1.1 200 OK\r\nContent-Type: text/html; charset=UTF-8\r\n\r\n
<doctype!html><html><head><title>Hello, Rust!</title>
<style>body { background-color: #111; color: #FFEEAA }
h1 { font-size:2cm; text-align: center; color: black; text-shadow: 0 0 4mm red}
h2 { font-size:2cm; text-align: center; color: black; text-shadow: 0 0 4mm green}
</style></head>
<body>
<h1>Greetings, Krusty!</h1>
</body></html>\r\n";
stream.write(response.as_bytes());
println!("Connection terminates.");
}
else if cwdpath.is_file() && fix == ~".html" {
println!("File requested: {:s}", path_str);
let mut file = buffered::BufferedReader::new(File::open(&cwdpath));
let fl_arr: ~[~str] = file.lines().collect();
let mut fr = ~"HTTP/1.1 200 OK\r\nContent-Type: text/html; charset=UTF-8\r\n\r\n";
for line in fl_arr.iter() {
fr = fr + line.to_owned() + "\r\n";
}
stream.write(fr.as_bytes());
}
else {
println!("Error reading file. Received request :\n{:s}", request_str);
let fr = ~"HTTP/1.1 418 I'M A TEAPOT\r\nContent-Type: text/html; charset=UTF-8\r\n\r\nI'm a teapot";
stream.write(fr.as_bytes());
println!("End of failed request.");
}
}
unsafe {
visitor_count+=1;
println!("Request count: {:u}\n", visitor_count);
}
}
}
}
| {
let addr = from_str::<SocketAddr>(format!("{:s}:{:d}", IP, PORT)).unwrap();
let mut acceptor = net::tcp::TcpListener::bind(addr).listen();
println(format!("Listening on [{:s}] ...", addr.to_str()));
for stream in acceptor.incoming() {
// Spawn a task to handle the connection
do spawn {
let mut stream = stream;
match stream {
Some(ref mut s) => {
match s.peer_name() {
Some(pn) => {println(format!("Received connection from: [{:s}]", pn.to_str()));},
None => ()
}
},
None => ()
}
| identifier_body |
zhttpto.rs | //
// zhttpto.rs
//
// Starting code for PS1
// Running on Rust 0.9
//
// Note that this code has serious security risks! You should not run it
// on any system with access to sensitive files.
//
// University of Virginia - cs4414 Spring 2014
// Weilin Xu and David Evans
// Version 0.3
#[feature(globs)];
use std::io::*;
use std::os;
use std::io::net::ip::{SocketAddr};
use std::{str};
static IP: &'static str = "127.0.0.1";
static PORT: int = 4414;
static mut visitor_count: uint = 0;
fn main() {
let addr = from_str::<SocketAddr>(format!("{:s}:{:d}", IP, PORT)).unwrap();
let mut acceptor = net::tcp::TcpListener::bind(addr).listen();
println(format!("Listening on [{:s}] ...", addr.to_str()));
for stream in acceptor.incoming() {
// Spawn a task to handle the connection
do spawn {
let mut stream = stream;
match stream {
Some(ref mut s) => { | }
},
None => ()
}
let mut buf = [0, ..500];
stream.read(buf);
let request_str = str::from_utf8(buf);
let split_str: ~[&str] = request_str.split(' ').collect();
let path = os::getcwd();
let mut path_str: ~str;
if split_str[0] == "GET" && split_str[1] != "" {
path_str =
match path.as_str() {
Some(string) => string+split_str[1],
None => ~"/"
};
let cwdpath = Path::new(path_str.clone());
let fix = path_str.slice(path_str.len()-5, path_str.len()).to_owned();
if split_str[1] == "/" {
println(format!("Received request :\n{:s}", request_str));
let response: ~str =
~"HTTP/1.1 200 OK\r\nContent-Type: text/html; charset=UTF-8\r\n\r\n
<doctype!html><html><head><title>Hello, Rust!</title>
<style>body { background-color: #111; color: #FFEEAA }
h1 { font-size:2cm; text-align: center; color: black; text-shadow: 0 0 4mm red}
h2 { font-size:2cm; text-align: center; color: black; text-shadow: 0 0 4mm green}
</style></head>
<body>
<h1>Greetings, Krusty!</h1>
</body></html>\r\n";
stream.write(response.as_bytes());
println!("Connection terminates.");
}
else if cwdpath.is_file() && fix == ~".html" {
println!("File requested: {:s}", path_str);
let mut file = buffered::BufferedReader::new(File::open(&cwdpath));
let fl_arr: ~[~str] = file.lines().collect();
let mut fr = ~"HTTP/1.1 200 OK\r\nContent-Type: text/html; charset=UTF-8\r\n\r\n";
for line in fl_arr.iter() {
fr = fr + line.to_owned() + "\r\n";
}
stream.write(fr.as_bytes());
}
else {
println!("Error reading file. Received request :\n{:s}", request_str);
let fr = ~"HTTP/1.1 418 I'M A TEAPOT\r\nContent-Type: text/html; charset=UTF-8\r\n\r\nI'm a teapot";
stream.write(fr.as_bytes());
println!("End of failed request.");
}
}
unsafe {
visitor_count+=1;
println!("Request count: {:u}\n", visitor_count);
}
}
}
} | match s.peer_name() {
Some(pn) => {println(format!("Received connection from: [{:s}]", pn.to_str()));},
None => () | random_line_split |
snippet_parser.rs | use std::error::Error;
use structs::*;
use filesystem::read_file_to_json;
use std::fs::read_dir;
use filesystem::read_file_to_string;
use std::fs::DirEntry;
pub fn get_all_snippets() -> Result<Vec<Snippet>, Box<Error>> {
let mut all_snippets = Vec::new();
let snippets_path = "./snippets/";
let snippets_dirs = read_dir(snippets_path).unwrap();
for snippet_folder in snippets_dirs {
let uw = snippet_folder?;
if uw.file_type().expect("failed to get folder type").is_dir() {
let snippet = parse_snippet(&uw);
all_snippets.push(snippet);
}
}
Ok(all_snippets)
}
fn parse_snippet(snippet_folder: &DirEntry) -> Snippet | .map(|x| x.as_str().expect("failed to parse crates").to_string())
.collect(),
tags: info["tags"]
.as_array()
.expect("failed to parse tags")
.into_iter()
.map(|x| x.as_str().expect("failed to parse tags").to_string())
.collect(),
content: content,
link: folder_name,
};
println!("parsed: {}", folder_relative_path);
ssnippet
} | {
let uw = snippet_folder;
let folder_relative_path = uw.path().display().to_string();
let folder_name = uw.file_name()
.to_str()
.expect("failed to get snippet folder name")
.to_string();
let info_path = format!("{}/info.json", folder_relative_path);
let content_path = format!("{}/content.md", folder_relative_path);
let info = read_file_to_json(&info_path);
let content = read_file_to_string(&content_path);
let ssnippet = Snippet {
title: info["title"]
.as_str()
.expect("failed to parse title")
.to_string(),
crates: info["crates"]
.as_array()
.expect("failed to parse crates")
.into_iter() | identifier_body |
snippet_parser.rs | use std::error::Error;
use structs::*;
use filesystem::read_file_to_json;
use std::fs::read_dir;
use filesystem::read_file_to_string;
use std::fs::DirEntry;
pub fn | () -> Result<Vec<Snippet>, Box<Error>> {
let mut all_snippets = Vec::new();
let snippets_path = "./snippets/";
let snippets_dirs = read_dir(snippets_path).unwrap();
for snippet_folder in snippets_dirs {
let uw = snippet_folder?;
if uw.file_type().expect("failed to get folder type").is_dir() {
let snippet = parse_snippet(&uw);
all_snippets.push(snippet);
}
}
Ok(all_snippets)
}
fn parse_snippet(snippet_folder: &DirEntry) -> Snippet {
let uw = snippet_folder;
let folder_relative_path = uw.path().display().to_string();
let folder_name = uw.file_name()
.to_str()
.expect("failed to get snippet folder name")
.to_string();
let info_path = format!("{}/info.json", folder_relative_path);
let content_path = format!("{}/content.md", folder_relative_path);
let info = read_file_to_json(&info_path);
let content = read_file_to_string(&content_path);
let ssnippet = Snippet {
title: info["title"]
.as_str()
.expect("failed to parse title")
.to_string(),
crates: info["crates"]
.as_array()
.expect("failed to parse crates")
.into_iter()
.map(|x| x.as_str().expect("failed to parse crates").to_string())
.collect(),
tags: info["tags"]
.as_array()
.expect("failed to parse tags")
.into_iter()
.map(|x| x.as_str().expect("failed to parse tags").to_string())
.collect(),
content: content,
link: folder_name,
};
println!("parsed: {}", folder_relative_path);
ssnippet
} | get_all_snippets | identifier_name |
snippet_parser.rs | use std::error::Error;
use structs::*;
use filesystem::read_file_to_json;
use std::fs::read_dir;
use filesystem::read_file_to_string;
use std::fs::DirEntry;
pub fn get_all_snippets() -> Result<Vec<Snippet>, Box<Error>> {
let mut all_snippets = Vec::new();
let snippets_path = "./snippets/";
let snippets_dirs = read_dir(snippets_path).unwrap();
for snippet_folder in snippets_dirs {
let uw = snippet_folder?;
if uw.file_type().expect("failed to get folder type").is_dir() {
let snippet = parse_snippet(&uw);
all_snippets.push(snippet);
}
}
Ok(all_snippets)
}
fn parse_snippet(snippet_folder: &DirEntry) -> Snippet {
let uw = snippet_folder; | .to_str()
.expect("failed to get snippet folder name")
.to_string();
let info_path = format!("{}/info.json", folder_relative_path);
let content_path = format!("{}/content.md", folder_relative_path);
let info = read_file_to_json(&info_path);
let content = read_file_to_string(&content_path);
let ssnippet = Snippet {
title: info["title"]
.as_str()
.expect("failed to parse title")
.to_string(),
crates: info["crates"]
.as_array()
.expect("failed to parse crates")
.into_iter()
.map(|x| x.as_str().expect("failed to parse crates").to_string())
.collect(),
tags: info["tags"]
.as_array()
.expect("failed to parse tags")
.into_iter()
.map(|x| x.as_str().expect("failed to parse tags").to_string())
.collect(),
content: content,
link: folder_name,
};
println!("parsed: {}", folder_relative_path);
ssnippet
} | let folder_relative_path = uw.path().display().to_string();
let folder_name = uw.file_name() | random_line_split |
snippet_parser.rs | use std::error::Error;
use structs::*;
use filesystem::read_file_to_json;
use std::fs::read_dir;
use filesystem::read_file_to_string;
use std::fs::DirEntry;
pub fn get_all_snippets() -> Result<Vec<Snippet>, Box<Error>> {
let mut all_snippets = Vec::new();
let snippets_path = "./snippets/";
let snippets_dirs = read_dir(snippets_path).unwrap();
for snippet_folder in snippets_dirs {
let uw = snippet_folder?;
if uw.file_type().expect("failed to get folder type").is_dir() |
}
Ok(all_snippets)
}
fn parse_snippet(snippet_folder: &DirEntry) -> Snippet {
let uw = snippet_folder;
let folder_relative_path = uw.path().display().to_string();
let folder_name = uw.file_name()
.to_str()
.expect("failed to get snippet folder name")
.to_string();
let info_path = format!("{}/info.json", folder_relative_path);
let content_path = format!("{}/content.md", folder_relative_path);
let info = read_file_to_json(&info_path);
let content = read_file_to_string(&content_path);
let ssnippet = Snippet {
title: info["title"]
.as_str()
.expect("failed to parse title")
.to_string(),
crates: info["crates"]
.as_array()
.expect("failed to parse crates")
.into_iter()
.map(|x| x.as_str().expect("failed to parse crates").to_string())
.collect(),
tags: info["tags"]
.as_array()
.expect("failed to parse tags")
.into_iter()
.map(|x| x.as_str().expect("failed to parse tags").to_string())
.collect(),
content: content,
link: folder_name,
};
println!("parsed: {}", folder_relative_path);
ssnippet
} | {
let snippet = parse_snippet(&uw);
all_snippets.push(snippet);
} | conditional_block |
hidden-line.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
| ///
/// ```rust
/// mod to_make_deriving_work { // FIXME #4913
///
/// # #[derive(PartialEq)] // invisible
/// # struct Foo; // invisible
///
/// #[derive(PartialEq)] // Bar
/// struct Bar(Foo);
///
/// fn test() {
/// let x = Bar(Foo);
/// assert_eq!(x, x); // check that the derivings worked
/// }
///
/// }
/// ```
pub fn foo() {}
// @!has hidden_line/fn.foo.html invisible
// @matches - //pre "#\[derive\(PartialEq\)\] // Bar" | /// The '# ' lines should be removed from the output, but the #[derive] should be
/// retained. | random_line_split |
hidden-line.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/// The '# ' lines should be removed from the output, but the #[derive] should be
/// retained.
///
/// ```rust
/// mod to_make_deriving_work { // FIXME #4913
///
/// # #[derive(PartialEq)] // invisible
/// # struct Foo; // invisible
///
/// #[derive(PartialEq)] // Bar
/// struct Bar(Foo);
///
/// fn test() {
/// let x = Bar(Foo);
/// assert_eq!(x, x); // check that the derivings worked
/// }
///
/// }
/// ```
pub fn | () {}
// @!has hidden_line/fn.foo.html invisible
// @matches - //pre "#\[derive\(PartialEq\)\] // Bar"
| foo | identifier_name |
hidden-line.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/// The '# ' lines should be removed from the output, but the #[derive] should be
/// retained.
///
/// ```rust
/// mod to_make_deriving_work { // FIXME #4913
///
/// # #[derive(PartialEq)] // invisible
/// # struct Foo; // invisible
///
/// #[derive(PartialEq)] // Bar
/// struct Bar(Foo);
///
/// fn test() {
/// let x = Bar(Foo);
/// assert_eq!(x, x); // check that the derivings worked
/// }
///
/// }
/// ```
pub fn foo() |
// @!has hidden_line/fn.foo.html invisible
// @matches - //pre "#\[derive\(PartialEq\)\] // Bar"
| {} | identifier_body |
filehandler.rs | /*
 * The module filehandler consists of functions to write
* a string to a file and to read a file into a string
*/
use std::error::Error;
use std::fs::File;
use std::io::prelude::*;
use std::path::Path;
use std::fs;
/*
* Reads a file with name @filename into
* the referenced mutable String @content
*/
pub fn read_file(filename:&String, content: &mut String) {
let path = Path::new(&filename);
let pathstr = path.display();
let mut file = match File::open(&path) {
Err(why) => panic!("could not open {} : {}", pathstr, Error::description(&why)),
Ok(file) => file,
};
let mut tmpcontent = String::new();
match file.read_to_string(&mut tmpcontent) {
Err(why) => panic!("could not read {} : {}", pathstr, Error::description(&why)),
Ok(file) => {}
}
content.push_str(&tmpcontent);
}
/*
* Writes the String @content into a file
* with the name @filename. It overwrites its
* former content.
*/
pub fn | (filename:String, content:String) {
let path = Path::new(&filename);
let parent = path.parent().unwrap();
match fs::create_dir_all(parent) {
Ok(m) => m,
Err(e) => panic!(e), // Parent-Folder could not be created
};
let f = File::create(&filename);
let mut file = match f {
Ok(file) => file,
Err(m) => panic!("file could not be written"),
};
match file.write_all(content.as_bytes()) {
Ok(m) => m,
Err(e) => panic!("The content of file {} could not be written", filename),
};
}
| write_file | identifier_name |
filehandler.rs | /*
 * The module filehandler consists of functions to write
* a string to a file and to read a file into a string
*/
use std::error::Error;
use std::fs::File;
use std::io::prelude::*;
use std::path::Path;
use std::fs;
/*
* Reads a file with name @filename into
* the referenced mutable String @content
*/
pub fn read_file(filename:&String, content: &mut String) {
let path = Path::new(&filename);
let pathstr = path.display();
let mut file = match File::open(&path) {
Err(why) => panic!("could not open {} : {}", pathstr, Error::description(&why)),
Ok(file) => file,
};
let mut tmpcontent = String::new();
match file.read_to_string(&mut tmpcontent) {
Err(why) => panic!("could not read {} : {}", pathstr, Error::description(&why)),
Ok(file) => {}
}
content.push_str(&tmpcontent);
}
/*
* Writes the String @content into a file
* with the name @filename. It overwrites its
* former content.
*/
pub fn write_file(filename:String, content:String) | {
let path = Path::new(&filename);
let parent = path.parent().unwrap();
match fs::create_dir_all(parent) {
Ok(m) => m,
Err(e) => panic!(e), // Parent-Folder could not be created
};
let f = File::create(&filename);
let mut file = match f {
Ok(file) => file,
Err(m) => panic!("file could not be written"),
};
match file.write_all(content.as_bytes()) {
Ok(m) => m,
Err(e) => panic!("The content of file {} could not be written", filename),
};
} | identifier_body |
|
filehandler.rs | /*
 * The module filehandler consists of functions to write
* a string to a file and to read a file into a string
*/
use std::error::Error;
use std::fs::File;
use std::io::prelude::*;
use std::path::Path;
use std::fs;
/*
* Reads a file with name @filename into
* the referenced mutable String @content
*/
pub fn read_file(filename:&String, content: &mut String) {
let path = Path::new(&filename);
let pathstr = path.display();
let mut file = match File::open(&path) {
Err(why) => panic!("could not open {} : {}", pathstr, Error::description(&why)),
Ok(file) => file,
};
let mut tmpcontent = String::new();
match file.read_to_string(&mut tmpcontent) {
Err(why) => panic!("could not read {} : {}", pathstr, Error::description(&why)),
Ok(file) => {} | /*
* Writes the String @content into a file
* with the name @filename. It overwrites its
* former content.
*/
pub fn write_file(filename:String, content:String) {
let path = Path::new(&filename);
let parent = path.parent().unwrap();
match fs::create_dir_all(parent) {
Ok(m) => m,
Err(e) => panic!(e), // Parent-Folder could not be created
};
let f = File::create(&filename);
let mut file = match f {
Ok(file) => file,
Err(m) => panic!("file could not be written"),
};
match file.write_all(content.as_bytes()) {
Ok(m) => m,
Err(e) => panic!("The content of file {} could not be written", filename),
};
} | }
content.push_str(&tmpcontent);
}
| random_line_split |
viewport.rs | use lsp_types::Range;
use serde::{Deserialize, Serialize};
/// Visible lines of editor.
///
/// Inclusive at start, exclusive at end. Both start and end are 0-based.
#[derive(Debug, Serialize, Deserialize, Eq, PartialEq, Copy, Clone)]
pub struct Viewport {
pub start: u64,
pub end: u64,
}
impl Viewport {
#[allow(dead_code)]
pub fn new(start: u64, end: u64) -> Self {
Self { start, end }
}
fn contains(&self, line: u64) -> bool {
line >= self.start && line < self.end
}
pub fn overlaps(&self, range: Range) -> bool {
self.contains(range.start.line) || self.contains(range.end.line)
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_new() |
#[test]
fn test_overlaps() {
use lsp_types::*;
let viewport = Viewport::new(2, 7);
assert_eq!(
viewport.overlaps(Range::new(Position::new(0, 0), Position::new(1, 10))),
false
);
assert_eq!(
viewport.overlaps(Range::new(Position::new(0, 0), Position::new(2, 0))),
true
);
}
}
| {
let viewport = Viewport::new(0, 7);
assert_eq!(viewport.start, 0);
assert_eq!(viewport.end, 7);
} | identifier_body |
viewport.rs | use lsp_types::Range;
use serde::{Deserialize, Serialize};
/// Visible lines of editor.
///
/// Inclusive at start, exclusive at end. Both start and end are 0-based.
#[derive(Debug, Serialize, Deserialize, Eq, PartialEq, Copy, Clone)]
pub struct Viewport {
pub start: u64,
pub end: u64,
}
impl Viewport {
#[allow(dead_code)]
pub fn new(start: u64, end: u64) -> Self {
Self { start, end }
}
fn contains(&self, line: u64) -> bool {
line >= self.start && line < self.end
}
pub fn overlaps(&self, range: Range) -> bool {
self.contains(range.start.line) || self.contains(range.end.line)
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_new() {
let viewport = Viewport::new(0, 7);
assert_eq!(viewport.start, 0); | }
#[test]
fn test_overlaps() {
use lsp_types::*;
let viewport = Viewport::new(2, 7);
assert_eq!(
viewport.overlaps(Range::new(Position::new(0, 0), Position::new(1, 10))),
false
);
assert_eq!(
viewport.overlaps(Range::new(Position::new(0, 0), Position::new(2, 0))),
true
);
}
} | assert_eq!(viewport.end, 7); | random_line_split |
viewport.rs | use lsp_types::Range;
use serde::{Deserialize, Serialize};
/// Visible lines of editor.
///
/// Inclusive at start, exclusive at end. Both start and end are 0-based.
#[derive(Debug, Serialize, Deserialize, Eq, PartialEq, Copy, Clone)]
pub struct Viewport {
pub start: u64,
pub end: u64,
}
impl Viewport {
#[allow(dead_code)]
pub fn | (start: u64, end: u64) -> Self {
Self { start, end }
}
fn contains(&self, line: u64) -> bool {
line >= self.start && line < self.end
}
pub fn overlaps(&self, range: Range) -> bool {
self.contains(range.start.line) || self.contains(range.end.line)
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_new() {
let viewport = Viewport::new(0, 7);
assert_eq!(viewport.start, 0);
assert_eq!(viewport.end, 7);
}
#[test]
fn test_overlaps() {
use lsp_types::*;
let viewport = Viewport::new(2, 7);
assert_eq!(
viewport.overlaps(Range::new(Position::new(0, 0), Position::new(1, 10))),
false
);
assert_eq!(
viewport.overlaps(Range::new(Position::new(0, 0), Position::new(2, 0))),
true
);
}
}
| new | identifier_name |
mkart.rs | extern crate jiyunet_core as core;
extern crate jiyunet_dag as dag;
#[macro_use] extern crate clap;
use std::fs;
use std::io::Read;
use core::io::BinaryComponent;
use core::sig::Signed;
use dag::artifact;
use dag::segment;
mod util;
fn | () {
let matches = clap_app!(jiyu_mkart =>
(version: "0.1.0")
(author: "treyzania <[email protected]>")
(about: "Packages a file into a signed Jiyunet segment. Note that the segment is not likely to be valid on the blockchain due to noncing, etc.")
(@arg src: +required "Source file to package.")
(@arg dest: +required "Output file.")
(@arg artifact_type: -a +takes_value "Artifact type. Default: 0x0000"))
.get_matches();
let src = matches.value_of("src").unwrap();
let dest = matches.value_of("dest").unwrap();
let atype = match matches.value_of("artifact_type").map(str::parse) {
Some(Ok(p)) => p,
Some(Err(_)) => panic!("unable to parse artifact type as number"),
None => 0x0000
};
// Read the source data, convert to artifact.
let data = {
let mut f: fs::File = fs::File::open(src).unwrap();
let mut v = Vec::new();
f.read_to_end(&mut v).expect("error reading provided artifact contents");
v
};
let art = artifact::ArtifactData::new(atype, data);
let seg = segment::Segment::new_artifact_seg(art, util::timestamp());
// Load the keypair, then sign.
let kp = util::load_user_keypair().expect("keypair not found");
let signed_seg = Signed::<segment::Segment>::new(kp, seg);
// Write the signed artifact segment.
let mut out = fs::File::create(dest).expect("unable to create destination");
signed_seg.to_writer(&mut out).expect("unable to write to destination")
}
| main | identifier_name |
mkart.rs | extern crate jiyunet_core as core;
extern crate jiyunet_dag as dag;
#[macro_use] extern crate clap;
use std::fs;
use std::io::Read;
use core::io::BinaryComponent;
use core::sig::Signed;
use dag::artifact;
use dag::segment;
mod util;
fn main() | let data = {
let mut f: fs::File = fs::File::open(src).unwrap();
let mut v = Vec::new();
f.read_to_end(&mut v).expect("error reading provided artifact contents");
v
};
let art = artifact::ArtifactData::new(atype, data);
let seg = segment::Segment::new_artifact_seg(art, util::timestamp());
// Load the keypair, then sign.
let kp = util::load_user_keypair().expect("keypair not found");
let signed_seg = Signed::<segment::Segment>::new(kp, seg);
// Write the signed artifact segment.
let mut out = fs::File::create(dest).expect("unable to create destination");
signed_seg.to_writer(&mut out).expect("unable to write to destination")
}
| {
let matches = clap_app!(jiyu_mkart =>
(version: "0.1.0")
(author: "treyzania <[email protected]>")
(about: "Packages a file into a signed Jiyunet segment. Note that the segment is not likely to be valid on the blockchain due to noncing, etc.")
(@arg src: +required "Source file to package.")
(@arg dest: +required "Output file.")
(@arg artifact_type: -a +takes_value "Artifact type. Default: 0x0000"))
.get_matches();
let src = matches.value_of("src").unwrap();
let dest = matches.value_of("dest").unwrap();
let atype = match matches.value_of("artifact_type").map(str::parse) {
Some(Ok(p)) => p,
Some(Err(_)) => panic!("unable to parse artifact type as number"),
None => 0x0000
};
// Read the source data, convert to artifact. | identifier_body |
mkart.rs | extern crate jiyunet_core as core;
extern crate jiyunet_dag as dag;
#[macro_use] extern crate clap;
use std::fs;
use std::io::Read;
use core::io::BinaryComponent;
use core::sig::Signed;
use dag::artifact;
use dag::segment;
mod util; |
fn main() {
let matches = clap_app!(jiyu_mkart =>
(version: "0.1.0")
(author: "treyzania <[email protected]>")
(about: "Packages a file into a signed Jiyunet segment. Note that the segment is not likely to be valid on the blockchain due to noncing, etc.")
(@arg src: +required "Source file to package.")
(@arg dest: +required "Output file.")
(@arg artifact_type: -a +takes_value "Artifact type. Default: 0x0000"))
.get_matches();
let src = matches.value_of("src").unwrap();
let dest = matches.value_of("dest").unwrap();
let atype = match matches.value_of("artifact_type").map(str::parse) {
Some(Ok(p)) => p,
Some(Err(_)) => panic!("unable to parse artifact type as number"),
None => 0x0000
};
// Read the source data, convert to artifact.
let data = {
let mut f: fs::File = fs::File::open(src).unwrap();
let mut v = Vec::new();
f.read_to_end(&mut v).expect("error reading provided artifact contents");
v
};
let art = artifact::ArtifactData::new(atype, data);
let seg = segment::Segment::new_artifact_seg(art, util::timestamp());
// Load the keypair, then sign.
let kp = util::load_user_keypair().expect("keypair not found");
let signed_seg = Signed::<segment::Segment>::new(kp, seg);
// Write the signed artifact segment.
let mut out = fs::File::create(dest).expect("unable to create destination");
signed_seg.to_writer(&mut out).expect("unable to write to destination")
} | random_line_split |
|
syncer.rs | use postgres;
use serde_json;
use std;
use std::fmt::Debug;
use std::iter::Iterator;
use std::collections::BTreeMap;
use serde::{Serialize,Deserialize};
use errors::*;
pub struct Comm<I: std::iter::Iterator<Item=T>, J: std::iter::Iterator<Item=T>, T: Clone + Eq + Ord> { l: std::iter::Peekable<I>, r: std::iter::Peekable<J>, }
impl<I: std::iter::Iterator<Item=T>, J: std::iter::Iterator<Item=T>, T: Clone + Ord> Comm<I, J, T> {
pub fn new(left: I, right: J) -> Comm<I, J, T> {
Comm { l: left.peekable(), r: right.peekable(), }
}
}
impl<I: std::iter::Iterator<Item=T>, J: std::iter::Iterator<Item=T>, T: Clone + Eq + Ord> Iterator for Comm<I, J, T> {
type Item = (std::cmp::Ordering, T);
fn next(&mut self) -> Option<Self::Item> { // http://stackoverflow.com/a/32020190/6274013
let which = match (self.l.peek(), self.r.peek()) {
(Some(l), Some(r)) => Some(l.cmp(r)),
(Some(_), None) => Some(std::cmp::Ordering::Less),
(None, Some(_)) => Some(std::cmp::Ordering::Greater),
(None, None) => None,
};
match which {
Some(o @ std::cmp::Ordering::Equal) => self.r.next().and(self.l.next()).map(|x| (o, x)),
Some(o @ std::cmp::Ordering::Less) => self.l.next().map(|x| (o, x)),
Some(o @ std::cmp::Ordering::Greater) => self.r.next().map(|x| (o, x)),
None => None,
}
}
}
fn shrink_to_fit<T>(mut v: Vec<T>) -> Vec<T> { v.shrink_to_fit(); v }
//pub fn comm_algorithm_memoryintensive<T>(left: Vec<T>, right: Vec<T>) -> Vec<(std::cmp::Ordering, T)> where T: Clone + Eq + Ord {
// let mut ret: Vec<(std::cmp::Ordering, T)> = Vec::with_capacity(left.capacity()+right.capacity());
// let (mut l, mut r) = (left.iter().peekable(), right.iter().peekable());
// while l.peek().is_some() && r.peek().is_some() {
// let x = l.peek().unwrap().clone();
// let y = r.peek().unwrap().clone();
// match x.cmp(y) {
// o @ std::cmp::Ordering::Equal => { ret.push((o, l.next().and(r.next()).unwrap().clone())); },
// o @ std::cmp::Ordering::Less => { ret.push((o, l.next() .unwrap().clone())); },
// o @ std::cmp::Ordering::Greater => { ret.push((o, r.next() .unwrap().clone())); },
// }
// }
// for item in l { ret.push((std::cmp::Ordering::Less, item.clone())); }
// for item in r { ret.push((std::cmp::Ordering::Greater, item.clone())); }
// shrink_to_fit(ret)
//}
fn comm_list<T>(new: Vec<T>, old: &Vec<T>, heed_deletions: bool) -> (Vec<T>, Vec<T>, Vec<T>) where T: Clone + Eq + Ord {
let (mut all, mut additions, mut deletions) : (Vec<T>, Vec<T>, Vec<T>) = (Vec::with_capacity(new.len()), vec![], vec![]);
for (o, x) in Comm::new(new.into_iter(), old.iter().cloned()) {
match o {
std::cmp::Ordering::Equal => all.push(x),
std::cmp::Ordering::Less => { additions.push(x.clone()); all.push(x) },
std::cmp::Ordering::Greater => { deletions.push(x.clone()); if !heed_deletions { all.push(x) } },
}
}
(shrink_to_fit(all), shrink_to_fit(additions), shrink_to_fit(deletions))
}
fn comm_map<'a, K, T>(mut new: BTreeMap<K, Vec<T>>, old: &'a mut BTreeMap<K, Vec<T>>, heed_deletions: bool) -> (BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>) where T: Debug + Clone + for<'de> Deserialize<'de> + Serialize + Eq + Ord, K: Debug + Ord + Clone + for<'de> Deserialize<'de> + Serialize {
let (mut all, mut additions, mut deletions) : (BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>) = (BTreeMap::new(), BTreeMap::new(), BTreeMap::new());
for k in old.keys() { new.entry(k.clone()).or_insert(vec![]); }
for k in new.keys() { old.entry(k.clone()).or_insert(vec![]); }
//println!("{:?} vs {:?}", new.keys().collect::<Vec<_>>(), old.keys().collect::<Vec<_>>());
for ((key, new_value), (ko, old_value)) in new.into_iter().zip(old.iter()) {
assert!(&key == ko);
let (a, d, l) : (Vec<T>, Vec<T>, Vec<T>) = comm_list::<T>(new_value, old_value, heed_deletions);
all.remove(&key); all.insert(key.clone(), a);
additions.remove(&key); additions.insert(key, d);
deletions.remove(&ko); deletions.insert(ko.clone(), l);
}
//let keys = new.keys().cloned().collect::<Vec<K>>();
//assert!(keys == old.keys().cloned().collect::<Vec<K>>());
//for k in keys {
// let (new_v, old_v) = (new.get(&k).cloned().unwrap(), old.get(&k).cloned().unwrap());
// let (a, d, l) : (Vec<T>, Vec<T>, Vec<T>) = comm_list::<T>(new_v, old_v, heed_deletions);
// all.remove(&k); all.insert(k.clone(), a); additions.remove(&k); additions.insert(k.clone(), d); deletions.remove(&k); deletions.insert(k, l);
//}
(all, additions, deletions)
}
pub fn setup() -> Result<postgres::Connection> {
let conn = postgres::Connection::connect(std::env::var("DATABASE_URL")?.as_str(), postgres::TlsMode::Prefer(&postgres::tls::native_tls::NativeTls::new().unwrap()))?;
conn.execute("CREATE TABLE IF NOT EXISTS blobs (key VARCHAR PRIMARY KEY, val TEXT)", &[])?;
Ok(conn)
}
pub fn read(conn: &postgres::Connection, k: &str) -> Result<String> {
Ok(conn.query("SELECT val FROM blobs WHERE key = $1", &[&k])?.iter().next().map(|r| r.get("val")).unwrap_or_else(String::new))
}
pub fn detect(conn: &postgres::Connection, k: &str) -> Result<bool> {
Ok(conn.query("SELECT val FROM blobs WHERE key = $1", &[&k])?.iter().next().is_some())
}
pub fn write(conn: &postgres::Connection, k: &str, v: &str) -> Result<u64> {
// Yes, the correctness of this methodology relies on a lack of concurrency. Don't try this at home, kids.
let trans = conn.transaction()?;
let updates = trans.execute(if detect(conn, k)? { "UPDATE blobs SET val = $2 WHERE key = $1" } else { "INSERT INTO blobs(key,val) VALUES($1,$2)" }, &[&k, &v])?;
trans.commit()?;
ensure!(updates == 1, ErrorKind::DbWriteNopped(k.to_string()));
Ok(updates)
}
#[inline] pub fn writeback<T>(conn: &postgres::Connection, k: &str, v: &T) -> Result<u64> where T: Serialize + for<'de> Deserialize<'de> + Default {
write(conn, k, &serde_json::to_string(v)?)
}
pub fn readout<T>(conn: &postgres::Connection, k: &str) -> T where T: Serialize + for<'de> Deserialize<'de> + Default {
match read(conn, k) {
Ok(s) => serde_json::from_str(s.clone().as_str()).unwrap_or(T::default()),
Err(_) => T::default(),
}
}
pub fn | <T>(conn: &postgres::Connection, k: &str, new: Vec<T>, old: &Vec<T>, heed_deletions: bool) -> Result<(Vec<T>, Vec<T>, Vec<T>)> where T: Clone + for<'de> Deserialize<'de> + Serialize + Eq + Ord + Debug {
let (all, additions, deletions) = comm_list(new, old, heed_deletions);
if !(additions.is_empty() && deletions.is_empty()) {
writeback(conn, k, &all)?;
}
Ok((all, additions, deletions))
}
pub fn update_map<K, T>(conn: &postgres::Connection, k: &str, new: BTreeMap<K, Vec<T>>, old: &mut BTreeMap<K, Vec<T>>, heed_deletions: bool) -> Result<(BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>)> where T: Clone + for<'de> Deserialize<'de> + Serialize + Eq + Ord + Debug, K: Ord + Clone + for<'de> Deserialize<'de> + Serialize + Debug {
let (all, additions, deletions) = comm_map(new, old, heed_deletions);
if !(additions.is_empty() && deletions.is_empty()) {
writeback(conn, k, &all)?;
}
Ok((all, additions, deletions))
}
| update_list | identifier_name |
syncer.rs | use postgres;
use serde_json;
use std;
use std::fmt::Debug;
use std::iter::Iterator;
use std::collections::BTreeMap;
use serde::{Serialize,Deserialize};
use errors::*;
pub struct Comm<I: std::iter::Iterator<Item=T>, J: std::iter::Iterator<Item=T>, T: Clone + Eq + Ord> { l: std::iter::Peekable<I>, r: std::iter::Peekable<J>, }
impl<I: std::iter::Iterator<Item=T>, J: std::iter::Iterator<Item=T>, T: Clone + Ord> Comm<I, J, T> {
pub fn new(left: I, right: J) -> Comm<I, J, T> {
Comm { l: left.peekable(), r: right.peekable(), }
}
}
impl<I: std::iter::Iterator<Item=T>, J: std::iter::Iterator<Item=T>, T: Clone + Eq + Ord> Iterator for Comm<I, J, T> {
type Item = (std::cmp::Ordering, T);
fn next(&mut self) -> Option<Self::Item> { // http://stackoverflow.com/a/32020190/6274013
let which = match (self.l.peek(), self.r.peek()) {
(Some(l), Some(r)) => Some(l.cmp(r)),
(Some(_), None) => Some(std::cmp::Ordering::Less),
(None, Some(_)) => Some(std::cmp::Ordering::Greater),
(None, None) => None,
};
match which {
Some(o @ std::cmp::Ordering::Equal) => self.r.next().and(self.l.next()).map(|x| (o, x)),
Some(o @ std::cmp::Ordering::Less) => self.l.next().map(|x| (o, x)),
Some(o @ std::cmp::Ordering::Greater) => self.r.next().map(|x| (o, x)),
None => None,
}
}
}
fn shrink_to_fit<T>(mut v: Vec<T>) -> Vec<T> { v.shrink_to_fit(); v }
//pub fn comm_algorithm_memoryintensive<T>(left: Vec<T>, right: Vec<T>) -> Vec<(std::cmp::Ordering, T)> where T: Clone + Eq + Ord { | // match x.cmp(y) {
// o @ std::cmp::Ordering::Equal => { ret.push((o, l.next().and(r.next()).unwrap().clone())); },
// o @ std::cmp::Ordering::Less => { ret.push((o, l.next() .unwrap().clone())); },
// o @ std::cmp::Ordering::Greater => { ret.push((o, r.next() .unwrap().clone())); },
// }
// }
// for item in l { ret.push((std::cmp::Ordering::Less, item.clone())); }
// for item in r { ret.push((std::cmp::Ordering::Greater, item.clone())); }
// shrink_to_fit(ret)
//}
fn comm_list<T>(new: Vec<T>, old: &Vec<T>, heed_deletions: bool) -> (Vec<T>, Vec<T>, Vec<T>) where T: Clone + Eq + Ord {
let (mut all, mut additions, mut deletions) : (Vec<T>, Vec<T>, Vec<T>) = (Vec::with_capacity(new.len()), vec![], vec![]);
for (o, x) in Comm::new(new.into_iter(), old.iter().cloned()) {
match o {
std::cmp::Ordering::Equal => all.push(x),
std::cmp::Ordering::Less => { additions.push(x.clone()); all.push(x) },
std::cmp::Ordering::Greater => { deletions.push(x.clone()); if !heed_deletions { all.push(x) } },
}
}
(shrink_to_fit(all), shrink_to_fit(additions), shrink_to_fit(deletions))
}
fn comm_map<'a, K, T>(mut new: BTreeMap<K, Vec<T>>, old: &'a mut BTreeMap<K, Vec<T>>, heed_deletions: bool) -> (BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>) where T: Debug + Clone + for<'de> Deserialize<'de> + Serialize + Eq + Ord, K: Debug + Ord + Clone + for<'de> Deserialize<'de> + Serialize {
let (mut all, mut additions, mut deletions) : (BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>) = (BTreeMap::new(), BTreeMap::new(), BTreeMap::new());
for k in old.keys() { new.entry(k.clone()).or_insert(vec![]); }
for k in new.keys() { old.entry(k.clone()).or_insert(vec![]); }
//println!("{:?} vs {:?}", new.keys().collect::<Vec<_>>(), old.keys().collect::<Vec<_>>());
for ((key, new_value), (ko, old_value)) in new.into_iter().zip(old.iter()) {
assert!(&key == ko);
let (a, d, l) : (Vec<T>, Vec<T>, Vec<T>) = comm_list::<T>(new_value, old_value, heed_deletions);
all.remove(&key); all.insert(key.clone(), a);
additions.remove(&key); additions.insert(key, d);
deletions.remove(&ko); deletions.insert(ko.clone(), l);
}
//let keys = new.keys().cloned().collect::<Vec<K>>();
//assert!(keys == old.keys().cloned().collect::<Vec<K>>());
//for k in keys {
// let (new_v, old_v) = (new.get(&k).cloned().unwrap(), old.get(&k).cloned().unwrap());
// let (a, d, l) : (Vec<T>, Vec<T>, Vec<T>) = comm_list::<T>(new_v, old_v, heed_deletions);
// all.remove(&k); all.insert(k.clone(), a); additions.remove(&k); additions.insert(k.clone(), d); deletions.remove(&k); deletions.insert(k, l);
//}
(all, additions, deletions)
}
pub fn setup() -> Result<postgres::Connection> {
let conn = postgres::Connection::connect(std::env::var("DATABASE_URL")?.as_str(), postgres::TlsMode::Prefer(&postgres::tls::native_tls::NativeTls::new().unwrap()))?;
conn.execute("CREATE TABLE IF NOT EXISTS blobs (key VARCHAR PRIMARY KEY, val TEXT)", &[])?;
Ok(conn)
}
pub fn read(conn: &postgres::Connection, k: &str) -> Result<String> {
Ok(conn.query("SELECT val FROM blobs WHERE key = $1", &[&k])?.iter().next().map(|r| r.get("val")).unwrap_or_else(String::new))
}
pub fn detect(conn: &postgres::Connection, k: &str) -> Result<bool> {
Ok(conn.query("SELECT val FROM blobs WHERE key = $1", &[&k])?.iter().next().is_some())
}
pub fn write(conn: &postgres::Connection, k: &str, v: &str) -> Result<u64> {
// Yes, the correctness of this methodology relies on a lack of concurrency. Don't try this at home, kids.
let trans = conn.transaction()?;
let updates = trans.execute(if detect(conn, k)? { "UPDATE blobs SET val = $2 WHERE key = $1" } else { "INSERT INTO blobs(key,val) VALUES($1,$2)" }, &[&k, &v])?;
trans.commit()?;
ensure!(updates == 1, ErrorKind::DbWriteNopped(k.to_string()));
Ok(updates)
}
#[inline] pub fn writeback<T>(conn: &postgres::Connection, k: &str, v: &T) -> Result<u64> where T: Serialize + for<'de> Deserialize<'de> + Default {
write(conn, k, &serde_json::to_string(v)?)
}
pub fn readout<T>(conn: &postgres::Connection, k: &str) -> T where T: Serialize + for<'de> Deserialize<'de> + Default {
match read(conn, k) {
Ok(s) => serde_json::from_str(s.clone().as_str()).unwrap_or(T::default()),
Err(_) => T::default(),
}
}
pub fn update_list<T>(conn: &postgres::Connection, k: &str, new: Vec<T>, old: &Vec<T>, heed_deletions: bool) -> Result<(Vec<T>, Vec<T>, Vec<T>)> where T: Clone + for<'de> Deserialize<'de> + Serialize + Eq + Ord + Debug {
let (all, additions, deletions) = comm_list(new, old, heed_deletions);
if !(additions.is_empty() && deletions.is_empty()) {
writeback(conn, k, &all)?;
}
Ok((all, additions, deletions))
}
pub fn update_map<K, T>(conn: &postgres::Connection, k: &str, new: BTreeMap<K, Vec<T>>, old: &mut BTreeMap<K, Vec<T>>, heed_deletions: bool) -> Result<(BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>)> where T: Clone + for<'de> Deserialize<'de> + Serialize + Eq + Ord + Debug, K: Ord + Clone + for<'de> Deserialize<'de> + Serialize + Debug {
let (all, additions, deletions) = comm_map(new, old, heed_deletions);
if !(additions.is_empty() && deletions.is_empty()) {
writeback(conn, k, &all)?;
}
Ok((all, additions, deletions))
} | // let mut ret: Vec<(std::cmp::Ordering, T)> = Vec::with_capacity(left.capacity()+right.capacity());
// let (mut l, mut r) = (left.iter().peekable(), right.iter().peekable());
// while l.peek().is_some() && r.peek().is_some() {
// let x = l.peek().unwrap().clone();
// let y = r.peek().unwrap().clone(); | random_line_split |
syncer.rs | use postgres;
use serde_json;
use std;
use std::fmt::Debug;
use std::iter::Iterator;
use std::collections::BTreeMap;
use serde::{Serialize,Deserialize};
use errors::*;
pub struct Comm<I: std::iter::Iterator<Item=T>, J: std::iter::Iterator<Item=T>, T: Clone + Eq + Ord> { l: std::iter::Peekable<I>, r: std::iter::Peekable<J>, }
impl<I: std::iter::Iterator<Item=T>, J: std::iter::Iterator<Item=T>, T: Clone + Ord> Comm<I, J, T> {
pub fn new(left: I, right: J) -> Comm<I, J, T> {
Comm { l: left.peekable(), r: right.peekable(), }
}
}
impl<I: std::iter::Iterator<Item=T>, J: std::iter::Iterator<Item=T>, T: Clone + Eq + Ord> Iterator for Comm<I, J, T> {
type Item = (std::cmp::Ordering, T);
fn next(&mut self) -> Option<Self::Item> { // http://stackoverflow.com/a/32020190/6274013
let which = match (self.l.peek(), self.r.peek()) {
(Some(l), Some(r)) => Some(l.cmp(r)),
(Some(_), None) => Some(std::cmp::Ordering::Less),
(None, Some(_)) => Some(std::cmp::Ordering::Greater),
(None, None) => None,
};
match which {
Some(o @ std::cmp::Ordering::Equal) => self.r.next().and(self.l.next()).map(|x| (o, x)),
Some(o @ std::cmp::Ordering::Less) => self.l.next().map(|x| (o, x)),
Some(o @ std::cmp::Ordering::Greater) => self.r.next().map(|x| (o, x)),
None => None,
}
}
}
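The `next` implementation above is a three-way merge in the style of `comm(1)`: peek at both sorted streams, emit the smaller head tagged with an `Ordering`, and advance both sides on a tie. A standalone sketch of the same peekable-iterator technique (independent of this crate's `Comm` type; names here are illustrative):

```rust
use std::cmp::Ordering;

// Three-way merge of two sorted iterators, mirroring the `Comm` logic:
// Equal => item in both, Less => only left, Greater => only right.
fn comm<I, J, T>(left: I, right: J) -> Vec<(Ordering, T)>
where
    I: Iterator<Item = T>,
    J: Iterator<Item = T>,
    T: Ord,
{
    let mut l = left.peekable();
    let mut r = right.peekable();
    let mut out = Vec::new();
    loop {
        // Decide which side owns the next item by peeking at both heads.
        let ord = match (l.peek(), r.peek()) {
            (Some(a), Some(b)) => a.cmp(b),
            (Some(_), None) => Ordering::Less,
            (None, Some(_)) => Ordering::Greater,
            (None, None) => break,
        };
        // On a tie both sides advance, exactly as in `Comm::next` above.
        let item = match ord {
            Ordering::Equal => r.next().and(l.next()).unwrap(),
            Ordering::Less => l.next().unwrap(),
            Ordering::Greater => r.next().unwrap(),
        };
        out.push((ord, item));
    }
    out
}

fn main() {
    let merged = comm(vec![1, 2, 4].into_iter(), vec![2, 3].into_iter());
    assert_eq!(
        merged,
        vec![
            (Ordering::Less, 1),
            (Ordering::Equal, 2),
            (Ordering::Greater, 3),
            (Ordering::Less, 4),
        ]
    );
    println!("{:?}", merged);
}
```

Because equal items consume from both sides, a consumer can classify every element as common, left-only, or right-only in a single pass.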
fn shrink_to_fit<T>(mut v: Vec<T>) -> Vec<T> { v.shrink_to_fit(); v }
//pub fn comm_algorithm_memoryintensive<T>(left: Vec<T>, right: Vec<T>) -> Vec<(std::cmp::Ordering, T)> where T: Clone + Eq + Ord {
// let mut ret: Vec<(std::cmp::Ordering, T)> = Vec::with_capacity(left.capacity()+right.capacity());
// let (mut l, mut r) = (left.iter().peekable(), right.iter().peekable());
// while l.peek().is_some() && r.peek().is_some() {
// let x = l.peek().unwrap().clone();
// let y = r.peek().unwrap().clone();
// match x.cmp(y) {
// o @ std::cmp::Ordering::Equal => { ret.push((o, l.next().and(r.next()).unwrap().clone())); },
// o @ std::cmp::Ordering::Less => { ret.push((o, l.next() .unwrap().clone())); },
// o @ std::cmp::Ordering::Greater => { ret.push((o, r.next() .unwrap().clone())); },
// }
// }
// for item in l { ret.push((std::cmp::Ordering::Less, item.clone())); }
// for item in r { ret.push((std::cmp::Ordering::Greater, item.clone())); }
// shrink_to_fit(ret)
//}
fn comm_list<T>(new: Vec<T>, old: &Vec<T>, heed_deletions: bool) -> (Vec<T>, Vec<T>, Vec<T>) where T: Clone + Eq + Ord |
fn comm_map<'a, K, T>(mut new: BTreeMap<K, Vec<T>>, old: &'a mut BTreeMap<K, Vec<T>>, heed_deletions: bool) -> (BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>) where T: Debug + Clone + for<'de> Deserialize<'de> + Serialize + Eq + Ord, K: Debug + Ord + Clone + for<'de> Deserialize<'de> + Serialize {
let (mut all, mut additions, mut deletions) : (BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>) = (BTreeMap::new(), BTreeMap::new(), BTreeMap::new());
for k in old.keys() { new.entry(k.clone()).or_insert(vec![]); }
for k in new.keys() { old.entry(k.clone()).or_insert(vec![]); }
//println!("{:?} vs {:?}", new.keys().collect::<Vec<_>>(), old.keys().collect::<Vec<_>>());
for ((key, new_value), (ko, old_value)) in new.into_iter().zip(old.iter()) {
assert!(&key == ko);
let (a, d, l) : (Vec<T>, Vec<T>, Vec<T>) = comm_list::<T>(new_value, old_value, heed_deletions);
all.remove(&key); all.insert(key.clone(), a);
additions.remove(&key); additions.insert(key, d);
deletions.remove(&ko); deletions.insert(ko.clone(), l);
}
//let keys = new.keys().cloned().collect::<Vec<K>>();
//assert!(keys == old.keys().cloned().collect::<Vec<K>>());
//for k in keys {
// let (new_v, old_v) = (new.get(&k).cloned().unwrap(), old.get(&k).cloned().unwrap());
// let (a, d, l) : (Vec<T>, Vec<T>, Vec<T>) = comm_list::<T>(new_v, old_v, heed_deletions);
// all.remove(&k); all.insert(k.clone(), a); additions.remove(&k); additions.insert(k.clone(), d); deletions.remove(&k); deletions.insert(k, l);
//}
(all, additions, deletions)
}
pub fn setup() -> Result<postgres::Connection> {
let conn = postgres::Connection::connect(std::env::var("DATABASE_URL")?.as_str(), postgres::TlsMode::Prefer(&postgres::tls::native_tls::NativeTls::new().unwrap()))?;
conn.execute("CREATE TABLE IF NOT EXISTS blobs (key VARCHAR PRIMARY KEY, val TEXT)", &[])?;
Ok(conn)
}
pub fn read(conn: &postgres::Connection, k: &str) -> Result<String> {
Ok(conn.query("SELECT val FROM blobs WHERE key = $1", &[&k])?.iter().next().map(|r| r.get("val")).unwrap_or_else(String::new))
}
pub fn detect(conn: &postgres::Connection, k: &str) -> Result<bool> {
Ok(conn.query("SELECT val FROM blobs WHERE key = $1", &[&k])?.iter().next().is_some())
}
pub fn write(conn: &postgres::Connection, k: &str, v: &str) -> Result<u64> {
// Yes, the correctness of this methodology relies on a lack of concurrency. Don't try this at home, kids.
let trans = conn.transaction()?;
let updates = trans.execute(if detect(conn, k)? { "UPDATE blobs SET val = $2 WHERE key = $1" } else { "INSERT INTO blobs(key,val) VALUES($1,$2)" }, &[&k, &v])?;
trans.commit()?;
ensure!(updates == 1, ErrorKind::DbWriteNopped(k.to_string()));
Ok(updates)
}
#[inline] pub fn writeback<T>(conn: &postgres::Connection, k: &str, v: &T) -> Result<u64> where T: Serialize + for<'de> Deserialize<'de> + Default {
write(conn, k, &serde_json::to_string(v)?)
}
pub fn readout<T>(conn: &postgres::Connection, k: &str) -> T where T: Serialize + for<'de> Deserialize<'de> + Default {
match read(conn, k) {
Ok(s) => serde_json::from_str(s.clone().as_str()).unwrap_or(T::default()),
Err(_) => T::default(),
}
}
pub fn update_list<T>(conn: &postgres::Connection, k: &str, new: Vec<T>, old: &Vec<T>, heed_deletions: bool) -> Result<(Vec<T>, Vec<T>, Vec<T>)> where T: Clone + for<'de> Deserialize<'de> + Serialize + Eq + Ord + Debug {
let (all, additions, deletions) = comm_list(new, old, heed_deletions);
if !(additions.is_empty() && deletions.is_empty()) {
writeback(conn, k, &all)?;
}
Ok((all, additions, deletions))
}
pub fn update_map<K, T>(conn: &postgres::Connection, k: &str, new: BTreeMap<K, Vec<T>>, old: &mut BTreeMap<K, Vec<T>>, heed_deletions: bool) -> Result<(BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>, BTreeMap<K, Vec<T>>)> where T: Clone + for<'de> Deserialize<'de> + Serialize + Eq + Ord + Debug, K: Ord + Clone + for<'de> Deserialize<'de> + Serialize + Debug {
let (all, additions, deletions) = comm_map(new, old, heed_deletions);
if !(additions.is_empty() && deletions.is_empty()) {
writeback(conn, k, &all)?;
}
Ok((all, additions, deletions))
}
| {
let (mut all, mut additions, mut deletions) : (Vec<T>, Vec<T>, Vec<T>) = (Vec::with_capacity(new.len()), vec![], vec![]);
for (o, x) in Comm::new(new.into_iter(), old.iter().cloned()) {
match o {
std::cmp::Ordering::Equal => all.push(x),
std::cmp::Ordering::Less => { additions.push(x.clone()); all.push(x) },
std::cmp::Ordering::Greater => { deletions.push(x.clone()); if !heed_deletions { all.push(x) } },
}
}
(shrink_to_fit(all), shrink_to_fit(additions), shrink_to_fit(deletions))
} | identifier_body |
main.rs | use std::io;
use std::io::Read;
use std::collections::HashMap;
use std::str::FromStr;
type DynamicInfo<'a> = HashMap<&'a str, u8>;
#[derive(Debug)]
struct Aunt<'a> {
number: u16,
info: DynamicInfo<'a>
}
fn parse_aunt(line: &str) -> Aunt {
let tokens: Vec<_> = line
.split(|c: char| !c.is_alphanumeric())
.filter(|c| !c.is_empty())
.collect();
let mut aunt = Aunt { number: u16::from_str(tokens[1]).unwrap(), info: HashMap::new() };
for i in 0..((tokens.len() - 2) / 2) {
aunt.info.insert(tokens[2 * i + 2], u8::from_str(tokens[2 * i + 3]).unwrap());
}
aunt
}
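`parse_aunt` relies on a split-on-non-alphanumerics pass that flattens a line into tokens, with attribute/value pairs starting at index 2. A standalone sketch of that tokenization (the sample line is made up for illustration):

```rust
// Same two-stage tokenization as `parse_aunt`: split on every
// non-alphanumeric character, then drop the empty tokens that
// consecutive separators leave behind.
fn tokenize(line: &str) -> Vec<&str> {
    line.split(|c: char| !c.is_alphanumeric())
        .filter(|t| !t.is_empty())
        .collect()
}

fn main() {
    let tokens = tokenize("Sue 1: cars: 9, akitas: 3");
    assert_eq!(tokens, ["Sue", "1", "cars", "9", "akitas", "3"]);
    // Attribute/value pairs start at index 2, as `parse_aunt` assumes.
    assert_eq!(&tokens[2..4], &["cars", "9"][..]);
    println!("{:?}", tokens);
}
```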
fn read_input() -> io::Result<String> {
let mut buffer = String::new();
try!(io::stdin().read_to_string(&mut buffer));
Ok(buffer.trim().to_string()) | .into_iter()
.all(|(attribute, value)| {
match specification.get(attribute) {
Some(x) if x == value => true,
_ => false
}
})
}
fn matches_adjusted(aunt: &Aunt, specification: &DynamicInfo) -> bool {
let ref info = aunt.info;
info
.into_iter()
.all(|(attribute, value)| {
match (*attribute, specification.get(attribute)) {
("cats", Some(x)) | ("trees", Some(x)) => x < value,
("pomeranians", Some(x)) | ("goldfish", Some(x)) => x > value,
(_, Some(x)) => x == value,
_ => false
}
})
}
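The adjusted rule inverts the comparison for the ranged attributes: a candidate's remembered `cats`/`trees` count must exceed the machine reading, while `pomeranians`/`goldfish` must fall below it; everything else must match exactly. A standalone sketch of that per-attribute rule (function and parameter names are illustrative):

```rust
// Per-attribute rule from `matches_adjusted`: ranged attributes compare with
// strict inequalities; everything else must equal the reading exactly.
// `remembered` is the aunt's stored value, `measured` the machine reading.
fn attr_matches(attr: &str, remembered: u8, measured: u8) -> bool {
    match attr {
        "cats" | "trees" => remembered > measured,
        "pomeranians" | "goldfish" => remembered < measured,
        _ => remembered == measured,
    }
}

fn main() {
    assert!(attr_matches("cats", 8, 7)); // must exceed the reading
    assert!(attr_matches("goldfish", 3, 5)); // must fall below it
    assert!(attr_matches("cars", 2, 2)); // exact match elsewhere
    assert!(!attr_matches("trees", 2, 2)); // equality is not enough here
    println!("ok");
}
```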
fn main() {
let input = read_input().unwrap();
let aunts: Vec<_> = input.lines().map(parse_aunt).collect();
let mut machine_output = HashMap::with_capacity(10);
machine_output.insert("children", 3);
machine_output.insert("cats", 7);
machine_output.insert("samoyeds", 2);
machine_output.insert("pomeranians", 3);
machine_output.insert("akitas", 0);
machine_output.insert("vizslas", 0);
machine_output.insert("goldfish", 5);
machine_output.insert("trees", 2);
machine_output.insert("cars", 3);
machine_output.insert("perfumes", 1);
let aunt_sue = aunts.iter().find(|aunt| matches(aunt, &machine_output));
println!("Aunt Sue: {:?}", aunt_sue);
let another_aunt_sue = aunts.iter().find(|aunt| matches_adjusted(aunt, &machine_output));
println!("Another Aunt Sue: {:?}", another_aunt_sue);
} | }
fn matches(aunt: &Aunt, specification: &DynamicInfo) -> bool {
let ref info = aunt.info;
info | random_line_split |
main.rs | use std::io;
use std::io::Read;
use std::collections::HashMap;
use std::str::FromStr;
type DynamicInfo<'a> = HashMap<&'a str, u8>;
#[derive(Debug)]
struct Aunt<'a> {
number: u16,
info: DynamicInfo<'a>
}
fn parse_aunt(line: &str) -> Aunt {
let tokens: Vec<_> = line
.split(|c: char| !c.is_alphanumeric())
.filter(|c| !c.is_empty())
.collect();
let mut aunt = Aunt { number: u16::from_str(tokens[1]).unwrap(), info: HashMap::new() };
for i in 0..((tokens.len() - 2) / 2) {
aunt.info.insert(tokens[2 * i + 2], u8::from_str(tokens[2 * i + 3]).unwrap());
}
aunt
}
fn read_input() -> io::Result<String> {
let mut buffer = String::new();
try!(io::stdin().read_to_string(&mut buffer));
Ok(buffer.trim().to_string())
}
fn matches(aunt: &Aunt, specification: &DynamicInfo) -> bool {
let ref info = aunt.info;
info
.into_iter()
.all(|(attribute, value)| {
match specification.get(attribute) {
Some(x) if x == value => true,
_ => false
}
})
}
fn | (aunt: &Aunt, specification: &DynamicInfo) -> bool {
let ref info = aunt.info;
info
.into_iter()
.all(|(attribute, value)| {
match (*attribute, specification.get(attribute)) {
("cats", Some(x)) | ("trees", Some(x)) => x < value,
("pomeranians", Some(x)) | ("goldfish", Some(x)) => x > value,
(_, Some(x)) => x == value,
_ => false
}
})
}
fn main() {
let input = read_input().unwrap();
let aunts: Vec<_> = input.lines().map(parse_aunt).collect();
let mut machine_output = HashMap::with_capacity(10);
machine_output.insert("children", 3);
machine_output.insert("cats", 7);
machine_output.insert("samoyeds", 2);
machine_output.insert("pomeranians", 3);
machine_output.insert("akitas", 0);
machine_output.insert("vizslas", 0);
machine_output.insert("goldfish", 5);
machine_output.insert("trees", 2);
machine_output.insert("cars", 3);
machine_output.insert("perfumes", 1);
let aunt_sue = aunts.iter().find(|aunt| matches(aunt, &machine_output));
println!("Aunt Sue: {:?}", aunt_sue);
let another_aunt_sue = aunts.iter().find(|aunt| matches_adjusted(aunt, &machine_output));
println!("Another Aunt Sue: {:?}", another_aunt_sue);
}
| matches_adjusted | identifier_name |
main.rs | #[macro_use]
extern crate error_chain;
extern crate calamine;
mod errors;
use std::path::{PathBuf, Path};
use std::fs;
use std::io::{BufWriter, Write};
use errors::Result;
use calamine::{Sheets, Range, CellType};
fn main() {
let mut args = ::std::env::args();
let file = args.by_ref()
.skip(1)
.next()
.expect("USAGE: xl2txt file [root]");
let root = args.next().map(|r| r.into());
run(file.into(), root).unwrap();
}
fn run(file: PathBuf, root: Option<PathBuf>) -> Result<()> {
let paths = XlPaths::new(file, root)?;
let mut xl = Sheets::open(&paths.orig)?;
// defined names
{
let mut f = BufWriter::new(fs::File::create(paths.names)?);
writeln!(f, "| Name | Formula |")?;
writeln!(f, "|------|---------|")?;
for &(ref name, ref formula) in xl.defined_names()? {
writeln!(f, "| {} | {} |", name, formula)?;
}
}
// sheets
let sheets = xl.sheet_names()?;
for s in sheets {
write_range(paths.data.join(format!("{}.md", &s)),
xl.worksheet_range(&s)?)?;
write_range(paths.formula.join(format!("{}.md", &s)),
xl.worksheet_formula(&s)?)?;
}
// vba
if !xl.has_vba() {
return Ok(());
}
let mut vba = xl.vba_project()?;
let vba = vba.to_mut();
for module in vba.get_module_names() {
let mut m = fs::File::create(paths.vba.join(format!("{}.vb", module)))?;
write!(m, "{}", vba.get_module(module)?)?;
}
{
let mut f = BufWriter::new(fs::File::create(paths.refs)?);
writeln!(f, "| Name | Description | Path |")?;
writeln!(f, "|------|-------------|------|")?;
for r in vba.get_references() {
writeln!(f,
"| {} | {} | {} |",
r.name,
r.description,
r.path.display())?;
}
}
Ok(())
}
struct XlPaths {
orig: PathBuf,
data: PathBuf,
formula: PathBuf,
vba: PathBuf,
refs: PathBuf,
names: PathBuf,
}
impl XlPaths {
fn new(orig: PathBuf, root: Option<PathBuf>) -> Result<XlPaths> {
if !orig.exists() {
bail!("Cannot find {}", orig.display());
}
if !orig.is_file() {
bail!("{} is not a file", orig.display());
}
match orig.extension().and_then(|e| e.to_str()) {
Some("xls") | Some("xlsx") | Some("xlsb") | Some("xlsm") | Some("xla") |
Some("xlam") | Some("ods") => (),
Some(e) => bail!("Unrecognized extension: {}", e),
None => bail!("Expecting an Excel file, couldn't find an extension"),
}
let root_next = format!(".{}", &*orig.file_name().unwrap().to_string_lossy());
let root = root.unwrap_or_else(|| orig.parent().map_or(".".into(), |p| p.into()))
.join(root_next);
if root.exists() {
fs::remove_dir_all(&root)?;
}
fs::create_dir_all(&root)?;
let data = root.join("data");
if !data.exists() {
fs::create_dir(&data)?;
}
let vba = root.join("vba");
if !vba.exists() {
fs::create_dir(&vba)?;
}
let formula = root.join("formula");
if !formula.exists() {
fs::create_dir(&formula)?;
}
Ok(XlPaths {
orig: orig,
data: data,
formula: formula,
vba: vba,
refs: root.join("refs.md"),
names: root.join("names.md"),
})
}
}
fn | <P, T>(path: P, range: Range<T>) -> Result<()>
where P: AsRef<Path>,
T: CellType + ::std::fmt::Display
{
if range.is_empty() {
return Ok(());
}
let mut f = BufWriter::new(fs::File::create(path.as_ref())?);
let ((srow, scol), (_, ecol)) = (range.start(), range.end());
write!(f, "| ")?;
for c in scol..ecol + 1 {
write!(f, "| {} ", get_column(c))?;
}
writeln!(f, "|")?;
for _ in scol..ecol + 2 {
write!(f, "|---")?;
}
writeln!(f, "|")?;
// next rows: table data
let srow = srow as usize + 1;
for (i, row) in range.rows().enumerate() {
write!(f, "| __{}__ ", srow + i)?;
for c in row {
write!(f, "| {} ", c)?;
}
writeln!(f, "|")?;
}
Ok(())
}
fn get_column(mut col: u32) -> String {
let mut buf = String::new();
if col < 26 {
buf.push((b'A' + col as u8) as char);
} else {
let mut rev = String::new();
while col >= 26 {
let c = col % 26;
rev.push((b'A' + c as u8) as char);
col -= c;
col /= 26;
}
buf.extend(rev.chars().rev());
}
buf
}
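Note that the multi-letter branch above drops the final quotient: tracing `get_column(26)` leaves `col == 1` after the loop, but only `rev` ever reaches `buf`, so the result is `"A"` instead of the expected `"AA"`. A standalone bijective base-26 sketch that keeps the carry (0-indexed, with `0 -> "A"`; names are illustrative, not part of this program):

```rust
// Bijective base-26 with explicit carry handling; 0-indexed columns:
// 0 -> "A", 25 -> "Z", 26 -> "AA", 27 -> "AB", 702 -> "AAA".
fn column_name(mut col: u32) -> String {
    let mut rev = String::new();
    loop {
        rev.push((b'A' + (col % 26) as u8) as char);
        col /= 26;
        if col == 0 {
            break;
        }
        col -= 1; // borrow one unit: "AA" follows "Z", not "BA".
    }
    rev.chars().rev().collect()
}

fn main() {
    assert_eq!(column_name(0), "A");
    assert_eq!(column_name(26), "AA");
    assert_eq!(column_name(701), "ZZ");
    println!("{} {} {}", column_name(0), column_name(26), column_name(701));
}
```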
| write_range | identifier_name |
main.rs | #[macro_use]
extern crate error_chain;
extern crate calamine;
mod errors;
use std::path::{PathBuf, Path};
use std::fs;
use std::io::{BufWriter, Write};
use errors::Result;
use calamine::{Sheets, Range, CellType};
fn main() {
let mut args = ::std::env::args();
let file = args.by_ref()
.skip(1)
.next()
.expect("USAGE: xl2txt file [root]");
let root = args.next().map(|r| r.into());
run(file.into(), root).unwrap();
}
fn run(file: PathBuf, root: Option<PathBuf>) -> Result<()> {
let paths = XlPaths::new(file, root)?;
let mut xl = Sheets::open(&paths.orig)?;
// defined names
{
let mut f = BufWriter::new(fs::File::create(paths.names)?);
writeln!(f, "| Name | Formula |")?;
writeln!(f, "|------|---------|")?;
for &(ref name, ref formula) in xl.defined_names()? {
writeln!(f, "| {} | {} |", name, formula)?;
}
}
// sheets
let sheets = xl.sheet_names()?;
for s in sheets {
write_range(paths.data.join(format!("{}.md", &s)),
xl.worksheet_range(&s)?)?;
write_range(paths.formula.join(format!("{}.md", &s)),
xl.worksheet_formula(&s)?)?;
}
// vba
if !xl.has_vba() {
return Ok(());
}
let mut vba = xl.vba_project()?;
let vba = vba.to_mut();
for module in vba.get_module_names() {
let mut m = fs::File::create(paths.vba.join(format!("{}.vb", module)))?;
write!(m, "{}", vba.get_module(module)?)?;
}
{
let mut f = BufWriter::new(fs::File::create(paths.refs)?);
writeln!(f, "| Name | Description | Path |")?;
writeln!(f, "|------|-------------|------|")?;
for r in vba.get_references() {
writeln!(f,
"| {} | {} | {} |",
r.name,
r.description,
r.path.display())?;
}
}
Ok(())
}
struct XlPaths {
orig: PathBuf,
data: PathBuf,
formula: PathBuf,
vba: PathBuf,
refs: PathBuf,
names: PathBuf,
}
impl XlPaths {
fn new(orig: PathBuf, root: Option<PathBuf>) -> Result<XlPaths> {
if !orig.exists() {
bail!("Cannot find {}", orig.display());
}
if !orig.is_file() {
bail!("{} is not a file", orig.display());
}
match orig.extension().and_then(|e| e.to_str()) {
Some("xls") | Some("xlsx") | Some("xlsb") | Some("xlsm") | Some("xla") |
Some("xlam") | Some("ods") => (),
Some(e) => bail!("Unrecognized extension: {}", e),
None => bail!("Expecting an Excel file, couldn't find an extension"),
}
let root_next = format!(".{}", &*orig.file_name().unwrap().to_string_lossy());
let root = root.unwrap_or_else(|| orig.parent().map_or(".".into(), |p| p.into()))
.join(root_next);
if root.exists() {
fs::remove_dir_all(&root)?;
}
fs::create_dir_all(&root)?;
let data = root.join("data");
if!data.exists() |
let vba = root.join("vba");
if !vba.exists() {
fs::create_dir(&vba)?;
}
let formula = root.join("formula");
if !formula.exists() {
fs::create_dir(&formula)?;
}
Ok(XlPaths {
orig: orig,
data: data,
formula: formula,
vba: vba,
refs: root.join("refs.md"),
names: root.join("names.md"),
})
}
}
fn write_range<P, T>(path: P, range: Range<T>) -> Result<()>
where P: AsRef<Path>,
T: CellType + ::std::fmt::Display
{
if range.is_empty() {
return Ok(());
}
let mut f = BufWriter::new(fs::File::create(path.as_ref())?);
let ((srow, scol), (_, ecol)) = (range.start(), range.end());
write!(f, "| ")?;
for c in scol..ecol + 1 {
write!(f, "| {} ", get_column(c))?;
}
writeln!(f, "|")?;
for _ in scol..ecol + 2 {
write!(f, "|---")?;
}
writeln!(f, "|")?;
// next rows: table data
let srow = srow as usize + 1;
for (i, row) in range.rows().enumerate() {
write!(f, "| __{}__ ", srow + i)?;
for c in row {
write!(f, "| {} ", c)?;
}
writeln!(f, "|")?;
}
Ok(())
}
fn get_column(mut col: u32) -> String {
let mut buf = String::new();
if col < 26 {
buf.push((b'A' + col as u8) as char);
} else {
let mut rev = String::new();
while col >= 26 {
let c = col % 26;
rev.push((b'A' + c as u8) as char);
col -= c;
col /= 26;
}
buf.extend(rev.chars().rev());
}
buf
}
| {
fs::create_dir(&data)?;
} | conditional_block |
main.rs | #[macro_use]
extern crate error_chain;
extern crate calamine;
mod errors;
use std::path::{PathBuf, Path};
use std::fs;
use std::io::{BufWriter, Write};
use errors::Result;
use calamine::{Sheets, Range, CellType};
fn main() {
let mut args = ::std::env::args();
let file = args.by_ref()
.skip(1)
.next()
.expect("USAGE: xl2txt file [root]");
let root = args.next().map(|r| r.into());
run(file.into(), root).unwrap();
}
fn run(file: PathBuf, root: Option<PathBuf>) -> Result<()> {
let paths = XlPaths::new(file, root)?;
let mut xl = Sheets::open(&paths.orig)?;
// defined names
{
let mut f = BufWriter::new(fs::File::create(paths.names)?);
writeln!(f, "| Name | Formula |")?;
writeln!(f, "|------|---------|")?;
for &(ref name, ref formula) in xl.defined_names()? {
writeln!(f, "| {} | {} |", name, formula)?;
}
}
// sheets
let sheets = xl.sheet_names()?;
for s in sheets {
write_range(paths.data.join(format!("{}.md", &s)),
xl.worksheet_range(&s)?)?;
write_range(paths.formula.join(format!("{}.md", &s)),
xl.worksheet_formula(&s)?)?;
}
// vba
if !xl.has_vba() {
return Ok(());
}
let mut vba = xl.vba_project()?;
let vba = vba.to_mut();
for module in vba.get_module_names() {
let mut m = fs::File::create(paths.vba.join(format!("{}.vb", module)))?;
write!(m, "{}", vba.get_module(module)?)?;
}
{
let mut f = BufWriter::new(fs::File::create(paths.refs)?);
writeln!(f, "| Name | Description | Path |")?;
writeln!(f, "|------|-------------|------|")?;
for r in vba.get_references() {
writeln!(f,
"| {} | {} | {} |",
r.name,
r.description,
r.path.display())?;
}
}
Ok(())
}
struct XlPaths {
orig: PathBuf,
data: PathBuf,
formula: PathBuf,
vba: PathBuf,
refs: PathBuf,
names: PathBuf,
}
impl XlPaths {
fn new(orig: PathBuf, root: Option<PathBuf>) -> Result<XlPaths> {
if !orig.exists() {
bail!("Cannot find {}", orig.display());
}
if !orig.is_file() {
bail!("{} is not a file", orig.display());
}
match orig.extension().and_then(|e| e.to_str()) {
Some("xls") | Some("xlsx") | Some("xlsb") | Some("xlsm") | Some("xla") |
Some("xlam") | Some("ods") => (),
Some(e) => bail!("Unrecognized extension: {}", e),
None => bail!("Expecting an Excel file, couldn't find an extension"),
}
let root_next = format!(".{}", &*orig.file_name().unwrap().to_string_lossy());
let root = root.unwrap_or_else(|| orig.parent().map_or(".".into(), |p| p.into()))
.join(root_next);
if root.exists() {
fs::remove_dir_all(&root)?;
}
fs::create_dir_all(&root)?;
let data = root.join("data");
if !data.exists() {
fs::create_dir(&data)?;
}
let vba = root.join("vba");
if !vba.exists() {
fs::create_dir(&vba)?;
}
let formula = root.join("formula");
if !formula.exists() {
fs::create_dir(&formula)?;
}
Ok(XlPaths {
orig: orig,
data: data,
formula: formula,
vba: vba,
refs: root.join("refs.md"),
names: root.join("names.md"),
})
}
}
fn write_range<P, T>(path: P, range: Range<T>) -> Result<()>
where P: AsRef<Path>,
T: CellType + ::std::fmt::Display
{
if range.is_empty() {
return Ok(());
}
let mut f = BufWriter::new(fs::File::create(path.as_ref())?);
let ((srow, scol), (_, ecol)) = (range.start(), range.end());
write!(f, "| ")?;
for c in scol..ecol + 1 {
write!(f, "| {} ", get_column(c))?;
}
writeln!(f, "|")?;
for _ in scol..ecol + 2 {
write!(f, "|---")?;
}
writeln!(f, "|")?;
// next rows: table data
let srow = srow as usize + 1;
for (i, row) in range.rows().enumerate() {
write!(f, "| __{}__ ", srow + i)?;
for c in row {
write!(f, "| {} ", c)?;
}
writeln!(f, "|")?;
}
Ok(())
}
fn get_column(mut col: u32) -> String {
let mut buf = String::new();
if col < 26 {
buf.push((b'A' + col as u8) as char);
} else {
let mut rev = String::new();
while col >= 26 { | buf.extend(rev.chars().rev());
}
buf
} | let c = col % 26;
rev.push((b'A' + c as u8) as char);
col -= c;
col /= 26;
} | random_line_split |
lib.rs | The interrupt controller is initialized with the provided mapping.
pub fn new(intc_config: &IntcConfig) -> Result<Pruss<'a>> {
// Enforce singleton instantiation.
if PRUSS_IS_INSTANTIATED.swap(true, Ordering::Acquire) {
return Err(Error::AlreadyInstantiated);
}
// Handy function to read the size of system devices.
fn memsize(path: &str) -> io::Result<usize> {
let mut f = try!(File::open(path));
let mut buffer = String::new();
try!(f.read_to_string(&mut buffer));
Ok(usize::from_str_radix(&buffer[2..].trim(), 16).unwrap())
};
// Create memory mapped devices.
let file = try!(SyncFile::new(PRUSS_DEVICE_PATH));
let prumem_size = try!(memsize(UIO_PRUMEM_SIZE_PATH));
let hostmem_size = try!(memsize(UIO_HOSTMEM_SIZE_PATH));
let prumap = try!(MemMap::new(file.fd, prumem_size, 0));
let hostmap = try!(MemMap::new(file.fd, hostmem_size, 1));
// Create and initialize the interrupt controller.
let mut intc = Intc::new(unsafe { prumap.base.offset(INTC_OFFSET as isize) as *mut u32 });
intc.map_interrupts(intc_config);
// Create the PRU code loaders.
let pru0 =
PruLoader::new(unsafe { prumap.base.offset(PRU0CTRL_OFFSET as isize) as *mut u32 },
unsafe { prumap.base.offset(IRAM0_OFFSET as isize) },
IRAM0_SIZE);
let pru1 =
PruLoader::new(unsafe { prumap.base.offset(PRU1CTRL_OFFSET as isize) as *mut u32 },
unsafe { prumap.base.offset(IRAM1_OFFSET as isize) },
IRAM1_SIZE);
// Create memory views.
let dram0 = MemSegment::new(prumap.base, DRAM0_OFFSET, DRAM0_OFFSET + DRAM0_SIZE);
let dram1 = MemSegment::new(prumap.base, DRAM1_OFFSET, DRAM1_OFFSET + DRAM1_SIZE);
let dram2 = MemSegment::new(prumap.base, DRAM2_OFFSET, DRAM2_OFFSET + DRAM2_SIZE);
let hostram = MemSegment::new(hostmap.base, 0, hostmem_size);
// Voila.
Ok(Pruss {
_prumap: prumap,
_hostmap: hostmap,
intc: intc,
pru0: pru0,
pru1: pru1,
dram0: dram0,
dram1: dram1,
dram2: dram2,
hostram: hostram,
})
}
}
impl<'a> Drop for Pruss<'a> {
fn drop(&mut self) {
// Stop instruction executions in both PRUs
self.pru0.reset();
self.pru1.reset();
// Allow another PRU subsystem context to be instantiated.
PRUSS_IS_INSTANTIATED.store(false, Ordering::Release);
}
}
unsafe impl<'a> Send for Pruss<'a> {}
unsafe impl<'a> Sync for Pruss<'a> {}
/// The PRU interrupt controller.
pub struct Intc {
intc_reg: *mut u32,
}
impl Intc {
/// Creates a driver context with sane interrupt mapping defaults.
fn new(intc_reg: *mut u32) -> Self {
let intc = Intc { intc_reg: intc_reg };
intc
}
/// Maps PRU interrupts according to the provided configuration.
pub fn map_interrupts(&mut self, interrupts: &IntcConfig) {
unsafe {
// Set the polarity of system interrupts to high.
ptr::write_volatile(self.intc_reg.offset(SIPR1_REG), 0xffffffff);
ptr::write_volatile(self.intc_reg.offset(SIPR2_REG), 0xffffffff);
// Clear all channel map registers and assign system events to channels.
for cmrx in 0..NUM_CMRX {
ptr::write_volatile(self.intc_reg.offset(CMR_REG + cmrx), 0);
}
for m in &interrupts.sysevt_to_channel_map {
let cmrx = (m.sysevt >> 2) as isize;
debug_assert!(cmrx < NUM_CMRX);
let val = ptr::read_volatile(self.intc_reg.offset(CMR_REG + cmrx));
ptr::write_volatile(self.intc_reg.offset(CMR_REG + cmrx),
val | (m.channel as u32) << ((m.sysevt as u32 & 0b11) * 8));
}
// Clear all host map registers and assign channels to hosts.
for hmrx in 0..NUM_HMRX {
ptr::write_volatile(self.intc_reg.offset(HMR_REG + hmrx), 0);
}
for m in &interrupts.channel_to_host_map {
let hmrx = (m.channel >> 2) as isize;
debug_assert!(hmrx < NUM_HMRX);
let val = ptr::read_volatile(self.intc_reg.offset(HMR_REG + hmrx));
ptr::write_volatile(self.intc_reg.offset(HMR_REG + hmrx),
val | (m.host as u32) << ((m.channel as u32 & 0b11) * 8));
}
// Set the type of system interrupts to pulse.
ptr::write_volatile(self.intc_reg.offset(SITR1_REG), 0x0);
ptr::write_volatile(self.intc_reg.offset(SITR2_REG), 0x0);
// Enable and clear system events.
let (mut mask1, mut mask2) = (0u32, 0u32);
for se in &interrupts.sysevt_enable {
match *se {
0...31 => mask1 |= 1u32 << se,
32...63 => mask2 |= 1u32 << (se - 32),
_ => unreachable!(),
};
}
ptr::write_volatile(self.intc_reg.offset(ESR1_REG), mask1);
ptr::write_volatile(self.intc_reg.offset(SECR1_REG), mask1);
ptr::write_volatile(self.intc_reg.offset(ESR2_REG), mask2);
ptr::write_volatile(self.intc_reg.offset(SECR2_REG), mask2);
// Enable host interrupts.
for h in &interrupts.host_enable {
ptr::write_volatile(self.intc_reg.offset(HIEISR_REG), *h as u32);
}
ptr::write_volatile(self.intc_reg.offset(GER_REG), 0x1);
}
}
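The CMR/HMR writes above pack four 8-bit fields into each 32-bit register, using `index & 0b11` to select the byte lane and `* 8` to turn it into a bit shift. A standalone sketch of that byte-lane arithmetic (register, parameter names, and the sample mapping are illustrative):

```rust
// Each 32-bit CMR/HMR-style register holds four 8-bit fields; entry `idx`
// lands in byte lane `idx & 0b11`, i.e. at bit offset `(idx & 0b11) * 8`.
fn pack_field(reg: u32, idx: u32, value: u8) -> u32 {
    reg | (value as u32) << ((idx & 0b11) * 8)
}

fn main() {
    // Map system events 0..4 to channels 1, 2, 3, 4 in one register.
    let mut cmr = 0u32;
    for &(sysevt, channel) in [(0u32, 1u8), (1, 2), (2, 3), (3, 4)].iter() {
        cmr = pack_field(cmr, sysevt, channel);
    }
    assert_eq!(cmr, 0x0403_0201);
    println!("{:#010x}", cmr);
}
```

As in the register code above, `<<` binds tighter than `|`, so each field is shifted into place before being merged into the accumulated register value.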
/// Triggers a system event.
pub fn send_sysevt(&self, sysevt: Sysevt) {
unsafe {
match sysevt as u8 {
se @ 0...31 => ptr::write_volatile(self.intc_reg.offset(SRSR1_REG),
1u32 << se),
se @ 32...63 => ptr::write_volatile(self.intc_reg.offset(SRSR2_REG),
1u32 << (se - 32)),
_ => unreachable!(),
};
}
}
/// Clears a system event.
pub fn clear_sysevt(&self, sysevt: Sysevt) {
unsafe {
ptr::write_volatile(self.intc_reg.offset(SICR_REG), sysevt as u32);
}
}
/// Enables a system event.
pub fn enable_sysevt(&self, sysevt: Sysevt) {
unsafe {
ptr::write_volatile(self.intc_reg.offset(EISR_REG), sysevt as u32 );
}
}
/// Disables a system event.
pub fn disable_sysevt(&self, sysevt: Sysevt) {
unsafe {
ptr::write_volatile(self.intc_reg.offset(EICR_REG), sysevt as u32 );
}
}
/// Enables or re-enables a host interrupt.
///
/// Beware: calling this function before the triggering system event was cleared will trigger
/// the host interrupt again.
pub fn enable_host<T: Into<Host>>(&self, host: T) {
let host: Host = host.into();
unsafe {
ptr::write_volatile(self.intc_reg.offset(HIEISR_REG), host as u32 );
}
}
/// Disables a host interrupt.
pub fn disable_host<T: Into<Host>>(&self, host: T) {
let host: Host = host.into();
unsafe {
ptr::write_volatile(self.intc_reg.offset(HIDISR_REG), host as u32 );
}
}
/// Returns a synchronization primitive for event out host interrupts.
///
/// Important: this function should be called before any corresponding event out is triggered.
///
/// # Panics
///
/// This function should not panic provided that the uio_pruss kernel module is loaded, which
/// is theoretically guaranteed at this point since `Pruss` could not have been created
/// otherwise.
pub fn register_irq(&self, e: Evtout) -> EvtoutIrq {
EvtoutIrq::new(e)
}
}
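The event-enable step in `map_interrupts` folds the system-event list into two 32-bit masks, one for events 0–31 (ESR1/SECR1) and one for events 32–63 (ESR2/SECR2). A standalone sketch of that mask construction (using modern `..=` range patterns in place of the deprecated `...` syntax; names are illustrative):

```rust
// ESR1/ESR2-style mask building: events 0..=31 set bits in the low register,
// events 32..=63 in the high one.
fn build_masks(events: &[u8]) -> (u32, u32) {
    let (mut mask1, mut mask2) = (0u32, 0u32);
    for &se in events {
        match se {
            0..=31 => mask1 |= 1u32 << se,
            32..=63 => mask2 |= 1u32 << (se - 32),
            _ => unreachable!("system events are 0..=63"),
        }
    }
    (mask1, mask2)
}

fn main() {
    let (m1, m2) = build_masks(&[0, 3, 31, 32, 63]);
    assert_eq!(m1, (1u32 << 0) | (1 << 3) | (1 << 31));
    assert_eq!(m2, (1u32 << 0) | (1 << 31));
    println!("{:#x} {:#x}", m1, m2);
}
```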
/// PRU instruction code loader.
pub struct PruLoader {
pructrl_reg: *mut u32,
iram_base: *mut u8,
iram_size: usize,
}
impl PruLoader {
fn new(pructrl_reg: *mut u32, iram_base: *mut u8, iram_size: usize) -> PruLoader {
PruLoader {
pructrl_reg: pructrl_reg,
iram_base: iram_base,
iram_size: iram_size,
}
}
/// Loads a binary of opcodes to the PRU without executing it.
///
/// This function proceeds as follows:
///
/// * a soft PRU reset is forced,
/// * the code is written to the PRU instruction RAM.
///
/// The code can be subsequently started and stopped using the returned `PruCode` handle.
///
/// # Errors
///
/// IO errors that may occur while reading the buffer are forwarded.
/// If the buffer cannot be read entirely because the code does not fit into the instruction
/// RAM, an error of the kind `ErrorKind::InvalidInput` is returned.
pub fn load_code<R: Read>(&mut self, code: &mut R) -> io::Result<PruCode> {
// Invoke a soft reset of the PRU to make sure no code is currently running.
self.reset();
// Write the code to the instruction RAM.
let n: usize = try!(code.read( unsafe {
std::slice::from_raw_parts_mut(self.iram_base, self.iram_size)
}));
// Make sure the whole buffer was read, otherwise return an InvalidInput error kind.
match n {
0 => {
Err(io::Error::new(io::ErrorKind::InvalidInput,
"size of PRU code exceeding instruction RAM capacity"))
}
_ => {
// Introduce a fence to ensure that IRAM writes are not reordered past the
// call to PruCode::run().
// Does it actually work? Who knows, we did what we could.
compiler_fence(Ordering::Release);
Ok(PruCode::new(self.pructrl_reg))
}
}
}
/// Resets the PRU.
///
/// Invokes a soft reset by clearing the PRU control register.
fn reset(&mut self) {
unsafe {
ptr::write_volatile(self.pructrl_reg, 0);
}
}
}
/// View of a contiguous memory segment.
///
/// The design of MemSegment is meant to allow allocation at arbitrary addresses while preventing
/// memory aliasing. This is achieved by allowing segments to be recursively split and by
/// borrowing segments upon object allocation, thus preventing further splitting and allocation
/// until the allocated object goes out of scope. For this reason, segments are neither copyable
/// nor clonable.
pub struct MemSegment<'a> {
// It is necessary to keep the `from` index rather than offset the `base` pointer because
// alignment must be checked when allocating memory for arbitrary types.
base: *mut u8,
from: usize,
to: usize,
_memory_marker: PhantomData<&'a [u8]>,
}
impl<'a> MemSegment<'a> {
fn new<'b>(base: *mut u8, from: usize, to: usize) -> MemSegment<'b> {
MemSegment {
base: base,
from: from,
to: to,
_memory_marker: PhantomData,
}
}
/// Allocates an object at the beginning of the segment.
///
/// # Panics
///
/// This function will panic if the beginning of the segment is not properly aligned
/// for type T or if the size of T exceeds its capacity.
#[inline]
pub fn alloc<T: Copy>(&mut self, source: T) -> &mut T {
let target: &mut T = unsafe { self.alloc_uninitialized() };
*target = source;
target
}
/// Allocates an object at the beginning of the segment without initializing it.
///
/// This can save some unnecessary initialization if the PRU is going to initialize the
/// memory anyway before it is read by the host. In some cases, it can also be used to
/// avoid trashing the stack with a large temporary initialization object if for some
/// reason the compiler cannot inline the call to `alloc`.
///
/// # Undefined Behavior
///
/// Reading an uninitialized object is undefined behavior (even for Copy types).
///
/// # Panics
///
/// This function will panic if the beginning of the segment is not properly aligned
/// for type T or if the size of T exceeds its capacity.
pub unsafe fn alloc_uninitialized<T: Copy>(&mut self) -> &mut T {
// Make sure the beginning of the memory region is properly aligned for type T.
assert!(self.from % mem::align_of::<T>() == 0);
// Make sure the region is large enough to hold type T.
assert!(self.to - self.from >= mem::size_of::<T>());
&mut *(self.base.offset(self.from as isize) as *mut T)
}
/// Position at which the segment starts (in bytes).
pub fn begin(&self) -> usize {
self.from
}
/// Position at which the segment ends (in bytes).
pub fn end(&self) -> usize {
self.to
}
/// Splits the memory segment into two at the given byte position.
///
/// Note that positions (addresses) are absolute and remain valid after the splitting
/// operation. If for instance a segment is split at 0x00001000, the `begin` method of
/// the second segment thus created will return 0x00001000 and not 0x00000000.
pub fn split_at(&mut self, position: usize) -> (MemSegment, MemSegment) |
}
unsafe impl<'a> Send for MemSegment<'a> {}
unsafe impl<'a> Sync for MemSegment<'a> {}
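The absolute-position semantics documented for `split_at` can be illustrated with a minimal stand-in for `MemSegment`'s bookkeeping (the raw base pointer is omitted; the `Span` type and the addresses are invented for the example):

```rust
// Minimal stand-in for MemSegment's (from, to) bookkeeping.
#[derive(Debug, PartialEq)]
struct Span {
    from: usize,
    to: usize,
}

// Splits a span at an absolute byte position, like MemSegment::split_at.
fn split_at(s: Span, position: usize) -> (Span, Span) {
    assert!(position >= s.from && position <= s.to);
    (
        Span { from: s.from, to: position },
        Span { from: position, to: s.to },
    )
}

fn main() {
    let dram = Span { from: 0x0000, to: 0x2000 };
    let (low, high) = split_at(dram, 0x1000);
    // Positions stay absolute: the second segment begins at 0x1000, not 0.
    assert_eq!(low.to, 0x1000);
    assert_eq!(high.from, 0x1000);
    assert_eq!(high.to, 0x2000);
}
```

Keeping positions absolute is what lets the `from` index double as the alignment check in `alloc_uninitialized`.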
/// PRU interrupt controller configuration.
///
/// A call to the `new_populated` method automatically initializes the data with the same defaults
/// as the PRUSS_INTC_INITDATA macro of the C prussdrv library. Alternatively, a blank-state
/// initialization data structure can be created with `new_empty` and then populated with the
/// dedicated methods.
#[derive(Clone)]
pub struct IntcConfig {
sysevt_to_channel_map: Vec<SysevtToChannel>,
channel_to_host_map: Vec<ChannelToHost>,
sysevt_enable: Vec<u8>,
host_enable: Vec<u8>,
}
impl IntcConfig {
/// Constructs an empty PRU interrupt controller configuration.
pub fn new_empty() -> IntcConfig {
IntcConfig {
sysevt_to_channel_map: Vec::new(),
channel_to_host_map: Vec::new(),
sysevt_enable: Vec::new(),
host_enable: Vec::new(),
}
}
/// Constructs a PRU interrupt controller configuration with a default mapping.
///
/// The mapping reflects the one defined in the `PRUSS_INTC_INITDATA` C macro of the C
/// prussdrv library, namely:
///
/// * it maps:
/// - `Sysevt::S17` to `Channel::C1`,
/// - `Sysevt::S18` to `Channel::C0`,
/// - `Sysevt::S19` to `Channel::C2`,
/// - `Sysevt::S20` to `Channel::C3`,
/// - `Sysevt::S21` to `Channel::C0`,
/// - `Sysevt::S22` to `Channel::C1`,
///
/// * it maps:
/// - `Channel::C0` to `Host::Pru0`,
/// - `Channel::C1` to `Host::Pru1`,
/// - `Channel::C2` to `Host::Evtout0`,
/// - `Channel::C3` to `Host::Evtout1`,
///
/// * it enables:
/// - `Sysevt::S17`,
/// - `Sysevt::S18`,
/// - `Sysevt::S19`,
/// - `Sysevt::S20`,
/// - `Sysevt::S21`,
/// - `Sysevt::S22`,
///
/// * it enables:
/// - `Host::Pru0`,
/// - `Host::Pru1`,
/// - `Host::Evtout0`,
/// - `Host::Evtout1`
///
pub fn new_populated() -> IntcConfig {
let mut config_data = Self::new_empty();
config_data.map_sysevts_to_channels(&[(Sysevt::S17, Channel::C1),
(Sysevt::S18, Channel::C0),
(Sysevt::S19, Channel::C2),
(Sysevt::S20, Channel::C3),
(Sysevt::S21, Channel::C0),
(Sysevt::S22, Channel::C1)]);
config_data.map_channels_to_hosts(&[(Channel::C0, Host::Pru0),
(Channel::C1, Host::Pru1),
(Channel::C2, Host::Evtout0),
(Channel::C3, Host::Evtout1)]);
config_data.auto_enable_sysevts();
config_data.auto_enable_hosts();
config_data
}
/// Enables the specified system events.
///
/// # Panics
///
/// This will panic if a system event is enabled several times.
pub fn enable_sysevts(&mut self, sysevts: &[Sysevt]) {
let mut bitfield = BitField64::new(NUM_SYSEVTS);
self.sysevt_enable = sysevts.iter()
.map(|&sysevt| {
assert!(bitfield.try_set(sysevt as u8));
sysevt as u8
})
.collect();
}
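The duplicate check above relies on `BitField64::try_set`, which fails if the bit was already set. A self-contained sketch of that behavior (the free-standing `try_set` helper is a stand-in for the crate's private bitfield type):

```rust
// Sets bit `index` in a 64-bit field, returning false if it was already set.
// This mirrors the duplicate detection used by enable_sysevts.
fn try_set(bits: &mut u64, index: u8) -> bool {
    let mask = 1u64 << index;
    if *bits & mask != 0 {
        return false;
    }
    *bits |= mask;
    true
}

fn main() {
    let mut bits = 0u64;
    assert!(try_set(&mut bits, 17)); // first enable of sysevt 17 succeeds
    assert!(try_set(&mut bits, 18));
    assert!(!try_set(&mut bits, 17)); // enabling sysevt 17 twice is rejected
}
```

In `enable_sysevts` the failed `try_set` is turned into a panic via `assert!`, which is what the documented panic condition refers to.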
/// Enables the specified host interrupts.
///
/// # Panics
///
/// This will panic if a host interrupt is enabled several times.
pub fn enable_hosts(&mut self, hosts: &[Host]) {
let mut bitfield = BitField32::new(NUM_HOSTS);
self.host_enable = hosts.iter()
.map(|&host| {
assert!(bitfield.try_set(host as u8));
host as u8
})
.collect()
}
/// Automatically enables system events that are already assigned to a channel.
pub fn auto_enable_sysevts(&mut self) {
self.sysevt_enable = self.sysevt_to_channel_map
.iter()
.map(|sysevt_to_channel| sysevt_to_channel.sysevt)
.collect();
}
/// Automatically enables host interrupts that are already mapped to a channel.
pub fn auto_enable_hosts(& | {
assert!(position >= self.from && position <= self.to);
(MemSegment {
base: self.base,
from: self.from,
to: position,
_memory_marker: PhantomData,
},
MemSegment {
base: self.base,
from: position,
to: self.to,
_memory_marker: PhantomData,
})
} | identifier_body |
lib.rs | /// The interrupt controller is initialized with the provided mapping.
pub fn new(intc_config: &IntcConfig) -> Result<Pruss<'a>> {
// Enforce singleton instantiation.
if PRUSS_IS_INSTANTIATED.swap(true, Ordering::Acquire) {
return Err(Error::AlreadyInstantiated);
}
// Handy function to read the size of system devices.
fn memsize(path: &str) -> io::Result<usize> {
let mut f = try!(File::open(path));
let mut buffer = String::new();
try!(f.read_to_string(&mut buffer));
Ok(usize::from_str_radix(&buffer[2..].trim(), 16).unwrap())
};
// Create memory mapped devices.
let file = try!(SyncFile::new(PRUSS_DEVICE_PATH));
let prumem_size = try!(memsize(UIO_PRUMEM_SIZE_PATH));
let hostmem_size = try!(memsize(UIO_HOSTMEM_SIZE_PATH));
let prumap = try!(MemMap::new(file.fd, prumem_size, 0));
let hostmap = try!(MemMap::new(file.fd, hostmem_size, 1));
// Create and initialize the interrupt controller.
let mut intc = Intc::new(unsafe { prumap.base.offset(INTC_OFFSET as isize) as *mut u32 });
intc.map_interrupts(intc_config);
// Create the PRU code loaders.
let pru0 =
PruLoader::new(unsafe { prumap.base.offset(PRU0CTRL_OFFSET as isize) as *mut u32 },
unsafe { prumap.base.offset(IRAM0_OFFSET as isize) },
IRAM0_SIZE);
let pru1 =
PruLoader::new(unsafe { prumap.base.offset(PRU1CTRL_OFFSET as isize) as *mut u32 },
unsafe { prumap.base.offset(IRAM1_OFFSET as isize) },
IRAM1_SIZE);
// Create memory views.
let dram0 = MemSegment::new(prumap.base, DRAM0_OFFSET, DRAM0_OFFSET + DRAM0_SIZE);
let dram1 = MemSegment::new(prumap.base, DRAM1_OFFSET, DRAM1_OFFSET + DRAM1_SIZE);
let dram2 = MemSegment::new(prumap.base, DRAM2_OFFSET, DRAM2_OFFSET + DRAM2_SIZE);
let hostram = MemSegment::new(hostmap.base, 0, hostmem_size);
// Voila.
Ok(Pruss {
_prumap: prumap,
_hostmap: hostmap,
intc: intc,
pru0: pru0,
pru1: pru1,
dram0: dram0,
dram1: dram1,
dram2: dram2,
hostram: hostram,
})
}
}
impl<'a> Drop for Pruss<'a> {
fn drop(&mut self) {
// Stop instruction executions in both PRUs
self.pru0.reset();
self.pru1.reset();
// Allow another PRU subsystem context to be instantiated.
PRUSS_IS_INSTANTIATED.store(false, Ordering::Release);
}
}
unsafe impl<'a> Send for Pruss<'a> {}
unsafe impl<'a> Sync for Pruss<'a> {}
/// The PRU interrupt controller.
pub struct Intc {
intc_reg: *mut u32,
}
impl Intc {
/// Creates a driver context with sane interrupt mapping defaults.
fn new(intc_reg: *mut u32) -> Self {
Intc { intc_reg: intc_reg }
}
/// Maps PRU interrupts according to the provided configuration.
pub fn map_interrupts(&mut self, interrupts: &IntcConfig) {
unsafe {
// Set the polarity of system interrupts to high.
ptr::write_volatile(self.intc_reg.offset(SIPR1_REG), 0xffffffff);
ptr::write_volatile(self.intc_reg.offset(SIPR2_REG), 0xffffffff);
// Clear all channel map registers and assign system events to channels.
for cmrx in 0..NUM_CMRX {
ptr::write_volatile(self.intc_reg.offset(CMR_REG + cmrx), 0);
}
for m in &interrupts.sysevt_to_channel_map {
let cmrx = (m.sysevt >> 2) as isize;
debug_assert!(cmrx < NUM_CMRX);
let val = ptr::read_volatile(self.intc_reg.offset(CMR_REG + cmrx));
ptr::write_volatile(self.intc_reg.offset(CMR_REG + cmrx),
val | (m.channel as u32) << ((m.sysevt as u32 & 0b11) * 8));
}
// Clear all host map registers and assign channels to hosts.
for hmrx in 0..NUM_HMRX {
ptr::write_volatile(self.intc_reg.offset(HMR_REG + hmrx), 0);
}
for m in &interrupts.channel_to_host_map {
let hmrx = (m.channel >> 2) as isize;
debug_assert!(hmrx < NUM_HMRX);
let val = ptr::read_volatile(self.intc_reg.offset(HMR_REG + hmrx));
ptr::write_volatile(self.intc_reg.offset(HMR_REG + hmrx),
val | (m.host as u32) << ((m.channel as u32 & 0b11) * 8));
}
// Set the type of system interrupts to pulse.
ptr::write_volatile(self.intc_reg.offset(SITR1_REG), 0x0);
ptr::write_volatile(self.intc_reg.offset(SITR2_REG), 0x0);
// Enable and clear system events.
let (mut mask1, mut mask2) = (0u32, 0u32);
for se in &interrupts.sysevt_enable {
match *se {
0...31 => mask1 |= 1u32 << se,
32...63 => mask2 |= 1u32 << (se - 32),
_ => unreachable!(),
};
}
ptr::write_volatile(self.intc_reg.offset(ESR1_REG), mask1);
ptr::write_volatile(self.intc_reg.offset(SECR1_REG), mask1);
ptr::write_volatile(self.intc_reg.offset(ESR2_REG), mask2);
ptr::write_volatile(self.intc_reg.offset(SECR2_REG), mask2);
// Enable host interrupts.
for h in &interrupts.host_enable {
ptr::write_volatile(self.intc_reg.offset(HIEISR_REG), *h as u32);
}
ptr::write_volatile(self.intc_reg.offset(GER_REG), 0x1);
}
}
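The channel map registers pack four one-byte channel assignments per 32-bit word, which is what the `cmrx` index and the `(sysevt & 0b11) * 8` shift compute above. A standalone sketch of the packing arithmetic, using an ordinary array instead of volatile register writes (the register count and the mapping values are illustrative):

```rust
// Packs a sysevt-to-channel assignment into its CMR word: register index is
// sysevt / 4 and the byte offset within the register is (sysevt % 4) * 8.
fn pack_cmr(cmr: &mut [u32], sysevt: u8, channel: u8) {
    let cmrx = (sysevt >> 2) as usize;
    cmr[cmrx] |= (channel as u32) << ((sysevt as u32 & 0b11) * 8);
}

fn main() {
    let mut cmr = [0u32; 16];
    // Three of the default mappings: S17 -> C1, S18 -> C0, S19 -> C2.
    pack_cmr(&mut cmr, 17, 1);
    pack_cmr(&mut cmr, 18, 0);
    pack_cmr(&mut cmr, 19, 2);
    // All three land in CMR4: sysevt 17 in bits 8..16, 18 in 16..24, 19 in 24..32.
    assert_eq!(cmr[4], (1u32 << 8) | (2u32 << 24));
}
```

The host map registers follow the same four-entries-per-word layout, with `channel >> 2` selecting the HMR word instead.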
/// Triggers a system event.
pub fn send_sysevt(&self, sysevt: Sysevt) {
unsafe {
match sysevt as u8 {
se @ 0...31 => ptr::write_volatile(self.intc_reg.offset(SRSR1_REG),
1u32 << se),
se @ 32...63 => ptr::write_volatile(self.intc_reg.offset(SRSR2_REG),
1u32 << (se - 32)),
_ => unreachable!(),
};
} |
/// Clears a system event.
pub fn clear_sysevt(&self, sysevt: Sysevt) {
unsafe {
ptr::write_volatile(self.intc_reg.offset(SICR_REG), sysevt as u32);
}
}
/// Enables a system event.
pub fn enable_sysevt(&self, sysevt: Sysevt) {
unsafe {
ptr::write_volatile(self.intc_reg.offset(EISR_REG), sysevt as u32 );
}
}
/// Disables a system event.
pub fn disable_sysevt(&self, sysevt: Sysevt) {
unsafe {
ptr::write_volatile(self.intc_reg.offset(EICR_REG), sysevt as u32 );
}
}
/// Enables or re-enables a host interrupt.
///
/// Beware: calling this function before the triggering system event was cleared will trigger
/// the host interrupt again.
pub fn enable_host<T: Into<Host>>(&self, host: T) {
let host: Host = host.into();
unsafe {
ptr::write_volatile(self.intc_reg.offset(HIEISR_REG), host as u32 );
}
}
/// Disables a host interrupt.
pub fn disable_host<T: Into<Host>>(&self, host: T) {
let host: Host = host.into();
unsafe {
ptr::write_volatile(self.intc_reg.offset(HIDISR_REG), host as u32 );
}
}
/// Returns a synchronization primitive for event out host interrupts.
///
/// Important: this function should be called before any corresponding event out is triggered.
///
/// # Panics
///
/// This function should not panic provided that the uio_pruss kernel module is loaded, which
/// is theoretically guaranteed at this point since `Pruss` could not have been created
/// otherwise.
pub fn register_irq(&self, e: Evtout) -> EvtoutIrq {
EvtoutIrq::new(e)
}
}
/// PRU instruction code loader.
pub struct PruLoader {
pructrl_reg: *mut u32,
iram_base: *mut u8,
iram_size: usize,
}
impl PruLoader {
fn new(pructrl_reg: *mut u32, iram_base: *mut u8, iram_size: usize) -> PruLoader {
PruLoader {
pructrl_reg: pructrl_reg,
iram_base: iram_base,
iram_size: iram_size,
}
}
/// Loads a binary of opcodes to the PRU without executing it.
///
/// This function proceeds as follows:
///
/// * a soft PRU reset is forced,
/// * the code is written to the PRU instruction RAM.
///
/// The code can be subsequently started and stopped using the returned `PruCode` handle.
///
/// # Errors
///
/// IO errors that may occur while reading the buffer are forwarded.
/// If the buffer cannot be read entirely because the code does not fit into the instruction
/// RAM, an error of the kind `ErrorKind::InvalidInput` is returned.
pub fn load_code<R: Read>(&mut self, code: &mut R) -> io::Result<PruCode> {
// Invoke a soft reset of the PRU to make sure no code is currently running.
self.reset();
// Write the code to the instruction RAM.
let n: usize = try!(code.read( unsafe {
std::slice::from_raw_parts_mut(self.iram_base, self.iram_size)
}));
// Make sure the whole buffer was read, otherwise return an InvalidInput error kind.
match n {
0 => {
Err(io::Error::new(io::ErrorKind::InvalidInput,
"size of PRU code exceeding instruction RAM capacity"))
}
_ => {
// Introduce a fence to ensure that IRAM writes are not reordered past the
// call to PruCode::run().
// Does it actually work? Who knows, we did what we could.
compiler_fence(Ordering::Release);
Ok(PruCode::new(self.pructrl_reg))
}
}
}
/// Resets the PRU.
///
/// Invokes a soft reset by clearing the PRU control register.
fn reset(&mut self) {
unsafe {
ptr::write_volatile(self.pructrl_reg, 0);
}
}
}
/// View of a contiguous memory segment.
///
/// The design of MemSegment is meant to allow allocation at arbitrary addresses while preventing
/// memory aliasing. This is achieved by allowing segments to be recursively split and by
/// borrowing segments upon object allocation, thus preventing further splitting and allocation
/// until the allocated object goes out of scope. For this reason, segments are neither copyable
/// nor clonable.
pub struct MemSegment<'a> {
// It is necessary to keep the `from` index rather than offset the `base` pointer because
// alignment must be checked when allocating memory for arbitrary types.
base: *mut u8,
from: usize,
to: usize,
_memory_marker: PhantomData<&'a [u8]>,
}
impl<'a> MemSegment<'a> {
fn new<'b>(base: *mut u8, from: usize, to: usize) -> MemSegment<'b> {
MemSegment {
base: base,
from: from,
to: to,
_memory_marker: PhantomData,
}
}
/// Allocates an object at the beginning of the segment.
///
/// # Panics
///
/// This function will panic if the beginning of the segment is not properly aligned
/// for type T or if the size of T exceeds its capacity.
#[inline]
pub fn alloc<T: Copy>(&mut self, source: T) -> &mut T {
let target: &mut T = unsafe { self.alloc_uninitialized() };
*target = source;
target
}
/// Allocates an object at the beginning of the segment without initializing it.
///
/// This can save some unnecessary initialization if the PRU is going to initialize the
/// memory anyway before it is read by the host. In some cases, it can also be used to
/// avoid trashing the stack with a large temporary initialization object if for some
/// reason the compiler cannot inline the call to `alloc`.
///
/// # Undefined Behavior
///
/// Reading an uninitialized object is undefined behavior (even for Copy types).
///
/// # Panics
///
/// This function will panic if the beginning of the segment is not properly aligned
/// for type T or if the size of T exceeds its capacity.
pub unsafe fn alloc_uninitialized<T: Copy>(&mut self) -> &mut T {
// Make sure the beginning of the memory region is properly aligned for type T.
assert!(self.from % mem::align_of::<T>() == 0);
// Make sure the region is large enough to hold type T.
assert!(self.to - self.from >= mem::size_of::<T>());
&mut *(self.base.offset(self.from as isize) as *mut T)
}
/// Position at which the segment starts (in bytes).
pub fn begin(&self) -> usize {
self.from
}
/// Position at which the segment ends (in bytes).
pub fn end(&self) -> usize {
self.to
}
/// Splits the memory segment into two at the given byte position.
///
/// Note that positions (addresses) are absolute and remain valid after the splitting
/// operation. If for instance a segment is split at 0x00001000, the `begin` method of
/// the second segment thus created will return 0x00001000 and not 0x00000000.
pub fn split_at(&mut self, position: usize) -> (MemSegment, MemSegment) {
assert!(position >= self.from && position <= self.to);
(MemSegment {
base: self.base,
from: self.from,
to: position,
_memory_marker: PhantomData,
},
MemSegment {
base: self.base,
from: position,
to: self.to,
_memory_marker: PhantomData,
})
}
}
unsafe impl<'a> Send for MemSegment<'a> {}
unsafe impl<'a> Sync for MemSegment<'a> {}
/// PRU interrupt controller configuration.
///
/// A call to the `new_populated` method automatically initializes the data with the same defaults
/// as the PRUSS_INTC_INITDATA macro of the C prussdrv library. Alternatively, a blank-state
/// initialization data structure can be created with `new_empty` and then populated with the
/// dedicated methods.
#[derive(Clone)]
pub struct IntcConfig {
sysevt_to_channel_map: Vec<SysevtToChannel>,
channel_to_host_map: Vec<ChannelToHost>,
sysevt_enable: Vec<u8>,
host_enable: Vec<u8>,
}
impl IntcConfig {
/// Constructs an empty PRU interrupt controller configuration.
pub fn new_empty() -> IntcConfig {
IntcConfig {
sysevt_to_channel_map: Vec::new(),
channel_to_host_map: Vec::new(),
sysevt_enable: Vec::new(),
host_enable: Vec::new(),
}
}
/// Constructs a PRU interrupt controller configuration with a default mapping.
///
/// The mapping reflects the one defined in the `PRUSS_INTC_INITDATA` C macro of the C
/// prussdrv library, namely:
///
/// * it maps:
/// - `Sysevt::S17` to `Channel::C1`,
/// - `Sysevt::S18` to `Channel::C0`,
/// - `Sysevt::S19` to `Channel::C2`,
/// - `Sysevt::S20` to `Channel::C3`,
/// - `Sysevt::S21` to `Channel::C0`,
/// - `Sysevt::S22` to `Channel::C1`,
///
/// * it maps:
/// - `Channel::C0` to `Host::Pru0`,
/// - `Channel::C1` to `Host::Pru1`,
/// - `Channel::C2` to `Host::Evtout0`,
/// - `Channel::C3` to `Host::Evtout1`,
///
/// * it enables:
/// - `Sysevt::S17`,
/// - `Sysevt::S18`,
/// - `Sysevt::S19`,
/// - `Sysevt::S20`,
/// - `Sysevt::S21`,
/// - `Sysevt::S22`,
///
/// * it enables:
/// - `Host::Pru0`,
/// - `Host::Pru1`,
/// - `Host::Evtout0`,
/// - `Host::Evtout1`
///
pub fn new_populated() -> IntcConfig {
let mut config_data = Self::new_empty();
config_data.map_sysevts_to_channels(&[(Sysevt::S17, Channel::C1),
(Sysevt::S18, Channel::C0),
(Sysevt::S19, Channel::C2),
(Sysevt::S20, Channel::C3),
(Sysevt::S21, Channel::C0),
(Sysevt::S22, Channel::C1)]);
config_data.map_channels_to_hosts(&[(Channel::C0, Host::Pru0),
(Channel::C1, Host::Pru1),
(Channel::C2, Host::Evtout0),
(Channel::C3, Host::Evtout1)]);
config_data.auto_enable_sysevts();
config_data.auto_enable_hosts();
config_data
}
/// Enables the specified system events.
///
/// # Panics
///
/// This will panic if a system event is enabled several times.
pub fn enable_sysevts(&mut self, sysevts: &[Sysevt]) {
let mut bitfield = BitField64::new(NUM_SYSEVTS);
self.sysevt_enable = sysevts.iter()
.map(|&sysevt| {
assert!(bitfield.try_set(sysevt as u8));
sysevt as u8
})
.collect();
}
/// Enables the specified host interrupts.
///
/// # Panics
///
/// This will panic if a host interrupt is enabled several times.
pub fn enable_hosts(&mut self, hosts: &[Host]) {
let mut bitfield = BitField32::new(NUM_HOSTS);
self.host_enable = hosts.iter()
.map(|&host| {
assert!(bitfield.try_set(host as u8));
host as u8
})
.collect()
}
/// Automatically enables system events that are already assigned to a channel.
pub fn auto_enable_sysevts(&mut self) {
self.sysevt_enable = self.sysevt_to_channel_map
.iter()
.map(|sysevt_to_channel| sysevt_to_channel.sysevt)
.collect();
}
/// Automatically enables host interrupts that are already mapped to a channel.
pub fn auto_enable_hosts(&mut self | } | random_line_split |
lib.rs | 1_SIZE);
// Create memory views.
let dram0 = MemSegment::new(prumap.base, DRAM0_OFFSET, DRAM0_OFFSET + DRAM0_SIZE);
let dram1 = MemSegment::new(prumap.base, DRAM1_OFFSET, DRAM1_OFFSET + DRAM1_SIZE);
let dram2 = MemSegment::new(prumap.base, DRAM2_OFFSET, DRAM2_OFFSET + DRAM2_SIZE);
let hostram = MemSegment::new(hostmap.base, 0, hostmem_size);
// Voila.
Ok(Pruss {
_prumap: prumap,
_hostmap: hostmap,
intc: intc,
pru0: pru0,
pru1: pru1,
dram0: dram0,
dram1: dram1,
dram2: dram2,
hostram: hostram,
})
}
}
impl<'a> Drop for Pruss<'a> {
fn drop(&mut self) {
// Stop instruction executions in both PRUs
self.pru0.reset();
self.pru1.reset();
// Allow another PRU subsystem context to be instantiated.
PRUSS_IS_INSTANTIATED.store(false, Ordering::Release);
}
}
unsafe impl<'a> Send for Pruss<'a> {}
unsafe impl<'a> Sync for Pruss<'a> {}
/// The PRU interrupt controller.
pub struct Intc {
intc_reg: *mut u32,
}
impl Intc {
/// Creates a driver context with sane interrupt mapping defaults.
fn new(intc_reg: *mut u32) -> Self {
Intc { intc_reg: intc_reg }
}
/// Maps PRU interrupts according to the provided configuration.
pub fn map_interrupts(&mut self, interrupts: &IntcConfig) {
unsafe {
// Set the polarity of system interrupts to high.
ptr::write_volatile(self.intc_reg.offset(SIPR1_REG), 0xffffffff);
ptr::write_volatile(self.intc_reg.offset(SIPR2_REG), 0xffffffff);
// Clear all channel map registers and assign system events to channels.
for cmrx in 0..NUM_CMRX {
ptr::write_volatile(self.intc_reg.offset(CMR_REG + cmrx), 0);
}
for m in &interrupts.sysevt_to_channel_map {
let cmrx = (m.sysevt >> 2) as isize;
debug_assert!(cmrx < NUM_CMRX);
let val = ptr::read_volatile(self.intc_reg.offset(CMR_REG + cmrx));
ptr::write_volatile(self.intc_reg.offset(CMR_REG + cmrx),
val | (m.channel as u32) << ((m.sysevt as u32 & 0b11) * 8));
}
// Clear all host map registers and assign channels to hosts.
for hmrx in 0..NUM_HMRX {
ptr::write_volatile(self.intc_reg.offset(HMR_REG + hmrx), 0);
}
for m in &interrupts.channel_to_host_map {
let hmrx = (m.channel >> 2) as isize;
debug_assert!(hmrx < NUM_HMRX);
let val = ptr::read_volatile(self.intc_reg.offset(HMR_REG + hmrx));
ptr::write_volatile(self.intc_reg.offset(HMR_REG + hmrx),
val | (m.host as u32) << ((m.channel as u32 & 0b11) * 8));
}
// Set the type of system interrupts to pulse.
ptr::write_volatile(self.intc_reg.offset(SITR1_REG), 0x0);
ptr::write_volatile(self.intc_reg.offset(SITR2_REG), 0x0);
// Enable and clear system events.
let (mut mask1, mut mask2) = (0u32, 0u32);
for se in &interrupts.sysevt_enable {
match *se {
0...31 => mask1 |= 1u32 << se,
32...63 => mask2 |= 1u32 << (se - 32),
_ => unreachable!(),
};
}
ptr::write_volatile(self.intc_reg.offset(ESR1_REG), mask1);
ptr::write_volatile(self.intc_reg.offset(SECR1_REG), mask1);
ptr::write_volatile(self.intc_reg.offset(ESR2_REG), mask2);
ptr::write_volatile(self.intc_reg.offset(SECR2_REG), mask2);
// Enable host interrupts.
for h in &interrupts.host_enable {
ptr::write_volatile(self.intc_reg.offset(HIEISR_REG), *h as u32);
}
ptr::write_volatile(self.intc_reg.offset(GER_REG), 0x1);
}
}
/// Triggers a system event.
pub fn send_sysevt(&self, sysevt: Sysevt) {
unsafe {
match sysevt as u8 {
se @ 0...31 => ptr::write_volatile(self.intc_reg.offset(SRSR1_REG),
1u32 << se),
se @ 32...63 => ptr::write_volatile(self.intc_reg.offset(SRSR2_REG),
1u32 << (se - 32)),
_ => unreachable!(),
};
}
}
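The register selection in `send_sysevt` splits the 64 system events across two 32-bit raise registers: events 0-31 go through SRSR1 and events 32-63 through SRSR2. A minimal sketch of that bit arithmetic, returning a register number instead of performing the volatile write (the `srsr_slot` helper is invented for the example):

```rust
// Computes which raise register (1 or 2) and which bit a system event uses,
// mirroring the dispatch in send_sysevt.
fn srsr_slot(sysevt: u8) -> (usize, u32) {
    match sysevt {
        0..=31 => (1, 1u32 << sysevt),
        32..=63 => (2, 1u32 << (sysevt - 32)),
        _ => unreachable!(),
    }
}

fn main() {
    // Sysevt 21 (one of the default-enabled events) is raised via SRSR1.
    assert_eq!(srsr_slot(21), (1, 1 << 21));
    // Sysevt 33 falls into SRSR2, where bit 0 corresponds to event 32.
    assert_eq!(srsr_slot(33), (2, 1 << 1));
}
```

The clear path is simpler because SICR takes the event number directly rather than a bit mask.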
/// Clears a system event.
pub fn clear_sysevt(&self, sysevt: Sysevt) {
unsafe {
ptr::write_volatile(self.intc_reg.offset(SICR_REG), sysevt as u32);
}
}
/// Enables a system event.
pub fn enable_sysevt(&self, sysevt: Sysevt) {
unsafe {
ptr::write_volatile(self.intc_reg.offset(EISR_REG), sysevt as u32 );
}
}
/// Disables a system event.
pub fn disable_sysevt(&self, sysevt: Sysevt) {
unsafe {
ptr::write_volatile(self.intc_reg.offset(EICR_REG), sysevt as u32 );
}
}
/// Enables or re-enables a host interrupt.
///
/// Beware: calling this function before the triggering system event was cleared will trigger
/// the host interrupt again.
pub fn enable_host<T: Into<Host>>(&self, host: T) {
let host: Host = host.into();
unsafe {
ptr::write_volatile(self.intc_reg.offset(HIEISR_REG), host as u32 );
}
}
/// Disables a host interrupt.
pub fn disable_host<T: Into<Host>>(&self, host: T) {
let host: Host = host.into();
unsafe {
ptr::write_volatile(self.intc_reg.offset(HIDISR_REG), host as u32 );
}
}
/// Returns a synchronization primitive for event out host interrupts.
///
/// Important: this function should be called before any corresponding event out is triggered.
///
/// # Panics
///
/// This function should not panic provided that the uio_pruss kernel module is loaded, which
/// is theoretically guaranteed at this point since `Pruss` could not have been created
/// otherwise.
pub fn register_irq(&self, e: Evtout) -> EvtoutIrq {
EvtoutIrq::new(e)
}
}
/// PRU instruction code loader.
pub struct PruLoader {
pructrl_reg: *mut u32,
iram_base: *mut u8,
iram_size: usize,
}
impl PruLoader {
fn new(pructrl_reg: *mut u32, iram_base: *mut u8, iram_size: usize) -> PruLoader {
PruLoader {
pructrl_reg: pructrl_reg,
iram_base: iram_base,
iram_size: iram_size,
}
}
/// Loads a binary of opcodes to the PRU without executing it.
///
/// This function proceeds as follows:
///
/// * a soft PRU reset is forced,
/// * the code is written to the PRU instruction RAM.
///
/// The code can be subsequently started and stopped using the returned `PruCode` handle.
///
/// # Errors
///
/// IO errors that may occur while reading the buffer are forwarded.
/// If the buffer cannot be read entirely because the code does not fit into the instruction
/// RAM, an error of the kind `ErrorKind::InvalidInput` is returned.
pub fn load_code<R: Read>(&mut self, code: &mut R) -> io::Result<PruCode> {
// Invoke a soft reset of the PRU to make sure no code is currently running.
self.reset();
// Write the code to the instruction RAM.
let n: usize = try!(code.read( unsafe {
std::slice::from_raw_parts_mut(self.iram_base, self.iram_size)
}));
// Make sure the whole buffer was read, otherwise return an InvalidInput error kind.
match n {
0 => {
Err(io::Error::new(io::ErrorKind::InvalidInput,
"size of PRU code exceeding instruction RAM capacity"))
}
_ => {
// Introduce a fence to ensure that IRAM writes are not reordered past the
// call to PruCode::run().
// Does it actually work? Who knows, we did what we could.
compiler_fence(Ordering::Release);
Ok(PruCode::new(self.pructrl_reg))
}
}
}
/// Resets the PRU.
///
/// Invokes a soft reset by clearing the PRU control register.
fn reset(&mut self) {
unsafe {
ptr::write_volatile(self.pructrl_reg, 0);
}
}
}
/// View of a contiguous memory segment.
///
/// The design of MemSegment is meant to allow allocation at arbitrary addresses while preventing
/// memory aliasing. This is achieved by allowing segments to be recursively split and by
/// borrowing segments upon object allocation, thus preventing further splitting and allocation
/// until the allocated object goes out of scope. For this reason, segments are neither copyable
/// nor clonable.
pub struct MemSegment<'a> {
// It is necessary to keep the `from` index rather than offset the `base` pointer because
// alignment must be checked when allocating memory for arbitrary types.
base: *mut u8,
from: usize,
to: usize,
_memory_marker: PhantomData<&'a [u8]>,
}
impl<'a> MemSegment<'a> {
fn new<'b>(base: *mut u8, from: usize, to: usize) -> MemSegment<'b> {
MemSegment {
base: base,
from: from,
to: to,
_memory_marker: PhantomData,
}
}
/// Allocates an object at the beginning of the segment.
///
/// # Panics
///
/// This function will panic if the beginning of the segment is not properly aligned
/// for type T or if the size of T exceeds its capacity.
#[inline]
pub fn alloc<T: Copy>(&mut self, source: T) -> &mut T {
let target: &mut T = unsafe { self.alloc_uninitialized() };
*target = source;
target
}
/// Allocates an object at the beginning of the segment without initializing it.
///
/// This can save some unnecessary initialization if the PRU is going to initialize the
/// memory anyway before it is read by the host. In some cases, it can also be used to
/// avoid trashing the stack with a large temporary initialization object if for some
/// reason the compiler cannot inline the call to `alloc`.
///
/// # Undefined Behavior
///
/// Reading an uninitialized object is undefined behavior (even for Copy types).
///
/// # Panics
///
/// This function will panic if the beginning of the segment is not properly aligned
/// for type T or if the size of T exceeds its capacity.
pub unsafe fn alloc_uninitialized<T: Copy>(&mut self) -> &mut T {
// Make sure the beginning of the memory region is properly aligned for type T.
assert!(self.from % mem::align_of::<T>() == 0);
// Make sure the region is large enough to hold type T.
assert!(self.to - self.from >= mem::size_of::<T>());
&mut *(self.base.offset(self.from as isize) as *mut T)
}
/// Position at which the segment starts (in bytes).
pub fn begin(&self) -> usize {
self.from
}
/// Position at which the segment ends (in bytes).
pub fn end(&self) -> usize {
self.to
}
/// Splits the memory segment into two at the given byte position.
///
/// Note that positions (addresses) are absolute and remain valid after the splitting
/// operation. If for instance a segment is split at 0x00001000, the `begin` method of
/// the second segment thus created will return 0x00001000 and not 0x00000000.
pub fn split_at(&mut self, position: usize) -> (MemSegment, MemSegment) {
assert!(position >= self.from && position <= self.to);
(MemSegment {
base: self.base,
from: self.from,
to: position,
_memory_marker: PhantomData,
},
MemSegment {
base: self.base,
from: position,
to: self.to,
_memory_marker: PhantomData,
})
}
}
unsafe impl<'a> Send for MemSegment<'a> {}
unsafe impl<'a> Sync for MemSegment<'a> {}
/// PRU interrupt controller configuration.
///
/// A call to the `new_populated` method automatically initializes the data with the same defaults
/// as the PRUSS_INTC_INITDATA macro of the C prussdrv library. Alternatively, a blank-state
/// initialization data structure can be created with `new_empty` and then populated with the
/// dedicated methods.
#[derive(Clone)]
pub struct IntcConfig {
sysevt_to_channel_map: Vec<SysevtToChannel>,
channel_to_host_map: Vec<ChannelToHost>,
sysevt_enable: Vec<u8>,
host_enable: Vec<u8>,
}
impl IntcConfig {
/// Constructs an empty PRU interrupt controller configuration.
pub fn new_empty() -> IntcConfig {
IntcConfig {
sysevt_to_channel_map: Vec::new(),
channel_to_host_map: Vec::new(),
sysevt_enable: Vec::new(),
host_enable: Vec::new(),
}
}
/// Constructs a PRU interrupt controller configuration with a default mapping.
///
/// The mapping reflects the one defined in the `PRUSS_INTC_INITDATA` C macro of the C
/// prussdrv library, namely:
///
/// * it maps:
/// - `Sysevt::S17` to `Channel::C1`,
/// - `Sysevt::S18` to `Channel::C0`,
/// - `Sysevt::S19` to `Channel::C2`,
/// - `Sysevt::S20` to `Channel::C3`,
/// - `Sysevt::S21` to `Channel::C0`,
/// - `Sysevt::S22` to `Channel::C1`,
///
/// * it maps:
/// - `Channel::C0` to `Host::Pru0`,
/// - `Channel::C1` to `Host::Pru1`,
/// - `Channel::C2` to `Host::Evtout0`,
/// - `Channel::C3` to `Host::Evtout1`,
///
/// * it enables:
/// - `Sysevt::S17`,
/// - `Sysevt::S18`,
/// - `Sysevt::S19`,
/// - `Sysevt::S20`,
/// - `Sysevt::S21`,
/// - `Sysevt::S22`,
///
/// * it enables:
/// - `Host::Pru0`,
/// - `Host::Pru1`,
/// - `Host::Evtout0`,
/// - `Host::Evtout1`
///
pub fn new_populated() -> IntcConfig {
let mut config_data = Self::new_empty();
config_data.map_sysevts_to_channels(&[(Sysevt::S17, Channel::C1),
(Sysevt::S18, Channel::C0),
(Sysevt::S19, Channel::C2),
(Sysevt::S20, Channel::C3),
(Sysevt::S21, Channel::C0),
(Sysevt::S22, Channel::C1)]);
config_data.map_channels_to_hosts(&[(Channel::C0, Host::Pru0),
(Channel::C1, Host::Pru1),
(Channel::C2, Host::Evtout0),
(Channel::C3, Host::Evtout1)]);
config_data.auto_enable_sysevts();
config_data.auto_enable_hosts();
config_data
}
/// Enables the specified system events.
///
/// # Panics
///
/// This will panic if a system event is enabled several times.
pub fn enable_sysevts(&mut self, sysevts: &[Sysevt]) {
let mut bitfield = BitField64::new(NUM_SYSEVTS);
self.sysevt_enable = sysevts.iter()
.map(|&sysevt| {
assert!(bitfield.try_set(sysevt as u8));
sysevt as u8
})
.collect();
}
/// Enables the specified host interrupts.
///
/// # Panics
///
/// This will panic if a host interrupt is enabled several times.
pub fn enable_hosts(&mut self, hosts: &[Host]) {
let mut bitfield = BitField32::new(NUM_HOSTS);
self.host_enable = hosts.iter()
.map(|&host| {
assert!(bitfield.try_set(host as u8));
host as u8
})
.collect()
}
/// Automatically enables system events that are already assigned to a channel.
pub fn auto_enable_sysevts(&mut self) {
self.sysevt_enable = self.sysevt_to_channel_map
.iter()
.map(|sysevt_to_channel| sysevt_to_channel.sysevt)
.collect();
}
/// Automatically enables host interrupts that are already mapped to a channel.
pub fn auto_enable_hosts(&mut self) {
self.host_enable = self.channel_to_host_map
.iter()
.map(|channel_to_host| channel_to_host.host)
.collect()
}
/// Assigns system events to channels.
///
/// A channel can be targeted by several events but an event can be mapped to only one channel.
///
/// # Panics
///
/// This will panic if a system event is mapped to several channels simultaneously.
pub fn map_sysevts_to_channels(&mut self, scmap: &[(Sysevt, Channel)]) {
let mut bitfield = BitField64::new(NUM_SYSEVTS);
self.sysevt_to_channel_map = scmap.iter()
.map(|&(s, c)| {
assert!(bitfield.try_set(s as u8));
SysevtToChannel {
sysevt: s as u8,
channel: c as u8,
}
})
.collect();
}
/// Assigns channel numbers to host interrupts.
///
/// A host interrupt can be targeted by several channels but a channel can be mapped to only
/// one host.
///
/// # Panics
///
/// This will panic if a channel is mapped to several hosts.
pub fn map_channels_to_hosts(&mut self, chmap: &[(Channel, Host)]) {
let mut bitfield = BitField32::new(NUM_CHANNELS);
self.channel_to_host_map = chmap.iter()
.map(|&(c, h)| {
assert!(bitfield.try_set(c as u8));
ChannelToHost {
channel: c as u8,
host: h as u8,
}
})
.collect();
}
}
/// Synchronization primitive that can be used to wait for an event out.
pub struct | EvtoutIrq | identifier_name |
|
nf.rs | use e2d2::headers::*;
use e2d2::operators::*;
#[inline]
fn lat() {
unsafe {
asm!("nop"
:
:
:
: "volatile");
}
}
#[inline]
fn delay_loop(delay: u64) {
let mut d = 0;
while d < delay {
lat();
d += 1;
} | pub fn delay<T: 'static + Batch<Header = NullHeader>>(
parent: T,
delay: u64,
) -> MapBatch<NullHeader, ResetParsingBatch<TransformBatch<MacHeader, ParsedBatch<MacHeader, T>>>> {
parent
.parse::<MacHeader>()
.transform(box move |pkt| {
assert!(pkt.refcnt() == 1);
let hdr = pkt.get_mut_header();
hdr.swap_addresses();
delay_loop(delay);
})
.reset()
.map(box move |pkt| assert!(pkt.refcnt() == 1))
} | }
| random_line_split |
nf.rs | use e2d2::headers::*;
use e2d2::operators::*;
#[inline]
fn | () {
unsafe {
asm!("nop"
:
:
:
: "volatile");
}
}
#[inline]
fn delay_loop(delay: u64) {
let mut d = 0;
while d < delay {
lat();
d += 1;
}
}
pub fn delay<T: 'static + Batch<Header = NullHeader>>(
parent: T,
delay: u64,
) -> MapBatch<NullHeader, ResetParsingBatch<TransformBatch<MacHeader, ParsedBatch<MacHeader, T>>>> {
parent
.parse::<MacHeader>()
.transform(box move |pkt| {
assert!(pkt.refcnt() == 1);
let hdr = pkt.get_mut_header();
hdr.swap_addresses();
delay_loop(delay);
})
.reset()
.map(box move |pkt| assert!(pkt.refcnt() == 1))
}
| lat | identifier_name |
nf.rs | use e2d2::headers::*;
use e2d2::operators::*;
#[inline]
fn lat() |
#[inline]
fn delay_loop(delay: u64) {
let mut d = 0;
while d < delay {
lat();
d += 1;
}
}
pub fn delay<T: 'static + Batch<Header = NullHeader>>(
parent: T,
delay: u64,
) -> MapBatch<NullHeader, ResetParsingBatch<TransformBatch<MacHeader, ParsedBatch<MacHeader, T>>>> {
parent
.parse::<MacHeader>()
.transform(box move |pkt| {
assert!(pkt.refcnt() == 1);
let hdr = pkt.get_mut_header();
hdr.swap_addresses();
delay_loop(delay);
})
.reset()
.map(box move |pkt| assert!(pkt.refcnt() == 1))
}
| {
unsafe {
asm!("nop"
:
:
:
: "volatile");
}
} | identifier_body |
delete.rs | use nom::character::complete::multispace1;
use std::{fmt, str};
use common::{statement_terminator, schema_table_reference};
use condition::ConditionExpression;
use keywords::escape_if_keyword;
use nom::bytes::complete::tag_no_case;
use nom::combinator::opt;
use nom::sequence::{delimited, tuple};
use nom::IResult;
use select::where_clause;
use table::Table;
#[derive(Clone, Debug, Default, Eq, Hash, PartialEq, Serialize, Deserialize)]
pub struct DeleteStatement {
pub table: Table,
pub where_clause: Option<ConditionExpression>,
}
impl fmt::Display for DeleteStatement {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "DELETE FROM ")?;
write!(f, "{}", escape_if_keyword(&self.table.name))?;
if let Some(ref where_clause) = self.where_clause {
write!(f, " WHERE ")?;
write!(f, "{}", where_clause)?;
}
Ok(())
}
}
pub fn | (i: &[u8]) -> IResult<&[u8], DeleteStatement> {
let (remaining_input, (_, _, table, where_clause, _)) = tuple((
tag_no_case("delete"),
delimited(multispace1, tag_no_case("from"), multispace1),
schema_table_reference,
opt(where_clause),
statement_terminator,
))(i)?;
Ok((
remaining_input,
DeleteStatement {
table,
where_clause,
},
))
}
#[cfg(test)]
mod tests {
use super::*;
use column::Column;
use common::{Literal, Operator};
use condition::ConditionBase::*;
use condition::ConditionExpression::*;
use condition::ConditionTree;
use table::Table;
#[test]
fn simple_delete() {
let qstring = "DELETE FROM users;";
let res = deletion(qstring.as_bytes());
assert_eq!(
res.unwrap().1,
DeleteStatement {
table: Table::from("users"),
..Default::default()
}
);
}
#[test]
fn simple_delete_schema() {
let qstring = "DELETE FROM db1.users;";
let res = deletion(qstring.as_bytes());
assert_eq!(
res.unwrap().1,
DeleteStatement {
table: Table::from(("db1","users")),
..Default::default()
}
);
}
#[test]
fn delete_with_where_clause() {
let qstring = "DELETE FROM users WHERE id = 1;";
let res = deletion(qstring.as_bytes());
let expected_left = Base(Field(Column::from("id")));
let expected_where_cond = Some(ComparisonOp(ConditionTree {
left: Box::new(expected_left),
right: Box::new(Base(Literal(Literal::Integer(1)))),
operator: Operator::Equal,
}));
assert_eq!(
res.unwrap().1,
DeleteStatement {
table: Table::from("users"),
where_clause: expected_where_cond,
..Default::default()
}
);
}
#[test]
fn format_delete() {
let qstring = "DELETE FROM users WHERE id = 1";
let expected = "DELETE FROM users WHERE id = 1";
let res = deletion(qstring.as_bytes());
assert_eq!(format!("{}", res.unwrap().1), expected);
}
}
| deletion | identifier_name |
delete.rs | use nom::character::complete::multispace1;
use std::{fmt, str};
use common::{statement_terminator, schema_table_reference};
use condition::ConditionExpression;
use keywords::escape_if_keyword;
use nom::bytes::complete::tag_no_case;
use nom::combinator::opt;
use nom::sequence::{delimited, tuple};
use nom::IResult;
use select::where_clause;
use table::Table;
#[derive(Clone, Debug, Default, Eq, Hash, PartialEq, Serialize, Deserialize)]
pub struct DeleteStatement {
pub table: Table,
pub where_clause: Option<ConditionExpression>,
}
impl fmt::Display for DeleteStatement {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "DELETE FROM ")?;
write!(f, "{}", escape_if_keyword(&self.table.name))?;
if let Some(ref where_clause) = self.where_clause |
Ok(())
}
}
pub fn deletion(i: &[u8]) -> IResult<&[u8], DeleteStatement> {
let (remaining_input, (_, _, table, where_clause, _)) = tuple((
tag_no_case("delete"),
delimited(multispace1, tag_no_case("from"), multispace1),
schema_table_reference,
opt(where_clause),
statement_terminator,
))(i)?;
Ok((
remaining_input,
DeleteStatement {
table,
where_clause,
},
))
}
#[cfg(test)]
mod tests {
use super::*;
use column::Column;
use common::{Literal, Operator};
use condition::ConditionBase::*;
use condition::ConditionExpression::*;
use condition::ConditionTree;
use table::Table;
#[test]
fn simple_delete() {
let qstring = "DELETE FROM users;";
let res = deletion(qstring.as_bytes());
assert_eq!(
res.unwrap().1,
DeleteStatement {
table: Table::from("users"),
..Default::default()
}
);
}
#[test]
fn simple_delete_schema() {
let qstring = "DELETE FROM db1.users;";
let res = deletion(qstring.as_bytes());
assert_eq!(
res.unwrap().1,
DeleteStatement {
table: Table::from(("db1","users")),
..Default::default()
}
);
}
#[test]
fn delete_with_where_clause() {
let qstring = "DELETE FROM users WHERE id = 1;";
let res = deletion(qstring.as_bytes());
let expected_left = Base(Field(Column::from("id")));
let expected_where_cond = Some(ComparisonOp(ConditionTree {
left: Box::new(expected_left),
right: Box::new(Base(Literal(Literal::Integer(1)))),
operator: Operator::Equal,
}));
assert_eq!(
res.unwrap().1,
DeleteStatement {
table: Table::from("users"),
where_clause: expected_where_cond,
..Default::default()
}
);
}
#[test]
fn format_delete() {
let qstring = "DELETE FROM users WHERE id = 1";
let expected = "DELETE FROM users WHERE id = 1";
let res = deletion(qstring.as_bytes());
assert_eq!(format!("{}", res.unwrap().1), expected);
}
}
| {
write!(f, " WHERE ")?;
write!(f, "{}", where_clause)?;
} | conditional_block |
delete.rs | use nom::character::complete::multispace1;
use std::{fmt, str};
use common::{statement_terminator, schema_table_reference};
use condition::ConditionExpression;
use keywords::escape_if_keyword;
use nom::bytes::complete::tag_no_case;
use nom::combinator::opt;
use nom::sequence::{delimited, tuple};
use nom::IResult;
use select::where_clause;
use table::Table;
#[derive(Clone, Debug, Default, Eq, Hash, PartialEq, Serialize, Deserialize)]
pub struct DeleteStatement {
pub table: Table,
pub where_clause: Option<ConditionExpression>,
}
impl fmt::Display for DeleteStatement {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "DELETE FROM ")?;
write!(f, "{}", escape_if_keyword(&self.table.name))?;
if let Some(ref where_clause) = self.where_clause {
write!(f, " WHERE ")?;
write!(f, "{}", where_clause)?;
}
Ok(())
}
}
pub fn deletion(i: &[u8]) -> IResult<&[u8], DeleteStatement> {
let (remaining_input, (_, _, table, where_clause, _)) = tuple((
tag_no_case("delete"),
delimited(multispace1, tag_no_case("from"), multispace1),
schema_table_reference,
opt(where_clause),
statement_terminator,
))(i)?;
Ok((
remaining_input,
DeleteStatement {
table,
where_clause,
},
))
}
#[cfg(test)]
mod tests {
use super::*;
use column::Column;
use common::{Literal, Operator};
use condition::ConditionBase::*;
use condition::ConditionExpression::*;
use condition::ConditionTree;
use table::Table;
#[test]
fn simple_delete() {
let qstring = "DELETE FROM users;";
let res = deletion(qstring.as_bytes());
assert_eq!(
res.unwrap().1,
DeleteStatement {
table: Table::from("users"),
..Default::default()
}
);
}
#[test]
fn simple_delete_schema() {
let qstring = "DELETE FROM db1.users;";
let res = deletion(qstring.as_bytes());
assert_eq!(
res.unwrap().1,
DeleteStatement {
table: Table::from(("db1","users")),
..Default::default()
}
);
}
#[test]
fn delete_with_where_clause() {
let qstring = "DELETE FROM users WHERE id = 1;";
let res = deletion(qstring.as_bytes()); | operator: Operator::Equal,
}));
assert_eq!(
res.unwrap().1,
DeleteStatement {
table: Table::from("users"),
where_clause: expected_where_cond,
..Default::default()
}
);
}
#[test]
fn format_delete() {
let qstring = "DELETE FROM users WHERE id = 1";
let expected = "DELETE FROM users WHERE id = 1";
let res = deletion(qstring.as_bytes());
assert_eq!(format!("{}", res.unwrap().1), expected);
}
} | let expected_left = Base(Field(Column::from("id")));
let expected_where_cond = Some(ComparisonOp(ConditionTree {
left: Box::new(expected_left),
right: Box::new(Base(Literal(Literal::Integer(1)))), | random_line_split |
delete.rs | use nom::character::complete::multispace1;
use std::{fmt, str};
use common::{statement_terminator, schema_table_reference};
use condition::ConditionExpression;
use keywords::escape_if_keyword;
use nom::bytes::complete::tag_no_case;
use nom::combinator::opt;
use nom::sequence::{delimited, tuple};
use nom::IResult;
use select::where_clause;
use table::Table;
#[derive(Clone, Debug, Default, Eq, Hash, PartialEq, Serialize, Deserialize)]
pub struct DeleteStatement {
pub table: Table,
pub where_clause: Option<ConditionExpression>,
}
impl fmt::Display for DeleteStatement {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result |
}
pub fn deletion(i: &[u8]) -> IResult<&[u8], DeleteStatement> {
let (remaining_input, (_, _, table, where_clause, _)) = tuple((
tag_no_case("delete"),
delimited(multispace1, tag_no_case("from"), multispace1),
schema_table_reference,
opt(where_clause),
statement_terminator,
))(i)?;
Ok((
remaining_input,
DeleteStatement {
table,
where_clause,
},
))
}
#[cfg(test)]
mod tests {
use super::*;
use column::Column;
use common::{Literal, Operator};
use condition::ConditionBase::*;
use condition::ConditionExpression::*;
use condition::ConditionTree;
use table::Table;
#[test]
fn simple_delete() {
let qstring = "DELETE FROM users;";
let res = deletion(qstring.as_bytes());
assert_eq!(
res.unwrap().1,
DeleteStatement {
table: Table::from("users"),
..Default::default()
}
);
}
#[test]
fn simple_delete_schema() {
let qstring = "DELETE FROM db1.users;";
let res = deletion(qstring.as_bytes());
assert_eq!(
res.unwrap().1,
DeleteStatement {
table: Table::from(("db1","users")),
..Default::default()
}
);
}
#[test]
fn delete_with_where_clause() {
let qstring = "DELETE FROM users WHERE id = 1;";
let res = deletion(qstring.as_bytes());
let expected_left = Base(Field(Column::from("id")));
let expected_where_cond = Some(ComparisonOp(ConditionTree {
left: Box::new(expected_left),
right: Box::new(Base(Literal(Literal::Integer(1)))),
operator: Operator::Equal,
}));
assert_eq!(
res.unwrap().1,
DeleteStatement {
table: Table::from("users"),
where_clause: expected_where_cond,
..Default::default()
}
);
}
#[test]
fn format_delete() {
let qstring = "DELETE FROM users WHERE id = 1";
let expected = "DELETE FROM users WHERE id = 1";
let res = deletion(qstring.as_bytes());
assert_eq!(format!("{}", res.unwrap().1), expected);
}
}
| {
write!(f, "DELETE FROM ")?;
write!(f, "{}", escape_if_keyword(&self.table.name))?;
if let Some(ref where_clause) = self.where_clause {
write!(f, " WHERE ")?;
write!(f, "{}", where_clause)?;
}
Ok(())
} | identifier_body |
job.rs | use std::collections::HashSet;
use std::net::SocketAddr;
use std::time;
use crate::control::cio;
use crate::torrent::Torrent;
use crate::util::UHashMap;
pub trait Job<T: cio::CIO> {
fn update(&mut self, torrents: &mut UHashMap<Torrent<T>>);
}
pub struct TrackerUpdate;
impl<T: cio::CIO> Job<T> for TrackerUpdate {
fn update(&mut self, torrents: &mut UHashMap<Torrent<T>>) {
for (_, torrent) in torrents.iter_mut() {
torrent.try_update_tracker();
}
}
}
pub struct UnchokeUpdate;
impl<T: cio::CIO> Job<T> for UnchokeUpdate {
fn update(&mut self, torrents: &mut UHashMap<Torrent<T>>) {
for (_, torrent) in torrents.iter_mut() {
torrent.update_unchoked();
}
}
}
pub struct SessionUpdate;
impl<T: cio::CIO> Job<T> for SessionUpdate {
fn update(&mut self, torrents: &mut UHashMap<Torrent<T>>) {
for (_, torrent) in torrents.iter_mut() {
if torrent.dirty() {
torrent.serialize();
}
}
}
}
pub struct TorrentTxUpdate {
piece_update: time::Instant,
active: UHashMap<bool>,
}
impl TorrentTxUpdate {
pub fn new() -> TorrentTxUpdate {
TorrentTxUpdate {
piece_update: time::Instant::now(),
active: UHashMap::default(),
}
}
}
impl<T: cio::CIO> Job<T> for TorrentTxUpdate {
fn update(&mut self, torrents: &mut UHashMap<Torrent<T>>) {
for (id, torrent) in torrents.iter_mut() {
let active = torrent.tick();
if active {
torrent.update_rpc_transfer();
torrent.update_rpc_peers();
// TODO: consider making tick triggered by on the fly validation
if self.piece_update.elapsed() > time::Duration::from_secs(30) |
}
if !torrent.complete() {
torrent.rank_peers();
}
if !self.active.contains_key(id) {
self.active.insert(*id, active);
}
let prev = self.active.get_mut(id).unwrap();
if *prev != active {
*prev = active;
torrent.announce_status();
}
}
self.active.retain(|id, _| torrents.contains_key(id));
}
}
pub struct PEXUpdate {
peers: UHashMap<HashSet<SocketAddr>>,
}
impl PEXUpdate {
pub fn new() -> PEXUpdate {
PEXUpdate {
peers: UHashMap::default(),
}
}
}
impl<T: cio::CIO> Job<T> for PEXUpdate {
fn update(&mut self, torrents: &mut UHashMap<Torrent<T>>) {
for (id, torrent) in torrents.iter_mut().filter(|&(_, ref t)| !t.info().private) {
if !self.peers.contains_key(id) {
self.peers.insert(*id, HashSet::new());
}
let (added, removed) = {
let peers: HashSet<_> = torrent.peers().values().map(|p| p.addr()).collect();
let prev = self.peers.get_mut(id).unwrap();
let mut add: Vec<_> = peers.difference(prev).cloned().collect();
let mut rem: Vec<_> = prev.difference(&peers).cloned().collect();
add.truncate(50);
rem.truncate(50 - add.len());
(add, rem)
};
torrent.update_pex(&added, &removed);
}
self.peers.retain(|id, _| torrents.contains_key(id));
}
}
| {
torrent.rpc_update_pieces();
self.piece_update = time::Instant::now();
} | conditional_block |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.