file_name | large_string, lengths 4–69
prefix | large_string, lengths 0–26.7k
suffix | large_string, lengths 0–24.8k
middle | large_string, lengths 0–2.12k
fim_type | large_string, 4 classes
values |
---|---|---|---|---|
lib.rs | /*!
* This library provides an API client for Diffbot.
*
* Making API requests
* -------------------
*
* There are a handful of different ways to make API calls:
*
* 1. The most basic way to make a request is with the ``call()`` function.
* Everything must be specified for each request.
*
* 2. Use the ``Diffbot`` struct to keep track of your token and API version
* and then use its ``.call()`` method to make API calls. This has the
* advantage that you can specify those things just once and they'll be
* retained.
*
* 3. Instead of making a request in one step, you can make it two steps with
* the ``prepare_request()`` function. This allows you to specify to Diffbot
* certain details of how *it* should make the request. That gives you a
* ``Request`` object.
*
* 4. In the same manner, if you have a ``Diffbot`` struct you can call the
* ``.prepare_request()`` method on it.
*
* Prepared requests
* -----------------
*
* If you use the ``prepare_request()`` function or method, you can tweak the
* request that will be sent to Diffbot. You can alter the User-Agent, Referer
* or Cookie headers that it will send and then call ``.call()`` to make the
* request, or you can call ``.post_body()`` to send the HTML yourself, if it
* is not publicly available to the wider Internet.
*
* Getting data out of the result
* ------------------------------
*
* At present, the successful return value of a request is simply a JSON object,
* a tree map. This *will* make it moderately difficult to work with, but if
* you're determined, it's possible. You'll end up with results like these:
*
* // First of all, you must, of course, have a response to work on.
* let mut response: TreeMap<~str, Json>
 * = diffbot::call(..., "article", ...).unwrap();
*
* // Get the title of the article
* let title = match response.pop(&~"title").unwrap() {
* json::String(s) => s,
* _ => unreachable!(),
* };
*
* // Get the URL of each image
* let image_urls: ~[Url] = match response.pop(&~"images").unwrap() {
* json::List(images) => images.move_iter().map(|image| match image {
* json::Object(~mut o) => {
* match o.pop(&~"url").unwrap() {
 * json::String(ref s) => from_str(s).unwrap(),
* _ => unreachable!(),
* }
* },
* _ => unreachable!(),
* }),
* _ => unreachable!(),
* }.collect();
*
* (Yep, I'll freely admit that these are clumsier than they might be in another
* language, which might allow something like this:
*
 * let response = ...;
*
* let title = response.title;
* let image_urls = [from_str(image.url) for image in response.images];
*
* In time we may get strongly typed interfaces which would be much nicer, but
* for now, you'd need to do that yourself. It can be done with the tools in
* ``extra::serialize``, by the way.)
*/
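The extraction pattern shown in the comment above can be sketched against a minimal stand-in for ``extra::json::Json`` in post-1.0 Rust (``TreeMap`` and ``~str`` correspond to today's ``BTreeMap`` and ``String``; the ``Json`` enum here is a hypothetical simplification, not the real library type):

```rust
use std::collections::BTreeMap;

// A toy stand-in for extra::json::Json, just enough for the example.
#[derive(Debug, Clone, PartialEq)]
enum Json {
    String(String),
    List(Vec<Json>),
    Object(BTreeMap<String, Json>),
}

// Pull the "title" field out of a response object, if it is a string.
fn take_title(response: &mut BTreeMap<String, Json>) -> Option<String> {
    match response.remove("title") {
        Some(Json::String(s)) => Some(s),
        _ => None,
    }
}

// Collect the "url" field of every object in the "images" list.
fn take_image_urls(response: &mut BTreeMap<String, Json>) -> Vec<String> {
    match response.remove("images") {
        Some(Json::List(images)) => images
            .into_iter()
            .filter_map(|image| match image {
                Json::Object(mut o) => match o.remove("url") {
                    Some(Json::String(s)) => Some(s),
                    _ => None,
                },
                _ => None,
            })
            .collect(),
        _ => Vec::new(),
    }
}
```

The shape is the same as the doc example: match on the variant you expect and pull the owned value out of the map.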
#[crate_id = "diffbot#1.0"];
#[crate_type = "dylib"];
#[crate_type = "rlib"];
#[doc(html_logo_url = "diffy-d.png",
html_favicon_url = "http://www.diffbot.com/favicon.ico")];
extern mod extra = "extra#0.10-pre";
extern mod http = "http#0.1-pre";
use std::io::net::tcp::TcpStream;
use extra::json;
use extra::url::Url;
use http::client::RequestWriter;
use http::method::{Get, Post};
use http::headers::content_type::MediaType;
/// A convenience type which simply keeps track of a developer token and version
/// number.
///
/// There is no necessity to use this type; you can call ``call()`` directly
/// should you so desire.
#[deriving(Eq, Clone)]
pub struct Diffbot {
/// The developer's token
token: ~str,
/// The API version number
version: uint,
}
// Basic methods
impl Diffbot {
/// Construct a new ``Diffbot`` instance from the passed parameters.
pub fn new(token: ~str, version: uint) -> Diffbot {
Diffbot {
token: token,
version: version,
}
}
/// Make a call to any Diffbot API with the stored token and API version.
///
/// See the ``call()`` function for an explanation of the parameters.
pub fn call(&self, url: &Url, api: &str, fields: &[&str])
-> Result<json::Object, Error> {
call(url, self.token, api, fields, self.version)
}
/// Prepare a request to any Diffbot API with the stored token and API version.
///
/// See the ``call()`` function for an explanation of the parameters.
pub fn prepare_request(&self, url: &Url, api: &str, fields: &[&str])
-> Request {
prepare_request(url, self.token, api, fields, self.version)
}
}
/// An in-progress Diffbot API call.
pub struct Request {
priv request: RequestWriter<TcpStream>,
}
impl Request {
/// Set the value for Diffbot to send as the ``User-Agent`` header when
/// making your request.
pub fn user_agent(&mut self, user_agent: ~str) {
self.request.headers.extensions.insert(~"X-Forwarded-User-Agent",
user_agent);
}
/// Set the value for Diffbot to send as the ``Referer`` header when
/// making your request.
pub fn referer(&mut self, referer: ~str) {
self.request.headers.extensions.insert(~"X-Forwarded-Referer",
referer);
}
/// Set the value for Diffbot to send as the ``Cookie`` header when
/// making your request.
pub fn cookie(&mut self, cookie: ~str) {
self.request.headers.extensions.insert(~"X-Forwarded-Cookie",
cookie);
}
/// Set Diffbot's timeout, in milliseconds. The default is five seconds.
pub fn timeout(&mut self, milliseconds: u64) {
self.request.url.query.push((~"timeout", milliseconds.to_str()));
}
/// Execute the request and get the results.
pub fn call(self) -> Result<json::Object, Error> {
let mut response = match self.request.read_response() {
Ok(response) => response,
Err(_request) => return Err(IoError), // Request failed
};
let json = match json::from_reader(&mut response as &mut Reader) {
Ok(json) => json,
Err(error) => return Err(JsonError(error)), // It... wasn't JSON!?
};
// Now let's see if this is an API error or not.
// API errors are of the form {"error":"Invalid API.","errorCode":500} | let num = num as uint;
let msg = match o.pop(&~"error")
.expect("JSON had errorCode but not error") {
json::String(s) => s,
uh_oh => fail!("error was {} instead of a string", uh_oh.to_str()),
};
Err(ApiError(msg, num))
},
Some(uh_oh) => fail!("errorCode was {} instead of a number", uh_oh.to_str()),
None => Ok(o),
}
},
// All API responses must be objects.
// If it's not, there's something screwy going on.
_ => fail!("API return value wasn't a JSON object"),
}
}
/// Execute the request as a POST request, sending it through with the given
/// text/html entity body.
///
/// This has the effect that Diffbot will skip requesting the URL and will
/// instead take the passed body as the HTML it is to check. This is mainly
/// useful for non-public websites.
pub fn post_body(mut self, body: &[u8]) -> Result<json::Object, Error> {
self.request.method = Post;
self.request.headers.content_type = Some(MediaType(~"text", ~"html", ~[]));
self.request.headers.content_length = Some(body.len());
// Calling write_headers is an extra and unnecessary safety guard which
// will cause the task to fail if the request has already started to be
// sent (which would render the three statements above ineffectual)
self.request.write_headers();
self.request.write(body);
self.call()
}
}
/// Error code: "unauthorized token"
pub static UNAUTHORIZED_TOKEN: uint = 401;
/// Error code: "requested page not found"
pub static REQUESTED_PAGE_NOT_FOUND: uint = 404;
/// Error code: "your token has exceeded the allowed number of calls, or has
/// otherwise been throttled for API abuse."
pub static TOKEN_EXCEEDED_OR_THROTTLED: uint = 429;
/// Error code: "error processing the page. Specific information will be
/// returned in the JSON response."
pub static ERROR_PROCESSING: uint = 500;
/// Something went wrong with the Diffbot API call.
#[deriving(Eq)]
pub enum Error {
/// An error code returned by the Diffbot API, with message and code.
/// Refer to http://www.diffbot.com/dev/docs/error/ for an explanation of
/// the error codes.
///
/// When comparing the error code, you should use these constants:
///
/// - ``UNAUTHORIZED_TOKEN``: "unauthorized token"
/// - ``REQUESTED_PAGE_NOT_FOUND``: "requested page not found"
/// - ``TOKEN_EXCEEDED_OR_THROTTLED``: "your token has exceeded the allowed
/// number of calls, or has otherwise been throttled for API abuse."
/// - ``ERROR_PROCESSING``: "error processing the page. Specific information
/// will be returned in the JSON response."
ApiError(~str, uint),
/// The JSON was not valid. This is one of those ones that *should* never
/// happen; you know...
///
/// Actually, I can perceive that it might happen if a returned document
/// included invalid UTF-8, but this case has not been tested.
JsonError(json::Error),
/// An I/O error occurred and the condition was trapped somewhere (by you).
IoError,
}
impl ToStr for Error {
fn to_str(&self) -> ~str {
match *self {
ApiError(ref msg, code) => format!("API error {}: {}", code, *msg),
JsonError(ref error) => format!("JSON error: {}", error.to_str()),
IoError => format!("I/O error (already handled)"),
}
}
}
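In post-1.0 Rust the ``ToStr`` behaviour above would be expressed through ``std::fmt::Display``; a minimal sketch of the equivalent (the ``JsonError`` payload is simplified to a plain message string, an assumption for illustration):

```rust
use std::fmt;

#[derive(Debug, PartialEq)]
enum Error {
    ApiError(String, u32),
    JsonError(String), // simplified: carries the JSON parser's message
    IoError,
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Error::ApiError(msg, code) => write!(f, "API error {}: {}", code, msg),
            Error::JsonError(err) => write!(f, "JSON error: {}", err),
            Error::IoError => write!(f, "I/O error (already handled)"),
        }
    }
}
```

``Display`` also gives callers ``.to_string()`` for free via the blanket ``ToString`` impl.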
/// Make a simple Diffbot API call.
///
/// For more complex requests, use ``Diffbot`` or ``prepare_request()``.
///
/// Arguments
/// =========
///
/// - ``url`` is the URL that you wish Diffbot to operate upon. If this is a
/// publicly-inaccessible URL, you should use ``post_body()`` on a prepared
/// request instead of ``call()``.
///
/// - ``token`` is the developer's token.
///
/// - ``api`` is the name of the API endpoint, e.g. "article", "product".
///
/// - ``fields`` is the set of fields you want the API call to return; it
/// follows the form specified by the Diffbot API and so should have values
/// like "*", "meta", "querystring", "images(*)".
///
/// - ``version`` is the Diffbot API version number.
pub fn call(url: &Url, token: &str, api: &str, fields: &[&str], version: uint)
-> Result<json::Object, Error> {
prepare_request(url, token, api, fields, version).call()
}
/// Prepare, but do not send, a request.
///
/// This allows you to use some of the more advanced features of the API like
/// setting certain headers for Diffbot to use, or uploading a private document
/// for it.
pub fn prepare_request(url: &Url, token: &str, api: &str, fields: &[&str],
version: uint)
-> Request {
// First of all we must calculate the GET parameters.
let mut query = ~[(~"token", token.to_owned()),
(~"url", url.to_str())];
if fields.len() > 0 {
query.push((~"fields", fields.connect(",")));
}
// Now that we've got that, we can figure out the complete URL.
let url = Url::new(~"http", // scheme
None, // user
~"api.diffbot.com", // host
None, // port
format!("/v{}/{}", version, api), // path
query, // query
None); // fragment
// And with that, we can now make the request. Whee!
Request {
request: RequestWriter::new(Get, url)
}
} | match json {
json::Object(~mut o) => {
match o.pop(&~"errorCode") {
Some(json::Number(num)) => { | random_line_split |
lib.rs | /*!
* This library provides an API client for Diffbot.
*
* Making API requests
* -------------------
*
* There are a handful of different ways to make API calls:
*
* 1. The most basic way to make a request is with the ``call()`` function.
* Everything must be specified for each request.
*
* 2. Use the ``Diffbot`` struct to keep track of your token and API version
* and then use its ``.call()`` method to make API calls. This has the
* advantage that you can specify those things just once and they'll be
* retained.
*
* 3. Instead of making a request in one step, you can make it two steps with
* the ``prepare_request()`` function. This allows you to specify to Diffbot
* certain details of how *it* should make the request. That gives you a
* ``Request`` object.
*
* 4. In the same manner, if you have a ``Diffbot`` struct you can call the
* ``.prepare_request()`` method on it.
*
* Prepared requests
* -----------------
*
* If you use the ``prepare_request()`` function or method, you can tweak the
* request that will be sent to Diffbot. You can alter the User-Agent, Referer
* or Cookie headers that it will send and then call ``.call()`` to make the
* request, or you can call ``.post_body()`` to send the HTML yourself, if it
* is not publicly available to the wider Internet.
*
* Getting data out of the result
* ------------------------------
*
* At present, the successful return value of a request is simply a JSON object,
* a tree map. This *will* make it moderately difficult to work with, but if
* you're determined, it's possible. You'll end up with results like these:
*
* // First of all, you must, of course, have a response to work on.
* let mut response: TreeMap<~str, Json>
 * = diffbot::call(..., "article", ...).unwrap();
*
* // Get the title of the article
* let title = match response.pop(&~"title").unwrap() {
* json::String(s) => s,
* _ => unreachable!(),
* };
*
* // Get the URL of each image
* let image_urls: ~[Url] = match response.pop(&~"images").unwrap() {
* json::List(images) => images.move_iter().map(|image| match image {
* json::Object(~mut o) => {
* match o.pop(&~"url").unwrap() {
 * json::String(ref s) => from_str(s).unwrap(),
* _ => unreachable!(),
* }
* },
* _ => unreachable!(),
* }),
* _ => unreachable!(),
* }.collect();
*
* (Yep, I'll freely admit that these are clumsier than they might be in another
* language, which might allow something like this:
*
 * let response = ...;
*
* let title = response.title;
* let image_urls = [from_str(image.url) for image in response.images];
*
* In time we may get strongly typed interfaces which would be much nicer, but
* for now, you'd need to do that yourself. It can be done with the tools in
* ``extra::serialize``, by the way.)
*/
#[crate_id = "diffbot#1.0"];
#[crate_type = "dylib"];
#[crate_type = "rlib"];
#[doc(html_logo_url = "diffy-d.png",
html_favicon_url = "http://www.diffbot.com/favicon.ico")];
extern mod extra = "extra#0.10-pre";
extern mod http = "http#0.1-pre";
use std::io::net::tcp::TcpStream;
use extra::json;
use extra::url::Url;
use http::client::RequestWriter;
use http::method::{Get, Post};
use http::headers::content_type::MediaType;
/// A convenience type which simply keeps track of a developer token and version
/// number.
///
/// There is no necessity to use this type; you can call ``call()`` directly
/// should you so desire.
#[deriving(Eq, Clone)]
pub struct Diffbot {
/// The developer's token
token: ~str,
/// The API version number
version: uint,
}
// Basic methods
impl Diffbot {
/// Construct a new ``Diffbot`` instance from the passed parameters.
pub fn new(token: ~str, version: uint) -> Diffbot {
Diffbot {
token: token,
version: version,
}
}
/// Make a call to any Diffbot API with the stored token and API version.
///
/// See the ``call()`` function for an explanation of the parameters.
pub fn call(&self, url: &Url, api: &str, fields: &[&str])
-> Result<json::Object, Error> {
call(url, self.token, api, fields, self.version)
}
/// Prepare a request to any Diffbot API with the stored token and API version.
///
/// See the ``call()`` function for an explanation of the parameters.
pub fn prepare_request(&self, url: &Url, api: &str, fields: &[&str])
-> Request |
}
/// An in-progress Diffbot API call.
pub struct Request {
priv request: RequestWriter<TcpStream>,
}
impl Request {
/// Set the value for Diffbot to send as the ``User-Agent`` header when
/// making your request.
pub fn user_agent(&mut self, user_agent: ~str) {
self.request.headers.extensions.insert(~"X-Forwarded-User-Agent",
user_agent);
}
/// Set the value for Diffbot to send as the ``Referer`` header when
/// making your request.
pub fn referer(&mut self, referer: ~str) {
self.request.headers.extensions.insert(~"X-Forwarded-Referer",
referer);
}
/// Set the value for Diffbot to send as the ``Cookie`` header when
/// making your request.
pub fn cookie(&mut self, cookie: ~str) {
self.request.headers.extensions.insert(~"X-Forwarded-Cookie",
cookie);
}
/// Set Diffbot's timeout, in milliseconds. The default is five seconds.
pub fn timeout(&mut self, milliseconds: u64) {
self.request.url.query.push((~"timeout", milliseconds.to_str()));
}
/// Execute the request and get the results.
pub fn call(self) -> Result<json::Object, Error> {
let mut response = match self.request.read_response() {
Ok(response) => response,
Err(_request) => return Err(IoError), // Request failed
};
let json = match json::from_reader(&mut response as &mut Reader) {
Ok(json) => json,
Err(error) => return Err(JsonError(error)), // It... wasn't JSON!?
};
// Now let's see if this is an API error or not.
// API errors are of the form {"error":"Invalid API.","errorCode":500}
match json {
json::Object(~mut o) => {
match o.pop(&~"errorCode") {
Some(json::Number(num)) => {
let num = num as uint;
let msg = match o.pop(&~"error")
.expect("JSON had errorCode but not error") {
json::String(s) => s,
uh_oh => fail!("error was {} instead of a string", uh_oh.to_str()),
};
Err(ApiError(msg, num))
},
Some(uh_oh) => fail!("errorCode was {} instead of a number", uh_oh.to_str()),
None => Ok(o),
}
},
// All API responses must be objects.
// If it's not, there's something screwy going on.
_ => fail!("API return value wasn't a JSON object"),
}
}
/// Execute the request as a POST request, sending it through with the given
/// text/html entity body.
///
/// This has the effect that Diffbot will skip requesting the URL and will
/// instead take the passed body as the HTML it is to check. This is mainly
/// useful for non-public websites.
pub fn post_body(mut self, body: &[u8]) -> Result<json::Object, Error> {
self.request.method = Post;
self.request.headers.content_type = Some(MediaType(~"text", ~"html", ~[]));
self.request.headers.content_length = Some(body.len());
// Calling write_headers is an extra and unnecessary safety guard which
// will cause the task to fail if the request has already started to be
// sent (which would render the three statements above ineffectual)
self.request.write_headers();
self.request.write(body);
self.call()
}
}
/// Error code: "unauthorized token"
pub static UNAUTHORIZED_TOKEN: uint = 401;
/// Error code: "requested page not found"
pub static REQUESTED_PAGE_NOT_FOUND: uint = 404;
/// Error code: "your token has exceeded the allowed number of calls, or has
/// otherwise been throttled for API abuse."
pub static TOKEN_EXCEEDED_OR_THROTTLED: uint = 429;
/// Error code: "error processing the page. Specific information will be
/// returned in the JSON response."
pub static ERROR_PROCESSING: uint = 500;
/// Something went wrong with the Diffbot API call.
#[deriving(Eq)]
pub enum Error {
/// An error code returned by the Diffbot API, with message and code.
/// Refer to http://www.diffbot.com/dev/docs/error/ for an explanation of
/// the error codes.
///
/// When comparing the error code, you should use these constants:
///
/// - ``UNAUTHORIZED_TOKEN``: "unauthorized token"
/// - ``REQUESTED_PAGE_NOT_FOUND``: "requested page not found"
/// - ``TOKEN_EXCEEDED_OR_THROTTLED``: "your token has exceeded the allowed
/// number of calls, or has otherwise been throttled for API abuse."
/// - ``ERROR_PROCESSING``: "error processing the page. Specific information
/// will be returned in the JSON response."
ApiError(~str, uint),
/// The JSON was not valid. This is one of those ones that *should* never
/// happen; you know...
///
/// Actually, I can perceive that it might happen if a returned document
/// included invalid UTF-8, but this case has not been tested.
JsonError(json::Error),
/// An I/O error occurred and the condition was trapped somewhere (by you).
IoError,
}
impl ToStr for Error {
fn to_str(&self) -> ~str {
match *self {
ApiError(ref msg, code) => format!("API error {}: {}", code, *msg),
JsonError(ref error) => format!("JSON error: {}", error.to_str()),
IoError => format!("I/O error (already handled)"),
}
}
}
/// Make a simple Diffbot API call.
///
/// For more complex requests, use ``Diffbot`` or ``prepare_request()``.
///
/// Arguments
/// =========
///
/// - ``url`` is the URL that you wish Diffbot to operate upon. If this is a
/// publicly-inaccessible URL, you should use ``post_body()`` on a prepared
/// request instead of ``call()``.
///
/// - ``token`` is the developer's token.
///
/// - ``api`` is the name of the API endpoint, e.g. "article", "product".
///
/// - ``fields`` is the set of fields you want the API call to return; it
/// follows the form specified by the Diffbot API and so should have values
/// like "*", "meta", "querystring", "images(*)".
///
/// - ``version`` is the Diffbot API version number.
pub fn call(url: &Url, token: &str, api: &str, fields: &[&str], version: uint)
-> Result<json::Object, Error> {
prepare_request(url, token, api, fields, version).call()
}
/// Prepare, but do not send, a request.
///
/// This allows you to use some of the more advanced features of the API like
/// setting certain headers for Diffbot to use, or uploading a private document
/// for it.
pub fn prepare_request(url: &Url, token: &str, api: &str, fields: &[&str],
version: uint)
-> Request {
// First of all we must calculate the GET parameters.
let mut query = ~[(~"token", token.to_owned()),
(~"url", url.to_str())];
if fields.len() > 0 {
query.push((~"fields", fields.connect(",")));
}
// Now that we've got that, we can figure out the complete URL.
let url = Url::new(~"http", // scheme
None, // user
~"api.diffbot.com", // host
None, // port
format!("/v{}/{}", version, api), // path
query, // query
None); // fragment
// And with that, we can now make the request. Whee!
Request {
request: RequestWriter::new(Get, url)
}
}
| {
prepare_request(url, self.token, api, fields, self.version)
} | identifier_body |
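The query assembly in ``prepare_request()`` boils down to two pure steps — joining the field list and formatting the versioned path — which can be sketched and tested in isolation (hypothetical helper names, not part of the library):

```rust
// Join requested fields into the comma-separated form Diffbot expects,
// omitting the parameter entirely when no fields were requested.
fn fields_param(fields: &[&str]) -> Option<String> {
    if fields.is_empty() {
        None
    } else {
        Some(fields.join(","))
    }
}

// Build the versioned API path, e.g. /v2/article.
fn api_path(version: u32, api: &str) -> String {
    format!("/v{}/{}", version, api)
}
```

These mirror the ``fields.connect(",")`` and ``format!("/v{}/{}", ...)`` calls in the function body above.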
main.rs | use std::time;
use vulkano::instance::Instance;
use vulkano::instance::PhysicalDevice;
use vulkano::pipeline::GraphicsPipeline;
use vulkano::pipeline::viewport::Viewport;
use vulkano::device::Device;
use vulkano::device::Features;
use vulkano::device::RawDeviceExtensions;
use vulkano::framebuffer::{
Framebuffer,
FramebufferAbstract,
Subpass,
RenderPassAbstract
};
use vulkano::image::SwapchainImage;
use vulkano::sampler::{
Sampler,
SamplerAddressMode,
Filter,
MipmapMode
};
use vulkano::buffer::BufferUsage;
use vulkano::buffer::CpuAccessibleBuffer;
use vulkano::command_buffer::{
AutoCommandBufferBuilder,
DynamicState
};
use vulkano::descriptor::descriptor_set::PersistentDescriptorSet;
use std::sync::Arc;
// use vulkano::pipeline::ComputePipeline;
use vulkano::sync;
use vulkano::sync::{
GpuFuture,
FlushError
};
use vulkano::format::Format;
use vulkano::image::{
Dimensions,
ImmutableImage,
};
// use rand::{
// thread_rng,
// Rng
// };
use image::{
GenericImageView,
};
use vulkano_win::VkSurfaceBuild;
use winit::{
EventsLoop,
WindowBuilder,
Window,
Event,
WindowEvent
};
use vulkano::swapchain;
use vulkano::swapchain::{
AcquireError,
Swapchain,
SurfaceTransform,
PresentMode,
SwapchainCreationError
};
mod cs {
vulkano_shaders::shader! {
ty: "compute",
path: "./src/shader/particle_update.cp.glsl"
}
}
#[derive(Copy, Clone, Debug)]
struct Particle {
pos: [f32; 2],
speed: [f32; 2],
tail: [f32; 2],
prev_pos: [f32; 2],
prev_tail: [f32; 2],
}
// struct ImageDataIterator<P> {
// originalIterator: image::buffer::Pie,
// length: usize
// }
//
// impl<T> ImageDataIterator<T> {
// fn from_dynamic_image(img: &DynamicImage) -> ImageDataIterator<T> {
// let dimensions = img.dimensions();
//
// ImageDataIterator {
// originalIterator: img.to_rgba().pixels(),
// length: ( dimensions.0 * dimensions.1 ) as usize
// }
// }
// }
//
// impl<'a, T> Iterator for ImageDataIterator<'a, T> {
// type Item = [u8; 4];
// fn next(&mut self) -> Option<[u8; 4]> {
// return match self.originalIterator.next() {
// Some(pixel) => {
// let rgba = pixel.2;
// let data: [u8; 4] = [ rgba[0], rgba[1], rgba[2], rgba[3] ];
// return Some(data);
// },
// None => None
// }
// }
// }
//
// impl<'a, T> ExactSizeIterator for ImageDataIterator<'a, T> {
// fn len(&self) -> usize {
// return self.length;
// }
// }
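The commented-out ``ImageDataIterator`` above is trying to pair an iterator with an externally known length so it satisfies ``ExactSizeIterator``. A working version of that idea (a generic sketch, not tied to the ``image`` crate's pixel types) looks like this:

```rust
// Wrap any iterator together with a known element count, so it can be
// handed to APIs that require ExactSizeIterator.
struct Counted<I> {
    inner: I,
    remaining: usize,
}

impl<I> Counted<I> {
    fn new(inner: I, len: usize) -> Counted<I> {
        Counted { inner, remaining: len }
    }
}

impl<I: Iterator> Iterator for Counted<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        let item = self.inner.next();
        if item.is_some() {
            self.remaining = self.remaining.saturating_sub(1);
        }
        item
    }

    // ExactSizeIterator's default len() requires lower == upper here.
    fn size_hint(&self) -> (usize, Option<usize>) {
        (self.remaining, Some(self.remaining))
    }
}

impl<I: Iterator> ExactSizeIterator for Counted<I> {}
```

The caller is responsible for the count being accurate, which is exactly the `dimensions.0 * dimensions.1` bookkeeping the commented code was attempting.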
fn | () {
let img = match image::open("./media/autumn.png") {
Ok(image) => image,
Err(err) => panic!("{:?}", err)
};
{ // stdout image info
println!("color {:?}", img.color());
println!("dimensions {:?}", img.dimensions());
// println!("first pixel {:?}", img.pixels().next().unwrap());
// println!("first pixel {:?}", img.pixels().next().map(|item| item.2).unwrap());
}
let instance = {
let inst_exts = vulkano_win::required_extensions();
Instance::new(None, &inst_exts, None).expect("failed to create instance")
};
//TODO: list devices, choose based on user input
for p in PhysicalDevice::enumerate(&instance) {
print!("{}", p.name());
println!(", driver version: {}", p.driver_version());
}
let physical = PhysicalDevice::enumerate(&instance)
.next()
.expect("no device available");
let queue_family = physical
.queue_families()
.find(|&q| q.supports_graphics())
.expect("couldn't find a graphical queue family");
let (device, mut queues) = {
let unraw_dev_exts = vulkano::device::DeviceExtensions {
khr_swapchain: true,
.. vulkano::device::DeviceExtensions::none()
};
let mut all_dev_exts = RawDeviceExtensions::from(&unraw_dev_exts);
all_dev_exts.insert(std::ffi::CString::new("VK_KHR_storage_buffer_storage_class").unwrap());
Device::new(
physical,
&Features::none(),
all_dev_exts,
[(queue_family, 0.5)].iter().cloned(),
)
.expect("failed to create device")
};
let queue = queues.next().unwrap();
// let particles = init_particles_buffer();
// let particles_buffer =
// CpuAccessibleBuffer::from_data(device.clone(), BufferUsage::all(), particles)
// .expect("failed to create buffer");
// let shader = cs::Shader::load(device.clone()).expect("failed to create shader module");
// let compute_pipeline = Arc::new(
// ComputePipeline::new(device.clone(), &shader.main_entry_point(), &())
// .expect("failed to create compute pipeline"),
// );
// let set = Arc::new(
// PersistentDescriptorSet::start(compute_pipeline.clone(), 0)
// .add_buffer(particles_buffer.clone())
// .unwrap()
// .build()
// .unwrap(),
// );
// let command_buffer = AutoCommandBufferBuilder::new(device.clone(), queue.family())
// .unwrap()
// .dispatch([PARTICLE_COUNT as u32 / 32, 1, 1], compute_pipeline.clone(), set.clone(), ())
// .unwrap()
// .build()
// .unwrap();
let mut events_loop = EventsLoop::new();
let surface = WindowBuilder::new().build_vk_surface(&events_loop, instance.clone()).unwrap();
let window = surface.window();
let (mut swapchain, images) = {
let caps = surface.capabilities(physical)
.expect("failed to get surface capabilities");
let usage = caps.supported_usage_flags;
let alpha = caps.supported_composite_alpha.iter().next().unwrap();
let format = caps.supported_formats[0].0;
caps.supported_formats.iter().for_each(|sth| println!("{:?}", sth));
let initial_dimensions = if let Some(dimensions) = window.get_inner_size() {
let dimensions: (u32, u32) = dimensions.to_physical(window.get_hidpi_factor()).into();
[dimensions.0, dimensions.1]
} else {
return;
};
Swapchain::new(device.clone(), surface.clone(),
caps.min_image_count, format, initial_dimensions, 1, usage, &queue,
SurfaceTransform::Identity, alpha, PresentMode::Fifo, true, None)
.expect("failed to create swapchain")
};
#[derive(Default, Debug, Clone)]
struct Vertex {
position: [f32; 2]
}
let vertex_buffer = {
vulkano::impl_vertex!(Vertex, position);
CpuAccessibleBuffer::from_iter(device.clone(), BufferUsage::all(), [
Vertex { position: [-0.5, -0.5] },
Vertex { position: [ 0.5, -0.5] },
Vertex { position: [-0.5, 0.5] },
Vertex { position: [ 0.5, 0.5] },
Vertex { position: [-0.5, 0.5] },
Vertex { position: [ 0.5, -0.5] },
].iter().cloned()).unwrap()
};
// texture
let img_dim = img.dimensions();
let (autumn_texture, autumn_texture_future) = match ImmutableImage::from_iter(
img.as_rgba8().unwrap().pixels().map(|rgba| {
let bytes : [u8; 4] = [rgba[0], rgba[1], rgba[2], rgba[3]];
bytes
}),
Dimensions::Dim2d { width: img_dim.0, height: img_dim.1 },
Format::R8G8B8A8Unorm,
queue.clone()
) {
Ok(i) => i,
Err(err) => panic!("{:?}", err)
};
let sampler = Sampler::new(device.clone(), Filter::Linear, Filter::Linear,
MipmapMode::Nearest, SamplerAddressMode::Repeat, SamplerAddressMode::Repeat,
SamplerAddressMode::Repeat, 0.0, 1.0, 0.0, 0.0).unwrap();
mod square_vs {
vulkano_shaders::shader!{
ty: "vertex",
path: "./src/shader/square.vs.glsl"
}
}
mod square_fs {
vulkano_shaders::shader!{
ty: "fragment",
path: "./src/shader/square.fs.glsl"
}
}
let square_vs = square_vs::Shader::load(device.clone()).unwrap();
let square_fs = square_fs::Shader::load(device.clone()).unwrap();
let render_pass = Arc::new(vulkano::single_pass_renderpass!(
device.clone(),
attachments: {
color: {
load: Clear,
store: Store,
format: swapchain.format(),
samples: 1,
}
},
pass: {
color: [color],
depth_stencil: {}
}
).unwrap());
let pipeline = Arc::new(GraphicsPipeline::start()
.vertex_input_single_buffer::<Vertex>()
.vertex_shader(square_vs.main_entry_point(), ())
.triangle_list()
.viewports_dynamic_scissors_irrelevant(1)
.fragment_shader(square_fs.main_entry_point(), ())
.blend_alpha_blending()
.render_pass(Subpass::from(render_pass.clone(), 0).unwrap())
.build(device.clone())
.unwrap());
let set = Arc::new(PersistentDescriptorSet::start(pipeline.clone(), 0)
.add_sampled_image(autumn_texture.clone(), sampler.clone()).unwrap()
.build().unwrap()
);
let mut dynamic_state = DynamicState {
line_width: None,
viewports: None,
scissors: None,
compare_mask: None,
write_mask: None,
reference: None
};
let mut framebuffers = window_size_dependent_setup(&images, render_pass.clone(), &mut dynamic_state);
let mut recreate_swapchain = false;
let mut previous_frame_end = Box::new(sync::now(device.clone()).join(autumn_texture_future)) as Box<dyn GpuFuture>;
let t0 = time::SystemTime::now();
let mut now = t0;
let mut then;
loop {
previous_frame_end.cleanup_finished();
if recreate_swapchain {
let dimensions = if let Some(dimensions) = window.get_inner_size() {
let dimensions: (u32, u32) = dimensions.to_physical(window.get_hidpi_factor()).into();
[dimensions.0, dimensions.1]
} else {
return;
};
use std::time;
use vulkano::instance::Instance;
use vulkano::instance::PhysicalDevice;
use vulkano::pipeline::GraphicsPipeline;
use vulkano::pipeline::viewport::Viewport;
use vulkano::device::Device;
use vulkano::device::Features;
use vulkano::device::RawDeviceExtensions;
use vulkano::framebuffer::{
Framebuffer,
FramebufferAbstract,
Subpass,
RenderPassAbstract
};
use vulkano::image::SwapchainImage;
use vulkano::sampler::{
Sampler,
SamplerAddressMode,
Filter,
MipmapMode
};
use vulkano::buffer::BufferUsage;
use vulkano::buffer::CpuAccessibleBuffer;
use vulkano::command_buffer::{
AutoCommandBufferBuilder,
DynamicState
};
use vulkano::descriptor::descriptor_set::PersistentDescriptorSet;
use std::sync::Arc;
// use vulkano::pipeline::ComputePipeline;
use vulkano::sync;
use vulkano::sync::{
GpuFuture,
FlushError
};
use vulkano::format::Format;
use vulkano::image::{
Dimensions,
ImmutableImage,
};
// use rand::{
// thread_rng,
// Rng
// };
use image::{
GenericImageView,
};
use vulkano_win::VkSurfaceBuild;
use winit::{
EventsLoop,
WindowBuilder,
Window,
Event,
WindowEvent
};
use vulkano::swapchain;
use vulkano::swapchain::{
AcquireError,
Swapchain,
SurfaceTransform,
PresentMode,
SwapchainCreationError
};
mod cs {
vulkano_shaders::shader! {
ty: "compute",
path: "./src/shader/particle_update.cp.glsl"
}
}
#[derive(Copy, Clone, Debug)]
struct Particle {
pos: [f32; 2],
speed: [f32; 2],
tail: [f32; 2],
prev_pos: [f32; 2],
prev_tail: [f32; 2],
}
// struct ImageDataIterator<P> {
// originalIterator: image::buffer::Pie,
// length: usize
// }
//
// impl<T> ImageDataIterator<T> {
// fn from_dynamic_image(img: &DynamicImage) -> ImageDataIterator<T> {
// let dimensions = img.dimensions();
//
// ImageDataIterator {
// originalIterator: img.to_rgba().pixels(),
// length: ( dimensions.0 * dimensions.1 ) as usize
// }
// }
// }
//
// impl<'a, T> Iterator for ImageDataIterator<'a, T> {
// type Item = [u8; 4];
// fn next(&mut self) -> Option<[u8; 4]> {
// return match self.originalIterator.next() {
// Some(pixel) => {
// let rgba = pixel.2;
// let data: [u8; 4] = [ rgba[0], rgba[1], rgba[2], rgba[3] ];
// return Some(data);
// },
// None => None
// }
// }
// }
//
// impl<'a, T> ExactSizeIterator for ImageDataIterator<'a, T> {
// fn len(&self) -> usize {
// return self.length;
// }
// }
fn main() {
let img = match image::open("./media/autumn.png") {
Ok(image) => image,
Err(err) => panic!("{:?}", err)
};
{ // stdout image info
println!("color {:?}", img.color());
println!("dimensions {:?}", img.dimensions());
// println!("first pixel {:?}", img.pixels().next().unwrap());
// println!("first pixel {:?}", img.pixels().next().map(|item| item.2).unwrap());
}
let instance = {
let inst_exts = vulkano_win::required_extensions();
Instance::new(None, &inst_exts, None).expect("failed to create instance")
};
//TODO: list devices, choose based on user input
for p in PhysicalDevice::enumerate(&instance) {
print!("{}", p.name());
println!(", driver version: {}", p.driver_version());
}
let physical = PhysicalDevice::enumerate(&instance)
.next()
.expect("no device available");
let queue_family = physical
.queue_families()
.find(|&q| q.supports_graphics())
.expect("couldn't find a graphical queue family");
let (device, mut queues) = {
let unraw_dev_exts = vulkano::device::DeviceExtensions {
khr_swapchain: true,
.. vulkano::device::DeviceExtensions::none()
};
let mut all_dev_exts = RawDeviceExtensions::from(&unraw_dev_exts);
all_dev_exts.insert(std::ffi::CString::new("VK_KHR_storage_buffer_storage_class").unwrap());
Device::new(
physical,
&Features::none(),
all_dev_exts,
[(queue_family, 0.5)].iter().cloned(),
)
.expect("failed to create device")
};
let queue = queues.next().unwrap();
// let particles = init_particles_buffer();
// let particles_buffer =
// CpuAccessibleBuffer::from_data(device.clone(), BufferUsage::all(), particles)
// .expect("failed to create buffer");
// let shader = cs::Shader::load(device.clone()).expect("failed to create shader module");
// let compute_pipeline = Arc::new(
// ComputePipeline::new(device.clone(), &shader.main_entry_point(), &())
// .expect("failed to create compute pipeline"),
// );
// let set = Arc::new(
// PersistentDescriptorSet::start(compute_pipeline.clone(), 0)
// .add_buffer(particles_buffer.clone())
// .unwrap()
// .build()
// .unwrap(),
// );
// let command_buffer = AutoCommandBufferBuilder::new(device.clone(), queue.family())
// .unwrap()
// .dispatch([PARTICLE_COUNT as u32 / 32, 1, 1], compute_pipeline.clone(), set.clone(), ())
// .unwrap()
// .build()
// .unwrap();
let mut events_loop = EventsLoop::new();
let surface = WindowBuilder::new().build_vk_surface(&events_loop, instance.clone()).unwrap();
let window = surface.window();
let (mut swapchain, images) = {
let caps = surface.capabilities(physical)
.expect("failed to get surface capabilities");
let usage = caps.supported_usage_flags;
let alpha = caps.supported_composite_alpha.iter().next().unwrap();
let format = caps.supported_formats[0].0;
caps.supported_formats.iter().for_each(|sth| println!("{:?}", sth));
let initial_dimensions = if let Some(dimensions) = window.get_inner_size() {
let dimensions: (u32, u32) = dimensions.to_physical(window.get_hidpi_factor()).into();
[dimensions.0, dimensions.1]
} else {
return;
};
Swapchain::new(device.clone(), surface.clone(),
caps.min_image_count, format, initial_dimensions, 1, usage, &queue,
SurfaceTransform::Identity, alpha, PresentMode::Fifo, true, None)
.expect("failed to create swapchain")
};
#[derive(Default, Debug, Clone)]
struct Vertex {
position: [f32; 2]
}
let vertex_buffer = {
vulkano::impl_vertex!(Vertex, position);
CpuAccessibleBuffer::from_iter(device.clone(), BufferUsage::all(), [
Vertex { position: [-0.5, -0.5] },
Vertex { position: [ 0.5, -0.5] },
Vertex { position: [-0.5, 0.5] },
Vertex { position: [ 0.5, 0.5] },
Vertex { position: [-0.5, 0.5] },
Vertex { position: [ 0.5, -0.5] },
].iter().cloned()).unwrap()
};
// texture
let img_dim = img.dimensions();
let (autumn_texture, autumn_texture_future) = match ImmutableImage::from_iter(
img.as_rgba8().unwrap().pixels().map(|rgba| {
let bytes: [u8; 4] = [rgba[0], rgba[1], rgba[2], rgba[3]];
bytes
}),
Dimensions::Dim2d { width: img_dim.0, height: img_dim.1 },
Format::R8G8B8A8Unorm,
queue.clone()
) {
Ok(i) => i,
Err(err) => panic!("{:?}", err)
};
let sampler = Sampler::new(device.clone(), Filter::Linear, Filter::Linear,
MipmapMode::Nearest, SamplerAddressMode::Repeat, SamplerAddressMode::Repeat,
SamplerAddressMode::Repeat, 0.0, 1.0, 0.0, 0.0).unwrap();
mod square_vs {
vulkano_shaders::shader!{
ty: "vertex",
path: "./src/shader/square.vs.glsl"
}
}
mod square_fs {
vulkano_shaders::shader!{
ty: "fragment",
path: "./src/shader/square.fs.glsl"
}
}
let square_vs = square_vs::Shader::load(device.clone()).unwrap();
let square_fs = square_fs::Shader::load(device.clone()).unwrap();
let render_pass = Arc::new(vulkano::single_pass_renderpass!(
device.clone(),
attachments: {
color: {
load: Clear,
store: Store,
format: swapchain.format(),
samples: 1,
}
},
pass: {
color: [color],
depth_stencil: {}
}
).unwrap());
let pipeline = Arc::new(GraphicsPipeline::start()
.vertex_input_single_buffer::<Vertex>()
.vertex_shader(square_vs.main_entry_point(), ())
.triangle_list()
.viewports_dynamic_scissors_irrelevant(1)
.fragment_shader(square_fs.main_entry_point(), ())
.blend_alpha_blending()
.render_pass(Subpass::from(render_pass.clone(), 0).unwrap())
.build(device.clone())
.unwrap());
let set = Arc::new(PersistentDescriptorSet::start(pipeline.clone(), 0)
.add_sampled_image(autumn_texture.clone(), sampler.clone()).unwrap()
.build().unwrap()
);
let mut dynamic_state = DynamicState {
line_width: None,
viewports: None,
scissors: None,
compare_mask: None,
write_mask: None,
reference: None
};
let mut framebuffers = window_size_dependent_setup(&images, render_pass.clone(), &mut dynamic_state);
let mut recreate_swapchain = false;
let mut previous_frame_end = Box::new(sync::now(device.clone()).join(autumn_texture_future)) as Box<dyn GpuFuture>;
let t0 = time::SystemTime::now();
let mut now = t0;
let mut then;
loop {
previous_frame_end.cleanup_finished();
if recreate_swapchain {
let dimensions = if let Some(dimensions) = window.get_inner_size() {
let dimensions: (u32, u32) = dimensions.to_physical(window.get_hidpi_factor()).into();
[dimensions.0, dimensions.1]
} else {
return;
};
let (new_swapchain, new_images) = match swapchain.recreate_with_dimension(dimensions) {
Ok(r) => r,
Err(SwapchainCreationError::UnsupportedDimensions) => continue, |
Err(err) => panic!("{:?}", err)
};
swapchain = new_swapchain;
framebuffers = window_size_dependent_setup(&new_images, render_pass.clone(), &mut dynamic_state);
recreate_swapchain = false;
}
let (image_num, acquire_future) = match swapchain::acquire_next_image(swapchain.clone(), None){
Ok(r) => r,
Err(AcquireError::OutOfDate) => {
recreate_swapchain = true;
continue;
},
Err(err) => panic!("{:?}", err)
};
// Workaround: acquire_next_image can occasionally return an index past the
// number of swapchain images; recreate the swapchain instead of indexing out of bounds.
if image_num > 2 {
recreate_swapchain = true;
continue;
}
then = now;
now = time::SystemTime::now();
let time = now.duration_since(t0).unwrap().as_millis() as i32;
let dtime = now.duration_since(then).unwrap().as_millis() as i32;
let push_constants = square_fs::ty::PushConstantData {
time,
dtime
};
let clear_values = vec!([0.0, 0.0, 0.0, 1.0].into());
let command_buffer = AutoCommandBufferBuilder::primary_one_time_submit(
device.clone(),
queue.family()
)
.unwrap()
.begin_render_pass(framebuffers[image_num].clone(), false, clear_values)
.unwrap()
.draw(
pipeline.clone(),
&dynamic_state,
vertex_buffer.clone(),
set.clone(),
push_constants)
.unwrap()
.end_render_pass()
.unwrap()
.build()
.unwrap();
let future = previous_frame_end.join(acquire_future)
.then_execute(queue.clone(), command_buffer).unwrap()
.then_swapchain_present(queue.clone(), swapchain.clone(), image_num)
.then_signal_fence_and_flush();
match future {
Ok(future) => {
future.wait(None).unwrap();
previous_frame_end = Box::new(future) as Box<_>;
}
Err(FlushError::OutOfDate) => {
recreate_swapchain = true;
previous_frame_end = Box::new(sync::now(device.clone())) as Box<_>;
}
Err(e) => {
println!("{:?}", e);
previous_frame_end = Box::new(sync::now(device.clone())) as Box<_>;
}
}
let mut done = false;
events_loop.poll_events(|event| {
match event {
Event::WindowEvent { event: WindowEvent::CloseRequested, .. } => done = true,
Event::WindowEvent { event: WindowEvent::Resized(_), .. } => recreate_swapchain = true,
_ => ()
}
});
if done { return; }
}
}
// fn init_particles_buffer() -> [Particle; PARTICLE_COUNT] {
// let mut rng = thread_rng();
// let mut particles = [Particle {
// pos: [0.0, 0.0],
// tail: [0.0, 0.0],
// speed: [0.0, 0.0],
// prev_pos: [0.0, 0.0],
// prev_tail: [0.0, 0.0],
// }; PARTICLE_COUNT];
// for i in 0..particles.len() {
// particles[i].pos = [rng.gen_range(-1.0, 1.0), rng.gen_range(-1.0, 1.0)];
// particles[i].tail = particles[i].pos.clone();
// particles[i].speed = [rng.gen_range(-0.1, 0.1), rng.gen_range(-0.1, 0.1)];
// }
// return particles;
// }
fn window_size_dependent_setup(
images: &[Arc<SwapchainImage<Window>>],
render_pass: Arc<dyn RenderPassAbstract + Send + Sync>,
dynamic_state: &mut DynamicState
) -> Vec<Arc<dyn FramebufferAbstract + Send + Sync>> {
let dimensions = images[0].dimensions();
let viewport = Viewport {
origin: [0.0, 0.0],
dimensions: [dimensions[0] as f32, dimensions[1] as f32],
depth_range: 0.0..1.0,
};
dynamic_state.viewports = Some(vec!(viewport));
images.iter().map(|image| {
Arc::new(
Framebuffer::start(render_pass.clone())
.add(image.clone()).unwrap()
.build().unwrap()
) as Arc<dyn FramebufferAbstract + Send + Sync>
}).collect::<Vec<_>>()
}
main.rs | use std::time;
use vulkano::instance::Instance;
use vulkano::instance::PhysicalDevice;
use vulkano::pipeline::GraphicsPipeline;
use vulkano::pipeline::viewport::Viewport;
use vulkano::device::Device;
use vulkano::device::Features;
use vulkano::device::RawDeviceExtensions;
use vulkano::framebuffer::{
Framebuffer,
FramebufferAbstract,
Subpass,
RenderPassAbstract
};
use vulkano::image::SwapchainImage;
use vulkano::sampler::{
Sampler,
SamplerAddressMode,
Filter,
MipmapMode
};
use vulkano::buffer::BufferUsage;
use vulkano::buffer::CpuAccessibleBuffer;
use vulkano::command_buffer::{
AutoCommandBufferBuilder,
DynamicState
};
use vulkano::descriptor::descriptor_set::PersistentDescriptorSet;
use std::sync::Arc;
// use vulkano::pipeline::ComputePipeline;
use vulkano::sync;
use vulkano::sync::{
GpuFuture,
FlushError
};
use vulkano::format::Format;
use vulkano::image::{
Dimensions,
ImmutableImage,
};
// use rand::{
// thread_rng,
// Rng
// };
use image::{
GenericImageView,
};
use vulkano_win::VkSurfaceBuild;
use winit::{
EventsLoop,
WindowBuilder,
Window,
Event,
WindowEvent
};
use vulkano::swapchain;
use vulkano::swapchain::{
AcquireError,
Swapchain,
SurfaceTransform,
PresentMode,
SwapchainCreationError
};
mod cs {
vulkano_shaders::shader! {
ty: "compute",
path: "./src/shader/particle_update.cp.glsl"
}
}
#[derive(Copy, Clone, Debug)]
struct Particle {
pos: [f32; 2],
speed: [f32; 2],
tail: [f32; 2],
prev_pos: [f32; 2],
prev_tail: [f32; 2],
}
// struct ImageDataIterator<P> {
// originalIterator: image::buffer::Pie,
// length: usize
// }
//
// impl<T> ImageDataIterator<T> {
// fn from_dynamic_image(img: &DynamicImage) -> ImageDataIterator<T> {
// let dimensions = img.dimensions();
//
// ImageDataIterator {
// originalIterator: img.to_rgba().pixels(),
// length: ( dimensions.0 * dimensions.1 ) as usize
// }
// }
// }
//
// impl<'a, T> Iterator for ImageDataIterator<'a, T> {
// type Item = [u8; 4];
// fn next(&mut self) -> Option<[u8; 4]> {
// return match self.originalIterator.next() {
// Some(pixel) => {
// let rgba = pixel.2;
// let data: [u8; 4] = [ rgba[0], rgba[1], rgba[2], rgba[3] ];
// return Some(data);
// },
// None => None
// }
// }
// }
//
// impl<'a, T> ExactSizeIterator for ImageDataIterator<'a, T> {
// fn len(&self) -> usize {
// return self.length;
// }
// }
fn main() |
//TODO: list devices, choose based on user input
for p in PhysicalDevice::enumerate(&instance) {
print!("{}", p.name());
println!(", driver version: {}", p.driver_version());
}
let physical = PhysicalDevice::enumerate(&instance)
.next()
.expect("no device available");
let queue_family = physical
.queue_families()
.find(|&q| q.supports_graphics())
.expect("couldn't find a graphical queue family");
let (device, mut queues) = {
let unraw_dev_exts = vulkano::device::DeviceExtensions {
khr_swapchain: true,
.. vulkano::device::DeviceExtensions::none()
};
let mut all_dev_exts = RawDeviceExtensions::from(&unraw_dev_exts);
all_dev_exts.insert(std::ffi::CString::new("VK_KHR_storage_buffer_storage_class").unwrap());
Device::new(
physical,
&Features::none(),
all_dev_exts,
[(queue_family, 0.5)].iter().cloned(),
)
.expect("failed to create device")
};
let queue = queues.next().unwrap();
// let particles = init_particles_buffer();
// let particles_buffer =
// CpuAccessibleBuffer::from_data(device.clone(), BufferUsage::all(), particles)
// .expect("failed to create buffer");
// let shader = cs::Shader::load(device.clone()).expect("failed to create shader module");
// let compute_pipeline = Arc::new(
// ComputePipeline::new(device.clone(), &shader.main_entry_point(), &())
// .expect("failed to create compute pipeline"),
// );
// let set = Arc::new(
// PersistentDescriptorSet::start(compute_pipeline.clone(), 0)
// .add_buffer(particles_buffer.clone())
// .unwrap()
// .build()
// .unwrap(),
// );
// let command_buffer = AutoCommandBufferBuilder::new(device.clone(), queue.family())
// .unwrap()
// .dispatch([PARTICLE_COUNT as u32 / 32, 1, 1], compute_pipeline.clone(), set.clone(), ())
// .unwrap()
// .build()
// .unwrap();
let mut events_loop = EventsLoop::new();
let surface = WindowBuilder::new().build_vk_surface(&events_loop, instance.clone()).unwrap();
let window = surface.window();
let (mut swapchain, images) = {
let caps = surface.capabilities(physical)
.expect("failed to get surface capabilities");
let usage = caps.supported_usage_flags;
let alpha = caps.supported_composite_alpha.iter().next().unwrap();
let format = caps.supported_formats[0].0;
caps.supported_formats.iter().for_each(|sth| println!("{:?}", sth));
let initial_dimensions = if let Some(dimensions) = window.get_inner_size() {
let dimensions: (u32, u32) = dimensions.to_physical(window.get_hidpi_factor()).into();
[dimensions.0, dimensions.1]
} else {
return;
};
Swapchain::new(device.clone(), surface.clone(),
caps.min_image_count, format, initial_dimensions, 1, usage, &queue,
SurfaceTransform::Identity, alpha, PresentMode::Fifo, true, None)
.expect("failed to create swapchain")
};
#[derive(Default, Debug, Clone)]
struct Vertex {
position: [f32; 2]
}
let vertex_buffer = {
vulkano::impl_vertex!(Vertex, position);
CpuAccessibleBuffer::from_iter(device.clone(), BufferUsage::all(), [
Vertex { position: [-0.5, -0.5] },
Vertex { position: [ 0.5, -0.5] },
Vertex { position: [-0.5, 0.5] },
Vertex { position: [ 0.5, 0.5] },
Vertex { position: [-0.5, 0.5] },
Vertex { position: [ 0.5, -0.5] },
].iter().cloned()).unwrap()
};
// texture
let img_dim = img.dimensions();
let (autumn_texture, autumn_texture_future) = match ImmutableImage::from_iter(
img.as_rgba8().unwrap().pixels().map(|rgba| {
let bytes : [u8; 4] = [rgba[0], rgba[1], rgba[2], rgba[3]];
bytes
}),
Dimensions::Dim2d { width: img_dim.0, height: img_dim.1 },
Format::R8G8B8A8Unorm,
queue.clone()
) {
Ok(i) => i,
Err(err) => panic!("{:?}", err)
};
let sampler = Sampler::new(device.clone(), Filter::Linear, Filter::Linear,
MipmapMode::Nearest, SamplerAddressMode::Repeat, SamplerAddressMode::Repeat,
SamplerAddressMode::Repeat, 0.0, 1.0, 0.0, 0.0).unwrap();
mod square_vs {
vulkano_shaders::shader!{
ty: "vertex",
path: "./src/shader/square.vs.glsl"
}
}
mod square_fs {
vulkano_shaders::shader!{
ty: "fragment",
path: "./src/shader/square.fs.glsl"
}
}
let square_vs = square_vs::Shader::load(device.clone()).unwrap();
let square_fs = square_fs::Shader::load(device.clone()).unwrap();
let render_pass = Arc::new(vulkano::single_pass_renderpass!(
device.clone(),
attachments: {
color: {
load: Clear,
store: Store,
format: swapchain.format(),
samples: 1,
}
},
pass: {
color: [color],
depth_stencil: {}
}
).unwrap());
let pipeline = Arc::new(GraphicsPipeline::start()
.vertex_input_single_buffer::<Vertex>()
.vertex_shader(square_vs.main_entry_point(), ())
.triangle_list()
.viewports_dynamic_scissors_irrelevant(1)
.fragment_shader(square_fs.main_entry_point(), ())
.blend_alpha_blending()
.render_pass(Subpass::from(render_pass.clone(), 0).unwrap())
.build(device.clone())
.unwrap());
let set = Arc::new(PersistentDescriptorSet::start(pipeline.clone(), 0)
.add_sampled_image(autumn_texture.clone(), sampler.clone()).unwrap()
.build().unwrap()
);
let mut dynamic_state = DynamicState {
line_width: None,
viewports: None,
scissors: None,
compare_mask: None,
write_mask: None,
reference: None
};
let mut framebuffers = window_size_dependent_setup(&images, render_pass.clone(), &mut dynamic_state);
let mut recreate_swapchain = false;
let mut previous_frame_end = Box::new(sync::now(device.clone()).join(autumn_texture_future)) as Box<dyn GpuFuture>;
let t0 = time::SystemTime::now();
let mut now = t0;
let mut then;
loop {
previous_frame_end.cleanup_finished();
if recreate_swapchain {
let dimensions = if let Some(dimensions) = window.get_inner_size() {
let dimensions: (u32, u32) = dimensions.to_physical(window.get_hidpi_factor()).into();
[dimensions.0, dimensions.1]
} else {
return ;
};
let (new_swapchain, new_images) = match swapchain.recreate_with_dimension(dimensions) {
Ok(r) => r,
Err(SwapchainCreationError::UnsupportedDimensions) => continue,
Err(err) => panic!("{:?}", err)
};
swapchain = new_swapchain;
framebuffers = window_size_dependent_setup(&new_images, render_pass.clone(), &mut dynamic_state);
recreate_swapchain = false;
}
let (image_num, acquire_future) = match swapchain::acquire_next_image(swapchain.clone(), None){
Ok(r) => r,
Err(AcquireError::OutOfDate) => {
recreate_swapchain = true;
continue;
},
Err(err) => panic!("{:?}", err)
};
if image_num > 2 {
recreate_swapchain = true;
continue;
} // ugly workaround for a situation when image_num is out of bounds
then = now;
now = time::SystemTime::now();
let time = now.duration_since(t0).unwrap().as_millis() as i32;
let dtime = now.duration_since(then).unwrap().as_millis() as i32;
let push_constants = square_fs::ty::PushConstantData {
time,
dtime
};
let clear_values = vec!([0.0, 0.0, 0.0, 1.0].into());
let command_buffer = AutoCommandBufferBuilder::primary_one_time_submit(
device.clone(),
queue.family()
)
.unwrap()
.begin_render_pass(framebuffers[image_num].clone(), false, clear_values)
.unwrap()
.draw(
pipeline.clone(),
&dynamic_state,
vertex_buffer.clone(),
set.clone(),
push_constants)
.unwrap()
.end_render_pass()
.unwrap()
.build()
.unwrap();
let future = previous_frame_end.join(acquire_future)
.then_execute(queue.clone(), command_buffer).unwrap()
.then_swapchain_present(queue.clone(), swapchain.clone(), image_num)
.then_signal_fence_and_flush();
match future {
Ok(future) => {
future.wait(None).unwrap();
previous_frame_end = Box::new(future) as Box<_>;
}
Err(FlushError::OutOfDate) => {
recreate_swapchain = true;
previous_frame_end = Box::new(sync::now(device.clone())) as Box<_>;
}
Err(e) => {
println!("{:?}", e);
previous_frame_end = Box::new(sync::now(device.clone())) as Box<_>;
}
}
let mut done = false;
events_loop.poll_events(|event| {
match event {
Event::WindowEvent { event: WindowEvent::CloseRequested,.. } => done = true,
Event::WindowEvent { event: WindowEvent::Resized(_),.. } => recreate_swapchain = true,
_ => ()
}
});
if done { return; }
}
}
// fn init_particles_buffer() -> [Particle; PARTICLE_COUNT] {
// let mut rng = thread_rng();
// let mut particles = [Particle {
// pos: [0.0, 0.0],
// tail: [0.0, 0.0],
// speed: [0.0, 0.0],
// prev_pos: [0.0, 0.0],
// prev_tail: [0.0, 0.0],
// }; PARTICLE_COUNT];
// for i in 0..particles.len() {
// particles[i].pos = [rng.gen_range(-1.0, 1.0), rng.gen_range(-1.0, 1.0)];
// particles[i].tail = particles[i].pos.clone();
// particles[i].speed = [rng.gen_range(-0.1, 0.1), rng.gen_range(-0.1, 0.1)];
// }
// return particles;
// }
fn window_size_dependent_setup(
images: &[Arc<SwapchainImage<Window>>],
render_pass: Arc<dyn RenderPassAbstract + Send + Sync>,
dynamic_state: &mut DynamicState
) -> Vec<Arc<dyn FramebufferAbstract + Send + Sync>> {
let dimensions = images[0].dimensions();
let viewport = Viewport {
origin: [0.0, 0.0],
dimensions: [dimensions[0] as f32, dimensions[1] as f32],
depth_range: 0.0.. 1.0,
};
dynamic_state.viewports = Some(vec!(viewport));
images.iter().map(|image| {
Arc::new(
Framebuffer::start(render_pass.clone())
.add(image.clone()).unwrap()
.build().unwrap()
) as Arc<dyn FramebufferAbstract + Send + Sync>
}).collect::<Vec<_>>()
}
| {
let img = match image::open("./media/autumn.png") {
Ok(image) => image,
Err(err) => panic!("{:?}", err)
};
{ // stdout image info
println!("color {:?}", img.color());
println!("dimensions {:?}", img.dimensions());
// println!("first pixel {:?}", img.pixels().next().unwrap());
// println!("first pixel {:?}", img.pixels().next().map(|item| item.2).unwrap());
}
let instance = {
let inst_exts = vulkano_win::required_extensions();
Instance::new(None, &inst_exts, None).expect("failed to create instance")
}; | identifier_body |
main.rs | use std::time;
use vulkano::instance::Instance;
use vulkano::instance::PhysicalDevice;
use vulkano::pipeline::GraphicsPipeline;
use vulkano::pipeline::viewport::Viewport;
use vulkano::device::Device;
use vulkano::device::Features;
use vulkano::device::RawDeviceExtensions;
use vulkano::framebuffer::{
Framebuffer,
FramebufferAbstract,
Subpass,
RenderPassAbstract
};
use vulkano::image::SwapchainImage;
use vulkano::sampler::{
Sampler,
SamplerAddressMode,
Filter,
MipmapMode
};
use vulkano::buffer::BufferUsage;
use vulkano::buffer::CpuAccessibleBuffer;
use vulkano::command_buffer::{
AutoCommandBufferBuilder,
DynamicState
};
use vulkano::descriptor::descriptor_set::PersistentDescriptorSet;
use std::sync::Arc;
// use vulkano::pipeline::ComputePipeline;
use vulkano::sync;
use vulkano::sync::{
GpuFuture,
FlushError
};
use vulkano::format::Format;
use vulkano::image::{
Dimensions,
ImmutableImage,
};
// use rand::{
// thread_rng,
// Rng
// };
use image::{
GenericImageView,
};
use vulkano_win::VkSurfaceBuild;
use winit::{
EventsLoop,
WindowBuilder,
Window,
Event,
WindowEvent
};
use vulkano::swapchain;
use vulkano::swapchain::{
AcquireError,
Swapchain,
SurfaceTransform,
PresentMode,
SwapchainCreationError
};
mod cs {
vulkano_shaders::shader! {
ty: "compute",
path: "./src/shader/particle_update.cp.glsl"
}
}
#[derive(Copy, Clone, Debug)]
struct Particle {
pos: [f32; 2],
speed: [f32; 2],
tail: [f32; 2],
prev_pos: [f32; 2],
prev_tail: [f32; 2],
}
// struct ImageDataIterator<P> {
// originalIterator: image::buffer::Pie,
// length: usize
// }
//
// impl<T> ImageDataIterator<T> {
// fn from_dynamic_image(img: &DynamicImage) -> ImageDataIterator<T> {
// let dimensions = img.dimensions();
//
// ImageDataIterator {
// originalIterator: img.to_rgba().pixels(),
// length: ( dimensions.0 * dimensions.1 ) as usize
// }
// }
// }
//
// impl<'a, T> Iterator for ImageDataIterator<'a, T> {
// type Item = [u8; 4];
// fn next(&mut self) -> Option<[u8; 4]> {
// return match self.originalIterator.next() {
// Some(pixel) => {
// let rgba = pixel.2;
// let data: [u8; 4] = [ rgba[0], rgba[1], rgba[2], rgba[3] ];
// return Some(data);
// },
// None => None
// }
// }
// }
//
// impl<'a, T> ExactSizeIterator for ImageDataIterator<'a, T> {
// fn len(&self) -> usize {
// return self.length;
// }
// }
fn main() {
let img = match image::open("./media/autumn.png") {
Ok(image) => image,
Err(err) => panic!("{:?}", err)
};
{ // stdout image info
println!("color {:?}", img.color());
println!("dimensions {:?}", img.dimensions());
// println!("first pixel {:?}", img.pixels().next().unwrap());
// println!("first pixel {:?}", img.pixels().next().map(|item| item.2).unwrap());
}
let instance = {
let inst_exts = vulkano_win::required_extensions();
Instance::new(None, &inst_exts, None).expect("failed to create instance")
};
//TODO: list devices, choose based on user input
for p in PhysicalDevice::enumerate(&instance) {
print!("{}", p.name());
println!(", driver version: {}", p.driver_version());
}
let physical = PhysicalDevice::enumerate(&instance)
.next()
.expect("no device available");
let queue_family = physical
.queue_families()
.find(|&q| q.supports_graphics())
.expect("couldn't find a graphical queue family");
let (device, mut queues) = {
let unraw_dev_exts = vulkano::device::DeviceExtensions {
khr_swapchain: true,
.. vulkano::device::DeviceExtensions::none()
};
let mut all_dev_exts = RawDeviceExtensions::from(&unraw_dev_exts);
all_dev_exts.insert(std::ffi::CString::new("VK_KHR_storage_buffer_storage_class").unwrap());
Device::new(
physical,
&Features::none(),
all_dev_exts,
[(queue_family, 0.5)].iter().cloned(),
)
.expect("failed to create device")
};
let queue = queues.next().unwrap();
// let particles = init_particles_buffer();
// let particles_buffer =
// CpuAccessibleBuffer::from_data(device.clone(), BufferUsage::all(), particles)
// .expect("failed to create buffer");
// let shader = cs::Shader::load(device.clone()).expect("failed to create shader module");
// let compute_pipeline = Arc::new(
// ComputePipeline::new(device.clone(), &shader.main_entry_point(), &())
// .expect("failed to create compute pipeline"),
// );
// let set = Arc::new(
// PersistentDescriptorSet::start(compute_pipeline.clone(), 0)
// .add_buffer(particles_buffer.clone())
// .unwrap()
// .build()
// .unwrap(),
// );
// let command_buffer = AutoCommandBufferBuilder::new(device.clone(), queue.family())
// .unwrap()
// .dispatch([PARTICLE_COUNT as u32 / 32, 1, 1], compute_pipeline.clone(), set.clone(), ())
// .unwrap()
// .build()
// .unwrap();
let mut events_loop = EventsLoop::new();
let surface = WindowBuilder::new().build_vk_surface(&events_loop, instance.clone()).unwrap();
let window = surface.window();
let (mut swapchain, images) = {
let caps = surface.capabilities(physical)
.expect("failed to get surface capabilities");
let usage = caps.supported_usage_flags;
let alpha = caps.supported_composite_alpha.iter().next().unwrap();
let format = caps.supported_formats[0].0;
caps.supported_formats.iter().for_each(|sth| println!("{:?}", sth));
let initial_dimensions = if let Some(dimensions) = window.get_inner_size() {
let dimensions: (u32, u32) = dimensions.to_physical(window.get_hidpi_factor()).into();
[dimensions.0, dimensions.1]
} else {
return;
};
Swapchain::new(device.clone(), surface.clone(),
caps.min_image_count, format, initial_dimensions, 1, usage, &queue,
SurfaceTransform::Identity, alpha, PresentMode::Fifo, true, None)
.expect("failed to create swapchain")
};
#[derive(Default, Debug, Clone)]
struct Vertex {
position: [f32; 2]
}
let vertex_buffer = {
vulkano::impl_vertex!(Vertex, position);
CpuAccessibleBuffer::from_iter(device.clone(), BufferUsage::all(), [
Vertex { position: [-0.5, -0.5] },
Vertex { position: [ 0.5, -0.5] },
Vertex { position: [-0.5, 0.5] },
Vertex { position: [ 0.5, 0.5] },
Vertex { position: [-0.5, 0.5] },
Vertex { position: [ 0.5, -0.5] },
].iter().cloned()).unwrap()
};
// texture
let img_dim = img.dimensions();
let (autumn_texture, autumn_texture_future) = match ImmutableImage::from_iter(
img.as_rgba8().unwrap().pixels().map(|rgba| {
let bytes : [u8; 4] = [rgba[0], rgba[1], rgba[2], rgba[3]];
bytes
}),
Dimensions::Dim2d { width: img_dim.0, height: img_dim.1 },
Format::R8G8B8A8Unorm,
queue.clone()
) {
Ok(i) => i,
Err(err) => panic!("{:?}", err)
};
let sampler = Sampler::new(device.clone(), Filter::Linear, Filter::Linear,
MipmapMode::Nearest, SamplerAddressMode::Repeat, SamplerAddressMode::Repeat,
SamplerAddressMode::Repeat, 0.0, 1.0, 0.0, 0.0).unwrap();
mod square_vs {
vulkano_shaders::shader!{
ty: "vertex",
path: "./src/shader/square.vs.glsl"
}
}
mod square_fs {
vulkano_shaders::shader!{
ty: "fragment",
path: "./src/shader/square.fs.glsl"
}
}
let square_vs = square_vs::Shader::load(device.clone()).unwrap();
let square_fs = square_fs::Shader::load(device.clone()).unwrap();
let render_pass = Arc::new(vulkano::single_pass_renderpass!(
device.clone(),
attachments: {
color: {
load: Clear,
store: Store,
format: swapchain.format(),
samples: 1,
}
},
pass: {
color: [color],
depth_stencil: {}
}
).unwrap());
let pipeline = Arc::new(GraphicsPipeline::start()
.vertex_input_single_buffer::<Vertex>()
.vertex_shader(square_vs.main_entry_point(), ())
.triangle_list()
.viewports_dynamic_scissors_irrelevant(1)
.fragment_shader(square_fs.main_entry_point(), ())
.blend_alpha_blending()
.render_pass(Subpass::from(render_pass.clone(), 0).unwrap())
.build(device.clone())
.unwrap());
let set = Arc::new(PersistentDescriptorSet::start(pipeline.clone(), 0)
.add_sampled_image(autumn_texture.clone(), sampler.clone()).unwrap()
.build().unwrap()
);
let mut dynamic_state = DynamicState {
line_width: None,
viewports: None,
scissors: None,
compare_mask: None,
write_mask: None,
reference: None
};
let mut framebuffers = window_size_dependent_setup(&images, render_pass.clone(), &mut dynamic_state);
let mut recreate_swapchain = false;
let mut previous_frame_end = Box::new(sync::now(device.clone()).join(autumn_texture_future)) as Box<dyn GpuFuture>;
let t0 = time::SystemTime::now();
let mut now = t0;
let mut then;
loop {
previous_frame_end.cleanup_finished();
if recreate_swapchain {
let dimensions = if let Some(dimensions) = window.get_inner_size() {
let dimensions: (u32, u32) = dimensions.to_physical(window.get_hidpi_factor()).into();
[dimensions.0, dimensions.1]
} else {
return;
};
let (new_swapchain, new_images) = match swapchain.recreate_with_dimension(dimensions) {
Ok(r) => r,
Err(SwapchainCreationError::UnsupportedDimensions) => continue,
Err(err) => panic!("{:?}", err)
};
swapchain = new_swapchain;
framebuffers = window_size_dependent_setup(&new_images, render_pass.clone(), &mut dynamic_state);
recreate_swapchain = false;
}
let (image_num, acquire_future) = match swapchain::acquire_next_image(swapchain.clone(), None){
Ok(r) => r,
Err(AcquireError::OutOfDate) => {
recreate_swapchain = true;
continue;
},
Err(err) => panic!("{:?}", err)
};
// ugly workaround for a situation when image_num is out of bounds
if image_num > 2 {
recreate_swapchain = true;
continue;
}
then = now;
now = time::SystemTime::now();
let time = now.duration_since(t0).unwrap().as_millis() as i32;
let dtime = now.duration_since(then).unwrap().as_millis() as i32;
let push_constants = square_fs::ty::PushConstantData {
time,
dtime
};
let clear_values = vec!([0.0, 0.0, 0.0, 1.0].into());
let command_buffer = AutoCommandBufferBuilder::primary_one_time_submit(
device.clone(),
queue.family()
)
.unwrap()
.begin_render_pass(framebuffers[image_num].clone(), false, clear_values)
.unwrap()
.draw(
pipeline.clone(),
&dynamic_state,
vertex_buffer.clone(),
set.clone(),
push_constants)
.unwrap()
.end_render_pass()
.unwrap()
.build()
.unwrap();
let future = previous_frame_end.join(acquire_future)
.then_execute(queue.clone(), command_buffer).unwrap()
.then_swapchain_present(queue.clone(), swapchain.clone(), image_num)
.then_signal_fence_and_flush();
match future {
Ok(future) => {
future.wait(None).unwrap();
previous_frame_end = Box::new(future) as Box<_>;
}
Err(FlushError::OutOfDate) => {
recreate_swapchain = true;
previous_frame_end = Box::new(sync::now(device.clone())) as Box<_>;
}
Err(e) => {
println!("{:?}", e);
previous_frame_end = Box::new(sync::now(device.clone())) as Box<_>;
}
}
let mut done = false;
events_loop.poll_events(|event| {
match event {
Event::WindowEvent { event: WindowEvent::CloseRequested,.. } => done = true,
Event::WindowEvent { event: WindowEvent::Resized(_),.. } => recreate_swapchain = true,
_ => ()
}
});
if done { return; }
}
}
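The acquire/retry structure of the render loop above (on `OutOfDate`, flag a swapchain rebuild and retry rather than drawing with a stale swapchain) can be sketched without any Vulkan dependency. `Acquire` and `acquire` below are hypothetical stand-ins, not vulkano APIs:

```rust
// Dependency-free sketch of the acquire/retry control flow in the render loop.
enum Acquire {
    Ok(usize),
    OutOfDate,
}

// Pretend the very first acquire reports an out-of-date swapchain.
fn acquire(frame: usize) -> Acquire {
    if frame == 0 { Acquire::OutOfDate } else { Acquire::Ok(frame % 3) }
}

// Mirrors the loop: on OutOfDate, set the flag and retry on the next iteration.
fn simulate(frames: usize) -> Vec<usize> {
    let mut recreate_swapchain = false;
    let mut drawn = Vec::new();
    for frame in 0..frames {
        if recreate_swapchain {
            // swapchain.recreate_with_dimension(...) would run here
            recreate_swapchain = false;
        }
        match acquire(frame) {
            Acquire::Ok(image_num) => drawn.push(image_num),
            Acquire::OutOfDate => {
                recreate_swapchain = true;
                continue;
            }
        }
    }
    drawn
}

fn main() {
    assert_eq!(simulate(4), vec![1, 2, 0]);
    println!("ok");
}
```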
// fn init_particles_buffer() -> [Particle; PARTICLE_COUNT] {
// let mut rng = thread_rng();
// let mut particles = [Particle {
// pos: [0.0, 0.0],
// tail: [0.0, 0.0],
// speed: [0.0, 0.0],
// prev_pos: [0.0, 0.0],
// prev_tail: [0.0, 0.0],
// }; PARTICLE_COUNT];
// for i in 0..particles.len() {
// particles[i].pos = [rng.gen_range(-1.0, 1.0), rng.gen_range(-1.0, 1.0)];
// particles[i].tail = particles[i].pos.clone();
// particles[i].speed = [rng.gen_range(-0.1, 0.1), rng.gen_range(-0.1, 0.1)];
// }
// return particles;
// }
fn window_size_dependent_setup(
images: &[Arc<SwapchainImage<Window>>],
render_pass: Arc<dyn RenderPassAbstract + Send + Sync>,
dynamic_state: &mut DynamicState
) -> Vec<Arc<dyn FramebufferAbstract + Send + Sync>> {
let dimensions = images[0].dimensions();
let viewport = Viewport {
origin: [0.0, 0.0],
dimensions: [dimensions[0] as f32, dimensions[1] as f32],
depth_range: 0.0..1.0,
};
dynamic_state.viewports = Some(vec!(viewport));
images.iter().map(|image| {
Arc::new(
Framebuffer::start(render_pass.clone())
.add(image.clone()).unwrap()
.build().unwrap()
) as Arc<dyn FramebufferAbstract + Send + Sync>
}).collect::<Vec<_>>()
}
// body.rs
#![allow(dead_code)]
#![allow(unused_doc_comments)]
/**
* body.rs contains the Body struct and implements methods for it. A body struct contains only the
* position and velocity vectors of the body, other parameters are calculated using methods. A body
* is instantiated using using the Body::new() method, which also determines the type of orbit the
* body has at the same time. orbit_type does not have to be given manually.
*/
use nalgebra::{Matrix3, Vector3};
use std::f64::consts::PI;
use colored::*;
const DAYTOSEC: f64 = 24.0 * 3600.0;
const SOLARGM: f64 = 1.328905188132376e11;
const PI2: f64 = 2.0 * PI;
/**
* OrbitType is used to abstract away some of the functions that depend on
* the type of orbit the body is in, like Kepler's equation. That way, you
* can call one function and it will return the correct value.
*/
#[derive(Debug)]
pub enum OrbitType {
Circular,
Elliptic,
Parabolic,
Hyperbolic,
}
impl OrbitType {
/// Return the orbit type given the eccentricity; Body::new
/// uses this function to set the orbit type when an instance
/// is constructed
pub fn new(eccentricity: f64) -> OrbitType {
if eccentricity == 0.0 {
return OrbitType::Circular;
} else if eccentricity < 1.0 && eccentricity > 0.0 {
return OrbitType::Elliptic;
} else if eccentricity == 1.0 {
return OrbitType::Parabolic;
} else {
return OrbitType::Hyperbolic;
}
}
}
#[derive(Debug)]
pub struct Body {
pub position: Vector3<f64>,
pub velocity: Vector3<f64>,
pub orbit_type: OrbitType,
}
/* Adds methods to Body struct */
impl Body {
pub fn new(position: Vector3<f64>, velocity: Vector3<f64>) -> Body {
// h and e are used for determining what kind of orbit the body is currently in
let h = position.cross(&velocity);
let e = ((velocity.cross(&h) / SOLARGM) - position.normalize()).norm();
Body {
position,
velocity,
orbit_type: OrbitType::new(e),
}
}
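As a sanity check on the eccentricity that `Body::new` derives, here is a dependency-free sketch using plain arrays instead of nalgebra (the helper names are illustrative, not part of this crate): a body on a circular orbit, with speed sqrt(mu/r) perpendicular to the radius, must come out with e ≈ 0.

```rust
// Sanity check for the eccentricity computed in Body::new, without nalgebra.
const SOLARGM: f64 = 1.328905188132376e11; // same constant as above, km^3/s^2

fn cross(a: [f64; 3], b: [f64; 3]) -> [f64; 3] {
    [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ]
}

fn norm(a: [f64; 3]) -> f64 {
    (a[0] * a[0] + a[1] * a[1] + a[2] * a[2]).sqrt()
}

// e = |v x h / mu - r_hat|, the same expression Body::new evaluates
fn eccentricity(r: [f64; 3], v: [f64; 3]) -> f64 {
    let h = cross(r, v);
    let vxh = cross(v, h);
    let rn = norm(r);
    norm([
        vxh[0] / SOLARGM - r[0] / rn,
        vxh[1] / SOLARGM - r[1] / rn,
        vxh[2] / SOLARGM - r[2] / rn,
    ])
}

fn main() {
    let r = 1.496e8; // ~1 AU, in km
    let v = (SOLARGM / r).sqrt(); // circular orbital speed
    let e = eccentricity([r, 0.0, 0.0], [0.0, v, 0.0]);
    assert!(e < 1e-9, "circular orbit should give e ~ 0, got {}", e);
    println!("ok");
}
```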
pub fn radial_velocity(&self) -> Vector3<f64> {
(self.velocity.dot(&self.position) / self.position.norm_squared()) * self.position
}
pub fn tangential_velocity(&self) -> Vector3<f64> {
self.omega().cross(&self.position)
}
pub fn true_anomaly(&self) -> f64 {
let e_vec = self.eccentricity_vector();
let posit = self.position.normalize();
let val = e_vec.dot(&posit) / (e_vec.norm() * posit.norm());
if posit.dot(&self.velocity.normalize()) < 0.0 {
return 2.0 * PI - val.acos();
} else {
return val.acos();
}
}
/* The eccentricity vector points from the focus toward perigee */
pub fn eccentricity_vector(&self) -> Vector3<f64> {
let veloc = self.velocity;
let posit = self.position;
let h = self.angular_momentum();
(veloc.cross(&h) / SOLARGM) - posit.normalize()
}
pub fn angular_momentum(&self) -> Vector3<f64> {
self.position.cross(&self.velocity)
}
pub fn total_energy(&self) -> f64 {
let posit = self.position.norm();
let veloc = self.velocity.norm();
0.5 * veloc.powi(2) - (SOLARGM / posit)
}
pub fn omega(&self) -> Vector3<f64> {
self.angular_momentum() / self.position.norm_squared()
}
pub fn frame_rotation_rate(&self) -> f64 {
self.omega().norm()
}
pub fn position_at_angle(&self, angle: f64) -> Vector3<f64> {
let e = self.eccentricity();
let numer = self.angular_momentum().norm_squared() / SOLARGM;
let denom = 1_f64 + (e * (angle).cos());
let radius = numer / denom;
Vector3::new(radius, 0.0, 0.0)
}
pub fn velocity_at_angle(&self, angle: f64) -> Vector3<f64> {
let p = self.orbital_parameter();
let e = self.eccentricity();
let h = self.angular_momentum().norm_squared();
Vector3::new(
(h / p) * e * angle.sin(),
(h / p) * (1_f64 + e * angle.cos()),
0.0,
)
}
pub fn position_and_velocity(&self, angle: f64) -> (Vector3<f64>, Vector3<f64>) {
let r = self.position_at_angle(angle);
let v = self.velocity_at_angle(angle);
let tht = angle - self.true_anomaly();
let trans = Matrix3::from_rows(&[
Vector3::new(tht.cos(), -tht.sin(), 0.0).transpose(),
Vector3::new(tht.sin(), tht.cos(), 0.0).transpose(),
Vector3::new(0.0, 0.0, 1.0).transpose(),
]);
(trans * r, trans * v)
}
// Angle between this body's position vector and another body's
pub fn angle_to(&self, other: &Body) -> f64 {
(self.position.dot(&other.position) / (self.position.norm() * other.position.norm())).acos()
}
/* Return a transformation matrix constructed from body's orbit in inertial frame */
pub fn make_frame(&self) -> Matrix3<f64> {
let e_r = self.position.normalize();
let e_h = self.angular_momentum().normalize();
let e_tht = e_h.cross(&e_r);
Matrix3::from_rows(&[e_r.transpose(), e_tht.transpose(), e_h.transpose()])
}
pub fn semi_major_axis(&self) -> f64 {
let ang_moment = self.angular_momentum().norm();
let e = self.eccentricity();
ang_moment.powi(2) / (SOLARGM * (1_f64 - e.powi(2)))
}
pub fn orbital_period(&self) -> f64 {
2_f64 * PI * (self.semi_major_axis().powi(3) / SOLARGM).sqrt()
}
pub fn orbital_parameter(&self) -> f64 {
let e = self.eccentricity();
self.semi_major_axis() * (1.0 - e.powi(2))
}
pub fn eccentric_anomaly(&self) -> f64 {
let e = self.eccentricity();
let theta = self.true_anomaly();
2.0 * ((theta / 2.0).tan() / ((1.0 + e) / (1.0 - e)).sqrt()).atan()
}
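For reference, `eccentric_anomaly` implements the half-angle relation between the true anomaly θ and the eccentric anomaly E:

```latex
\tan\frac{E}{2} \;=\; \sqrt{\frac{1 - e}{1 + e}}\,\tan\frac{\theta}{2}
```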
pub fn time_since_periapsis(&self) -> f64 {
let t_anom = self.true_anomaly();
let e_anom = self.true_to_eccentric(t_anom);
let a = self.semi_major_axis();
let e = self.eccentricity();
(a.powi(3) / SOLARGM).sqrt() * (e_anom - e * e_anom.sin())
}
pub fn eccentricity(&self) -> f64 {
self.eccentricity_vector().norm()
}
pub fn inclination(&self) -> f64 {
let h = self.angular_momentum();
(h[2] / h.norm()).acos() // h[2] is the z component of the vector
}
pub fn ascending_node(&self) -> Vector3<f64> {
let k = Vector3::new(0.0, 0.0, 1.0);
k.cross(&self.angular_momentum())
}
pub fn argument_of_periapsis(&self) -> f64 {
let n = self.ascending_node();
let e = self.eccentricity_vector();
let omega = (n.dot(&e) / (n.norm() * e.norm())).acos();
if e[2] < 0.0 {
PI2 - omega
} else {
omega
}
}
pub fn argument_of_ascending_node(&self) -> f64 {
let n = self.ascending_node();
let n_x = n[0];
let n_y = n[1];
if n_y >= 0.0 {
(n_x / n.norm()).acos()
} else {
PI2 - (n_x / n.norm()).acos()
}
}
pub fn true_to_eccentric(&self, t_anom: f64) -> f64 {
let a = self.semi_major_axis();
let e = self.eccentricity();
let b = a * (1.0 - e.powi(2)).sqrt();
let p = self.orbital_parameter();
let r = p / (1.0 + e * t_anom.cos());
let c = (a * e + r * t_anom.cos()) / a;
let s = (r / b) * t_anom.sin();
return s.atan2(c);
}
pub fn true_anomaly_at_time(&self, time: f64) -> f64 {
let t_peri = self.time_since_periapsis();
let m_anom = self.mean_anomaly((time * DAYTOSEC) + t_peri);
let angle = self.eccentric_from_mean(m_anom);
return PI2 - self.eccentric_to_true_anomaly(angle);
}
/// The eccentric anomaly at a certain time
pub fn eccentric_from_mean(&self, m_anom: f64) -> f64 {
match self.kepler(m_anom) {
Ok(num) => num,
Err(e) => {
eprintln!("{}: {}\n", "Invalid Orbit".red(), e);
return std::f64::NAN;
}
}
}
/// Return the eccentric anomaly using the appropriate Kepler equation
pub fn kepler(&self, m_anom: f64) -> Result<f64, &str> {
let e = self.eccentricity();
match &self.orbit_type {
OrbitType::Elliptic => Ok(elliptic_kepler(m_anom, e)),
OrbitType::Hyperbolic => Ok(hyper_kepler(m_anom, e)),
OrbitType::Circular => Err("cannot use Kepler's equation with a circular orbit."),
OrbitType::Parabolic => Err("cannot use Kepler's equation with a parabolic orbit."),
}
}
pub fn eccentric_to_true_anomaly(&self, e_anom: f64) -> f64 {
let e = self.eccentricity();
// let sqrt_val = ((1.0 + e) / (1.0 - e)).sqrt();
// 2.0 * (sqrt_val * (e_anom / 2.0).tan()).atan() + PI2
((e_anom.cos() - e) / (1.0 - e * e_anom.cos())).acos()
}
/// Return the mean anomaly at a certain time from current position
pub fn mean_anomaly(&self, t: f64) -> f64 {
let n = (SOLARGM / self.semi_major_axis().powi(3)).sqrt();
n * t
}
}
/**
 * The Kepler solvers below. Body matches on its orbit type
 * and dispatches to the correct function to return the eccentric anomaly.
*/
fn elliptic_kepler(nt: f64, eccen: f64) -> f64 {
let tolerance = 1e-200;
let kep = |e: f64| e - eccen * e.sin() - nt;
let kep_d = |e: f64| 1.0 - eccen * e.cos();
let mut e_0 = 0.0;
let mut e = e_0 - (kep(e_0) / kep_d(e_0));
while (e - e_0).abs() > tolerance {
e_0 = e;
e = e_0 - (kep(e_0) / kep_d(e_0));
}
return e;
}
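`elliptic_kepler` is a plain Newton–Raphson iteration on Kepler's equation; with mean anomaly M = nt and eccentricity e, it solves

```latex
M = E - e\sin E,
\qquad
E_{k+1} = E_k - \frac{E_k - e\sin E_k - M}{1 - e\cos E_k}
```

iterating until successive estimates agree to within the tolerance.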
fn hyper_kepler(nt: f64, eccen: f64) -> f64 {
let tolerance = 1e-100;
let kep = |e: f64| eccen * e.sinh() - nt - e;
let kep_d = |e: f64| eccen * e.cosh() - 1.0;
let mut e_0 = nt;
let mut e = e_0 - kep(e_0) / kep_d(e_0);
while (e - e_0).abs() > tolerance {
e_0 = e;
e = e_0 - kep(e_0) / kep_d(e_0);
}
return e;
}
pub fn three_one_three_transform(
arg_of_peri: f64,
inclination: f64,
arg_of_AN: f64,
) -> Matrix3<f64> {
let omega = arg_of_peri;
let inc = inclination;
let tht = arg_of_AN;
let m_c = Matrix3::new(
omega.cos(),
omega.sin(),
0.0,
-omega.sin(),
omega.cos(),
0.0,
0.0,
0.0,
1.0,
);
let m_b = Matrix3::new(
1.0,
0.0,
0.0,
0.0,
inc.cos(),
inc.sin(),
0.0,
-inc.sin(),
inc.cos(),
);
let m_a = Matrix3::new(
tht.cos(),
tht.sin(),
0.0,
-tht.sin(),
tht.cos(),
0.0,
0.0,
0.0,
1.0,
);
return m_c * m_b * m_a;
}
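`three_one_three_transform` composes the classical 3-1-3 Euler sequence; with the row-major matrices built above, the product it returns is

```latex
R(\Omega, i, \omega) = R_3(\omega)\, R_1(i)\, R_3(\Omega),
\qquad
R_3(\alpha) =
\begin{pmatrix}
\cos\alpha & \sin\alpha & 0 \\
-\sin\alpha & \cos\alpha & 0 \\
0 & 0 & 1
\end{pmatrix},
\quad
R_1(\alpha) =
\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos\alpha & \sin\alpha \\
0 & -\sin\alpha & \cos\alpha
\end{pmatrix}
```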
body.rs | #![allow(dead_code)]
#![allow(unused_doc_comments)]
/**
* body.rs contains the Body struct and implements methods for it. A body struct contains only the
* position and velocity vectors of the body, other parameters are calculated using methods. A body
* is instantiated using using the Body::new() method, which also determines the type of orbit the
* body has at the same time. orbit_type does not have to be given manually.
*/
use nalgebra::{Matrix3, Vector3};
use std::f64::consts::PI;
use colored::*;
const DAYTOSEC: f64 = 24.0 * 3600.0;
const SOLARGM: f64 = 1.328905188132376e11;
const PI2: f64 = 2.0 * PI;
/**
* OrbitType is used to abstract away some of the functions that depend on
* the type of orbit the body is in, like kepler's equation. That way, you
* can call one function and it will return the correct value
*/
#[derive(Debug)]
pub enum OrbitType {
Circular,
Elliptic,
Parabolic,
Hyperbolic,
}
impl OrbitType {
/// Return the orbit type given the eccentricity, Body::new
/// uses this function to set the orbit type when an instance
/// is constructed
pub fn new(eccentricity: f64) -> OrbitType {
if eccentricity == 0.0 {
return OrbitType::Circular;
} else if eccentricity < 1.0 && eccentricity > 0.0 {
return OrbitType::Elliptic;
} else if eccentricity == 1.0 {
return OrbitType::Parabolic;
} else {
return OrbitType::Hyperbolic;
}
}
}
#[derive(Debug)]
pub struct Body {
pub position: Vector3<f64>,
pub velocity: Vector3<f64>,
pub orbit_type: OrbitType,
}
/* Adds methods to Body struct */
impl Body {
pub fn new(position: Vector3<f64>, velocity: Vector3<f64>) -> Body {
// h and e are used for determining what kind of orbit the body is currently in
let h = position.cross(&velocity);
let e = ((velocity.cross(&h) / SOLARGM) - position.normalize()).norm();
Body {
position: position,
velocity: velocity,
orbit_type: OrbitType::new(e),
}
}
pub fn radial_velocity(&self) -> Vector3<f64> {
(self.velocity.dot(&self.position) / self.position.norm_squared()) * self.position
}
    pub fn tangential_velocity(&self) -> Vector3<f64> {
        self.omega().cross(&self.position)
    }
pub fn true_anomaly(&self) -> f64 {
let e_vec = self.eccentricity_vector();
let posit = self.position.normalize();
let val = e_vec.dot(&posit) / (e_vec.norm() * posit.norm());
if posit.dot(&self.velocity.normalize()) < 0.0 {
return 2.0 * PI - val.acos();
} else {
return val.acos();
}
}
/* points from focus to perigee if I'm not mistaken */
pub fn eccentricity_vector(&self) -> Vector3<f64> {
let veloc = self.velocity;
let posit = self.position;
let h = self.angular_momentum();
(veloc.cross(&h) / SOLARGM) - posit.normalize()
}
pub fn angular_momentum(&self) -> Vector3<f64> {
self.position.cross(&self.velocity)
}
pub fn total_energy(&self) -> f64 {
let posit = self.position.norm();
let veloc = self.velocity.norm();
0.5 * veloc.powi(2) - (SOLARGM / posit)
}
pub fn omega(&self) -> Vector3<f64> {
self.angular_momentum() / self.position.norm_squared()
}
pub fn frame_rotation_rate(&self) -> f64 {
self.omega().norm()
}
pub fn position_at_angle(&self, angle: f64) -> Vector3<f64> {
let e = self.eccentricity();
let numer = self.angular_momentum().norm_squared() / SOLARGM;
let denom = 1_f64 + (e * (angle).cos());
let radius = numer / denom;
Vector3::new(radius, 0.0, 0.0)
}
pub fn velocity_at_angle(&self, angle: f64) -> Vector3<f64> {
let p = self.orbital_parameter();
let e = self.eccentricity();
let h = self.angular_momentum().norm_squared();
Vector3::new(
(h / p) * e * angle.sin(),
(h / p) * (1_f64 + e * angle.cos()),
0.0,
)
}
pub fn position_and_velocity(&self, angle: f64) -> (Vector3<f64>, Vector3<f64>) {
let r = self.position_at_angle(angle);
let v = self.velocity_at_angle(angle);
let tht = angle - self.true_anomaly();
let trans = Matrix3::from_rows(&[
Vector3::new(tht.cos(), -tht.sin(), 0.0).transpose(),
Vector3::new(tht.sin(), tht.cos(), 0.0).transpose(),
Vector3::new(0.0, 0.0, 1.0).transpose(),
]);
(trans * r, trans * v)
}
// Angle to other body, keep getting the wrong thing anyway, tried everything
pub fn angle_to(&self, other: &Body) -> f64 {
(self.position.dot(&other.position) / (self.position.norm() * other.position.norm())).acos()
}
/* Return a transformation matrix constructed from body's orbit in inertial frame */
pub fn make_frame(&self) -> Matrix3<f64> {
let e_r = self.position.normalize();
let e_h = self.angular_momentum().normalize();
let e_tht = e_h.cross(&e_r);
Matrix3::from_rows(&[e_r.transpose(), e_tht.transpose(), e_h.transpose()])
}
pub fn semi_major_axis(&self) -> f64 {
let ang_moment = self.angular_momentum().norm();
let e = self.eccentricity();
ang_moment.powi(2) / (SOLARGM * (1_f64 - e.powi(2)))
}
pub fn orbital_period(&self) -> f64 {
2_f64 * PI * (self.semi_major_axis().powi(3) / SOLARGM).sqrt()
}
pub fn orbital_parameter(&self) -> f64 {
let e = self.eccentricity();
self.semi_major_axis() * (1.0 - e.powi(2))
}
pub fn eccentric_anomaly(&self) -> f64 {
let e = self.eccentricity();
let theta = self.true_anomaly();
2.0 * ((theta / 2.0).tan() / ((1.0 + e) / (1.0 - e)).sqrt()).atan()
}
pub fn time_since_periapsis(&self) -> f64 {
let t_anom = self.true_anomaly();
let e_anom = self.true_to_eccentric(t_anom);
let a = self.semi_major_axis();
let e = self.eccentricity();
(a.powi(3) / SOLARGM).sqrt() * (e_anom - e * e_anom.sin())
}
pub fn eccentricity(&self) -> f64 {
self.eccentricity_vector().norm()
}
pub fn inclination(&self) -> f64 {
let h = self.angular_momentum();
(h[2] / h.norm()).acos() // h[2] is the z component of the vector
}
pub fn ascending_node(&self) -> Vector3<f64> {
let k = Vector3::new(0.0, 0.0, 1.0);
k.cross(&self.angular_momentum())
}
pub fn argument_of_periapsis(&self) -> f64 {
let n = self.ascending_node();
let e = self.eccentricity_vector();
let omega = (n.dot(&e) / (n.norm() * e.norm())).acos();
if e[2] < 0.0 {
PI2 - omega
} else {
omega
}
}
pub fn argument_of_ascending_node(&self) -> f64 {
let n = self.ascending_node();
let n_x = n[0];
let n_y = n[1];
if n_y >= 0.0 {
(n_x / n.norm()).acos()
} else {
PI2 - (n_x / n.norm()).acos()
}
}
pub fn true_to_eccentric(&self, t_anom: f64) -> f64 {
let a = self.semi_major_axis();
let e = self.eccentricity();
let b = a * (1.0 - e.powi(2)).sqrt();
let p = self.orbital_parameter();
let r = p / (1.0 + e * t_anom.cos());
let c = (a * e + r * t_anom.cos()) / a;
let s = (r / b) * t_anom.sin();
return s.atan2(c);
}
pub fn true_anomaly_at_time(&self, time: f64) -> f64 {
let t_peri = self.time_since_periapsis();
let m_anom = self.mean_anomaly((time * DAYTOSEC) + t_peri);
let angle = self.eccentric_from_mean(m_anom);
return PI2 - self.eccentric_to_true_anomaly(angle);
}
/// The eccentric anomaly at a certain time
pub fn eccentric_from_mean(&self, m_anom: f64) -> f64 {
match self.kepler(m_anom) {
Ok(num) => num,
Err(e) => {
eprintln!("{}: {}\n", "Invalid Orbit".red(), e);
return std::f64::NAN;
}
}
}
/// Return the eccentric anomaly using the appropriate Kepler equation
pub fn kepler(&self, m_anom: f64) -> Result<f64, &str> {
let e = self.eccentricity();
match &self.orbit_type {
OrbitType::Elliptic => Ok(elliptic_kepler(m_anom, e)),
OrbitType::Hyperbolic => Ok(hyper_kepler(m_anom, e)),
            OrbitType::Circular => Err("cannot use Kepler's equation with a circular orbit."),
OrbitType::Parabolic => Err("cannot use Kepler's equation with a parabolic orbit."),
}
}
pub fn eccentric_to_true_anomaly(&self, e_anom: f64) -> f64 {
let e = self.eccentricity();
// let sqrt_val = ((1.0 + e) / (1.0 - e)).sqrt();
// 2.0 * (sqrt_val * (e_anom / 2.0).tan()).atan() + PI2
((e_anom.cos() - e) / (1.0 - e * e_anom.cos())).acos()
}
/// Return the mean anomaly at a certain time from current position
pub fn mean_anomaly(&self, t: f64) -> f64 {
let n = (SOLARGM / self.semi_major_axis().powi(3)).sqrt();
n * t
}
}
/**
 * Kepler-equation solvers. Body matches on its orbit type and dispatches
 * to the appropriate solver to compute the eccentric anomaly.
*/
fn elliptic_kepler(nt: f64, eccen: f64) -> f64 {
let tolerance = 1e-200;
let kep = |e: f64| e - eccen * e.sin() - nt;
let kep_d = |e: f64| 1.0 - eccen * e.cos();
let mut e_0 = 0.0;
let mut e = e_0 - (kep(e_0) / kep_d(e_0));
while (e - e_0).abs() > tolerance {
e_0 = e;
e = e_0 - (kep(e_0) / kep_d(e_0));
}
return e;
}
fn hyper_kepler(nt: f64, eccen: f64) -> f64 {
let tolerance = 1e-100;
let kep = |e: f64| eccen * e.sinh() - nt - e;
let kep_d = |e: f64| eccen * e.cosh() - 1.0;
let mut e_0 = nt;
let mut e = e_0 - kep(e_0) / kep_d(e_0);
while (e - e_0).abs() > tolerance {
e_0 = e;
e = e_0 - kep(e_0) / kep_d(e_0);
}
return e;
}
pub fn three_one_three_transform(
arg_of_peri: f64,
inclination: f64,
arg_of_AN: f64,
) -> Matrix3<f64> {
let omega = arg_of_peri;
let inc = inclination;
let tht = arg_of_AN;
let m_c = Matrix3::new(
omega.cos(),
omega.sin(),
0.0,
-omega.sin(),
omega.cos(),
0.0,
0.0,
0.0,
1.0,
);
let m_b = Matrix3::new(
1.0,
0.0,
0.0,
0.0,
inc.cos(),
inc.sin(),
0.0,
-inc.sin(),
inc.cos(),
);
let m_a = Matrix3::new(
tht.cos(),
tht.sin(),
0.0,
-tht.sin(),
tht.cos(),
0.0,
0.0,
0.0,
1.0,
);
return m_c * m_b * m_a;
}
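`three_one_three_transform` composes three elementary rotations (about z, then x, then z again). A dependency-free sketch with plain 3x3 arrays (a stand-in for the `nalgebra` version above; `mat_mul` and the helper names are invented here) that checks the two properties one expects: zero angles give the identity, and any angle triple gives an orthonormal matrix.

```rust
type Mat3 = [[f64; 3]; 3];

// Row-major 3x3 matrix product.
fn mat_mul(a: &Mat3, b: &Mat3) -> Mat3 {
    let mut c = [[0.0; 3]; 3];
    for i in 0..3 {
        for j in 0..3 {
            for k in 0..3 {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    c
}

// Elementary rotation about the z axis, same sign convention as above.
fn rot_z(t: f64) -> Mat3 {
    [[t.cos(), t.sin(), 0.0], [-t.sin(), t.cos(), 0.0], [0.0, 0.0, 1.0]]
}

// Elementary rotation about the x axis.
fn rot_x(t: f64) -> Mat3 {
    [[1.0, 0.0, 0.0], [0.0, t.cos(), t.sin()], [0.0, -t.sin(), t.cos()]]
}

/// Same composition order as `three_one_three_transform`: R3(w) * R1(i) * R3(W).
fn three_one_three(omega: f64, inc: f64, raan: f64) -> Mat3 {
    mat_mul(&mat_mul(&rot_z(omega), &rot_x(inc)), &rot_z(raan))
}

/// Max absolute deviation of M * M^T from the identity.
fn orthonormality_error(m: &Mat3) -> f64 {
    let mut err: f64 = 0.0;
    for i in 0..3 {
        for j in 0..3 {
            let mut dot = 0.0;
            for k in 0..3 {
                dot += m[i][k] * m[j][k];
            }
            let expected = if i == j { 1.0 } else { 0.0 };
            err = err.max((dot - expected).abs());
        }
    }
    err
}

fn main() {
    let m = three_one_three(0.3, 1.1, 2.0);
    println!("orthonormality error: {:e}", orthonormality_error(&m));
}
```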
// file: object_ptr.rs
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
use std::convert::TryFrom;
use std::ffi::CString;
use std::ptr::NonNull;
use std::sync::atomic::AtomicI32;
use tvm_sys::ffi::{self, TVMObjectFree, TVMObjectRetain, TVMObjectTypeKey2Index};
use tvm_sys::{ArgValue, RetValue};
use crate::errors::Error;
type Deleter = unsafe extern "C" fn(object: *mut Object) -> ();
#[derive(Debug)]
#[repr(C)]
pub struct Object {
pub type_index: u32,
// TODO(@jroesch): pretty sure Rust and C++ atomics are the same, but not sure.
// NB: in general we should not touch this in Rust.
pub(self) ref_count: AtomicI32,
pub fdeleter: Deleter,
}
unsafe extern "C" fn delete<T: IsObject>(object: *mut Object) {
let typed_object: *mut T = std::mem::transmute(object);
T::typed_delete(typed_object);
}
fn derived_from(child_type_index: u32, parent_type_index: u32) -> bool {
let mut is_derived = 0;
crate::check_call!(ffi::TVMObjectDerivedFrom(
child_type_index,
parent_type_index,
&mut is_derived
));
if is_derived == 0 {
false
} else {
true
}
}
impl Object {
fn new(type_index: u32, deleter: Deleter) -> Object {
Object {
type_index,
// Note: do not touch this field directly again, this is
// a critical section, we write a 1 to the atomic which will now
// be managed by the C++ atomics.
            // In the future we should probably use C-atomics.
ref_count: AtomicI32::new(0),
fdeleter: deleter,
}
}
fn get_type_index<T: IsObject>() -> u32 {
let type_key = T::TYPE_KEY;
let cstring = CString::new(type_key).expect("type key must not contain null characters");
if type_key == "Object" {
return 0;
} else {
let mut index = 0;
unsafe {
let index_ptr = std::mem::transmute(&mut index);
                if TVMObjectTypeKey2Index(cstring.as_ptr(), index_ptr) != 0 {
                    panic!("{}", crate::get_last_error())
}
}
return index;
}
}
pub fn base_object<T: IsObject>() -> Object {
let index = Object::get_type_index::<T>();
Object::new(index, delete::<T>)
}
pub(self) fn inc_ref(&self) {
unsafe {
let raw_ptr = std::mem::transmute(self);
assert_eq!(TVMObjectRetain(raw_ptr), 0);
}
}
pub(self) fn dec_ref(&self) {
unsafe {
let raw_ptr = std::mem::transmute(self);
assert_eq!(TVMObjectFree(raw_ptr), 0);
}
}
}
pub unsafe trait IsObject {
const TYPE_KEY: &'static str;
fn as_object<'s>(&'s self) -> &'s Object;
unsafe extern "C" fn typed_delete(object: *mut Self) {
let object = Box::from_raw(object);
drop(object)
}
}
unsafe impl IsObject for Object {
const TYPE_KEY: &'static str = "Object";
fn as_object<'s>(&'s self) -> &'s Object {
self
}
}
#[repr(C)]
pub struct ObjectPtr<T: IsObject> {
pub ptr: NonNull<T>,
}
fn inc_ref<T: IsObject>(ptr: NonNull<T>) {
unsafe { ptr.as_ref().as_object().inc_ref() }
}
fn dec_ref<T: IsObject>(ptr: NonNull<T>) {
unsafe { ptr.as_ref().as_object().dec_ref() }
}
impl ObjectPtr<Object> {
fn from_raw(object_ptr: *mut Object) -> Option<ObjectPtr<Object>> {
let non_null = NonNull::new(object_ptr);
non_null.map(|ptr| ObjectPtr { ptr })
}
}
impl<T: IsObject> Clone for ObjectPtr<T> {
fn clone(&self) -> Self {
inc_ref(self.ptr);
ObjectPtr { ptr: self.ptr }
}
}
impl<T: IsObject> Drop for ObjectPtr<T> {
fn drop(&mut self) {
dec_ref(self.ptr);
}
}
impl<T: IsObject> ObjectPtr<T> {
pub fn leak<'a>(object_ptr: ObjectPtr<T>) -> &'a mut T
where
T: 'a,
{
unsafe { &mut *std::mem::ManuallyDrop::new(object_ptr).ptr.as_ptr() }
}
pub fn new(object: T) -> ObjectPtr<T> {
let object_ptr = Box::new(object);
let object_ptr = Box::leak(object_ptr);
let ptr = NonNull::from(object_ptr);
inc_ref(ptr);
ObjectPtr { ptr }
}
pub fn count(&self) -> i32 {
// need to do atomic read in C++
// ABI compatible atomics is funky/hard.
self.as_object()
.ref_count
.load(std::sync::atomic::Ordering::SeqCst)
}
fn as_object<'s>(&'s self) -> &'s Object {
unsafe { self.ptr.as_ref().as_object() }
}
pub fn upcast(&self) -> ObjectPtr<Object> {
ObjectPtr {
ptr: self.ptr.cast(),
}
}
    pub fn downcast<U: IsObject>(&self) -> Result<ObjectPtr<U>, Error> {
let child_index = Object::get_type_index::<U>();
let object_index = self.as_object().type_index;
let is_derived = if child_index == object_index {
true
} else {
// TODO(@jroesch): write tests
derived_from(object_index, child_index)
};
if is_derived {
Ok(ObjectPtr {
ptr: self.ptr.cast(),
})
} else {
Err(Error::downcast("TODOget_type_key".into(), U::TYPE_KEY))
}
}
}
impl<T: IsObject> std::ops::Deref for ObjectPtr<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
unsafe { self.ptr.as_ref() }
}
}
impl<'a, T: IsObject> From<ObjectPtr<T>> for RetValue {
fn from(object_ptr: ObjectPtr<T>) -> RetValue {
let raw_object_ptr = ObjectPtr::leak(object_ptr);
let void_ptr = unsafe { std::mem::transmute(raw_object_ptr) };
RetValue::ObjectHandle(void_ptr)
}
}
impl<'a, T: IsObject> TryFrom<RetValue> for ObjectPtr<T> {
type Error = Error;
fn try_from(ret_value: RetValue) -> Result<ObjectPtr<T>, Self::Error> {
match ret_value {
RetValue::ObjectHandle(handle) => {
let handle: *mut Object = unsafe { std::mem::transmute(handle) };
let optr = ObjectPtr::from_raw(handle).ok_or(Error::Null)?;
optr.downcast()
}
_ => Err(Error::downcast(format!("{:?}", ret_value), "ObjectHandle")),
}
}
}
impl<'a, T: IsObject> From<ObjectPtr<T>> for ArgValue<'a> {
fn from(object_ptr: ObjectPtr<T>) -> ArgValue<'a> {
let raw_object_ptr = ObjectPtr::leak(object_ptr);
let void_ptr = unsafe { std::mem::transmute(raw_object_ptr) };
ArgValue::ObjectHandle(void_ptr)
}
}
impl<'a, T: IsObject> TryFrom<ArgValue<'a>> for ObjectPtr<T> {
type Error = Error;
fn try_from(arg_value: ArgValue<'a>) -> Result<ObjectPtr<T>, Self::Error> {
match arg_value {
ArgValue::ObjectHandle(handle) => {
let handle = unsafe { std::mem::transmute(handle) };
let optr = ObjectPtr::from_raw(handle).ok_or(Error::Null)?;
optr.downcast()
}
_ => Err(Error::downcast(format!("{:?}", arg_value), "ObjectHandle")),
}
}
}
impl<'a, T: IsObject> TryFrom<&ArgValue<'a>> for ObjectPtr<T> {
type Error = Error;
fn try_from(arg_value: &ArgValue<'a>) -> Result<ObjectPtr<T>, Self::Error> {
match arg_value {
ArgValue::ObjectHandle(handle) => {
let handle = unsafe { std::mem::transmute(handle) };
let optr = ObjectPtr::from_raw(handle).ok_or(Error::Null)?;
optr.downcast()
}
_ => Err(Error::downcast(format!("{:?}", arg_value), "ObjectHandle")),
}
}
}
#[cfg(test)]
mod tests {
use super::{Object, ObjectPtr};
use anyhow::{ensure, Result};
use std::convert::TryInto;
use tvm_sys::{ArgValue, RetValue};
#[test]
fn test_new_object() -> anyhow::Result<()> {
let object = Object::base_object::<Object>();
let ptr = ObjectPtr::new(object);
assert_eq!(ptr.count(), 1);
Ok(())
}
#[test]
fn roundtrip_retvalue() -> Result<()> {
let ptr = ObjectPtr::new(Object::base_object::<Object>());
let ret_value: RetValue = ptr.clone().into();
let ptr2: ObjectPtr<Object> = ret_value.try_into()?;
ensure!(
ptr.type_index == ptr2.type_index,
"type indices do not match"
);
ensure!(
ptr.fdeleter == ptr2.fdeleter,
"objects have different deleters"
);
Ok(())
}
#[test]
fn roundtrip_argvalue() -> Result<()> {
let ptr = ObjectPtr::new(Object::base_object::<Object>());
let arg_value: ArgValue = ptr.clone().into();
let ptr2: ObjectPtr<Object> = arg_value.try_into()?;
ensure!(
ptr.type_index == ptr2.type_index,
"type indices do not match"
);
ensure!(
ptr.fdeleter == ptr2.fdeleter,
"objects have different deleters"
);
Ok(())
}
fn test_fn(o: ObjectPtr<Object>) -> ObjectPtr<Object> {
assert_eq!(o.count(), 2);
return o;
}
#[test]
fn test_ref_count_boundary() {
use super::*;
use crate::function::{register, Function, Result};
let ptr = ObjectPtr::new(Object::base_object::<Object>());
let stay = ptr.clone();
assert_eq!(ptr.count(), 2);
register(test_fn, "my_func").unwrap();
let func = Function::get("my_func").unwrap();
let func = func.to_boxed_fn::<dyn Fn(ObjectPtr<Object>) -> Result<ObjectPtr<Object>>>();
func(ptr).unwrap();
assert_eq!(stay.count(), 1);
}
}
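`ObjectPtr` is an intrusive smart pointer: the count lives inside the object and `Clone`/`Drop` adjust it through the FFI. The same shape can be modeled in pure Rust (a toy stand-in — `Toy`/`ToyPtr` are invented names, and there is no deleter or FFI here) to show why the paired `Clone for ObjectPtr` / `Drop for ObjectPtr` impls keep the count balanced:

```rust
use std::ptr::NonNull;
use std::sync::atomic::{AtomicI32, Ordering};

// Toy intrusive refcounted object: the counter lives in the object itself,
// just as `Object.ref_count` does above.
struct Toy {
    ref_count: AtomicI32,
    value: i32,
}

struct ToyPtr {
    ptr: NonNull<Toy>,
}

impl ToyPtr {
    fn new(value: i32) -> ToyPtr {
        let boxed = Box::leak(Box::new(Toy {
            ref_count: AtomicI32::new(1), // creation counts as the first reference
            value,
        }));
        ToyPtr { ptr: NonNull::from(boxed) }
    }

    fn count(&self) -> i32 {
        unsafe { self.ptr.as_ref().ref_count.load(Ordering::SeqCst) }
    }

    fn value(&self) -> i32 {
        unsafe { self.ptr.as_ref().value }
    }
}

impl Clone for ToyPtr {
    fn clone(&self) -> ToyPtr {
        unsafe { self.ptr.as_ref().ref_count.fetch_add(1, Ordering::SeqCst) };
        ToyPtr { ptr: self.ptr }
    }
}

impl Drop for ToyPtr {
    fn drop(&mut self) {
        let prev = unsafe { self.ptr.as_ref().ref_count.fetch_sub(1, Ordering::SeqCst) };
        if prev == 1 {
            // Last reference gone: reconstruct the Box and free the object --
            // the role played by `fdeleter` in the real Object.
            unsafe { drop(Box::from_raw(self.ptr.as_ptr())) };
        }
    }
}

fn main() {
    let a = ToyPtr::new(42);
    let b = a.clone();
    println!("count with two handles: {}", a.count()); // 2
    drop(b);
    println!("count after dropping one: {}", a.count()); // 1
}
```

Unlike the real `ObjectPtr`, the toy frees on the Rust side; in the TVM bindings that decision belongs to `TVMObjectFree`, which is why `dec_ref` goes through the FFI.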
// file: object_ptr.rs
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
use std::convert::TryFrom;
use std::ffi::CString;
use std::ptr::NonNull;
use std::sync::atomic::AtomicI32;
use tvm_sys::ffi::{self, TVMObjectFree, TVMObjectRetain, TVMObjectTypeKey2Index};
use tvm_sys::{ArgValue, RetValue};
use crate::errors::Error;
type Deleter = unsafe extern "C" fn(object: *mut Object) -> ();
#[derive(Debug)]
#[repr(C)]
pub struct Object {
pub type_index: u32,
// TODO(@jroesch): pretty sure Rust and C++ atomics are the same, but not sure.
// NB: in general we should not touch this in Rust.
pub(self) ref_count: AtomicI32,
pub fdeleter: Deleter,
}
unsafe extern "C" fn delete<T: IsObject>(object: *mut Object) {
let typed_object: *mut T = std::mem::transmute(object);
T::typed_delete(typed_object);
}
fn derived_from(child_type_index: u32, parent_type_index: u32) -> bool {
let mut is_derived = 0;
crate::check_call!(ffi::TVMObjectDerivedFrom(
child_type_index,
parent_type_index,
&mut is_derived
));
if is_derived == 0 {
false
} else {
true
}
}
impl Object {
fn new(type_index: u32, deleter: Deleter) -> Object {
Object {
type_index,
// Note: do not touch this field directly again, this is
// a critical section, we write a 1 to the atomic which will now
// be managed by the C++ atomics.
            // In the future we should probably use C-atomics.
ref_count: AtomicI32::new(0),
fdeleter: deleter,
}
}
fn get_type_index<T: IsObject>() -> u32 {
let type_key = T::TYPE_KEY;
let cstring = CString::new(type_key).expect("type key must not contain null characters");
if type_key == "Object" {
return 0;
} else {
let mut index = 0;
unsafe {
let index_ptr = std::mem::transmute(&mut index);
                if TVMObjectTypeKey2Index(cstring.as_ptr(), index_ptr) != 0 {
                    panic!("{}", crate::get_last_error())
}
}
return index;
}
}
pub fn base_object<T: IsObject>() -> Object {
let index = Object::get_type_index::<T>();
Object::new(index, delete::<T>)
}
pub(self) fn inc_ref(&self) {
unsafe {
let raw_ptr = std::mem::transmute(self);
assert_eq!(TVMObjectRetain(raw_ptr), 0);
}
}
pub(self) fn dec_ref(&self) {
unsafe {
let raw_ptr = std::mem::transmute(self);
assert_eq!(TVMObjectFree(raw_ptr), 0);
}
}
}
pub unsafe trait IsObject {
const TYPE_KEY: &'static str;
fn as_object<'s>(&'s self) -> &'s Object;
unsafe extern "C" fn typed_delete(object: *mut Self) {
let object = Box::from_raw(object);
drop(object)
}
}
unsafe impl IsObject for Object {
const TYPE_KEY: &'static str = "Object";
fn as_object<'s>(&'s self) -> &'s Object {
self
}
}
#[repr(C)]
pub struct ObjectPtr<T: IsObject> {
pub ptr: NonNull<T>,
}
fn inc_ref<T: IsObject>(ptr: NonNull<T>) {
unsafe { ptr.as_ref().as_object().inc_ref() }
}
fn dec_ref<T: IsObject>(ptr: NonNull<T>) {
unsafe { ptr.as_ref().as_object().dec_ref() }
}
impl ObjectPtr<Object> {
fn from_raw(object_ptr: *mut Object) -> Option<ObjectPtr<Object>> {
let non_null = NonNull::new(object_ptr);
non_null.map(|ptr| ObjectPtr { ptr })
}
}
impl<T: IsObject> Clone for ObjectPtr<T> {
fn clone(&self) -> Self {
inc_ref(self.ptr);
ObjectPtr { ptr: self.ptr }
}
}
impl<T: IsObject> Drop for ObjectPtr<T> {
fn drop(&mut self) {
dec_ref(self.ptr);
}
}
impl<T: IsObject> ObjectPtr<T> {
pub fn leak<'a>(object_ptr: ObjectPtr<T>) -> &'a mut T
where
T: 'a,
{
unsafe { &mut *std::mem::ManuallyDrop::new(object_ptr).ptr.as_ptr() }
}
pub fn new(object: T) -> ObjectPtr<T> {
let object_ptr = Box::new(object);
let object_ptr = Box::leak(object_ptr);
let ptr = NonNull::from(object_ptr);
inc_ref(ptr);
ObjectPtr { ptr }
}
pub fn count(&self) -> i32 {
// need to do atomic read in C++
// ABI compatible atomics is funky/hard.
self.as_object()
.ref_count
.load(std::sync::atomic::Ordering::SeqCst)
}
    fn as_object<'s>(&'s self) -> &'s Object {
        unsafe { self.ptr.as_ref().as_object() }
    }
pub fn upcast(&self) -> ObjectPtr<Object> {
ObjectPtr {
ptr: self.ptr.cast(),
}
}
pub fn downcast<U: IsObject>(&self) -> Result<ObjectPtr<U>, Error> {
let child_index = Object::get_type_index::<U>();
let object_index = self.as_object().type_index;
let is_derived = if child_index == object_index {
true
} else {
// TODO(@jroesch): write tests
derived_from(object_index, child_index)
};
if is_derived {
Ok(ObjectPtr {
ptr: self.ptr.cast(),
})
} else {
Err(Error::downcast("TODOget_type_key".into(), U::TYPE_KEY))
}
}
}
impl<T: IsObject> std::ops::Deref for ObjectPtr<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
unsafe { self.ptr.as_ref() }
}
}
impl<'a, T: IsObject> From<ObjectPtr<T>> for RetValue {
fn from(object_ptr: ObjectPtr<T>) -> RetValue {
let raw_object_ptr = ObjectPtr::leak(object_ptr);
let void_ptr = unsafe { std::mem::transmute(raw_object_ptr) };
RetValue::ObjectHandle(void_ptr)
}
}
impl<'a, T: IsObject> TryFrom<RetValue> for ObjectPtr<T> {
type Error = Error;
fn try_from(ret_value: RetValue) -> Result<ObjectPtr<T>, Self::Error> {
match ret_value {
RetValue::ObjectHandle(handle) => {
let handle: *mut Object = unsafe { std::mem::transmute(handle) };
let optr = ObjectPtr::from_raw(handle).ok_or(Error::Null)?;
optr.downcast()
}
_ => Err(Error::downcast(format!("{:?}", ret_value), "ObjectHandle")),
}
}
}
impl<'a, T: IsObject> From<ObjectPtr<T>> for ArgValue<'a> {
fn from(object_ptr: ObjectPtr<T>) -> ArgValue<'a> {
let raw_object_ptr = ObjectPtr::leak(object_ptr);
let void_ptr = unsafe { std::mem::transmute(raw_object_ptr) };
ArgValue::ObjectHandle(void_ptr)
}
}
impl<'a, T: IsObject> TryFrom<ArgValue<'a>> for ObjectPtr<T> {
type Error = Error;
fn try_from(arg_value: ArgValue<'a>) -> Result<ObjectPtr<T>, Self::Error> {
match arg_value {
ArgValue::ObjectHandle(handle) => {
let handle = unsafe { std::mem::transmute(handle) };
let optr = ObjectPtr::from_raw(handle).ok_or(Error::Null)?;
optr.downcast()
}
_ => Err(Error::downcast(format!("{:?}", arg_value), "ObjectHandle")),
}
}
}
impl<'a, T: IsObject> TryFrom<&ArgValue<'a>> for ObjectPtr<T> {
type Error = Error;
fn try_from(arg_value: &ArgValue<'a>) -> Result<ObjectPtr<T>, Self::Error> {
match arg_value {
ArgValue::ObjectHandle(handle) => {
let handle = unsafe { std::mem::transmute(handle) };
let optr = ObjectPtr::from_raw(handle).ok_or(Error::Null)?;
optr.downcast()
}
_ => Err(Error::downcast(format!("{:?}", arg_value), "ObjectHandle")),
}
}
}
#[cfg(test)]
mod tests {
use super::{Object, ObjectPtr};
use anyhow::{ensure, Result};
use std::convert::TryInto;
use tvm_sys::{ArgValue, RetValue};
#[test]
fn test_new_object() -> anyhow::Result<()> {
let object = Object::base_object::<Object>();
let ptr = ObjectPtr::new(object);
assert_eq!(ptr.count(), 1);
Ok(())
}
#[test]
fn roundtrip_retvalue() -> Result<()> {
let ptr = ObjectPtr::new(Object::base_object::<Object>());
let ret_value: RetValue = ptr.clone().into();
let ptr2: ObjectPtr<Object> = ret_value.try_into()?;
ensure!(
ptr.type_index == ptr2.type_index,
"type indices do not match"
);
ensure!(
ptr.fdeleter == ptr2.fdeleter,
"objects have different deleters"
);
Ok(())
}
#[test]
fn roundtrip_argvalue() -> Result<()> {
let ptr = ObjectPtr::new(Object::base_object::<Object>());
let arg_value: ArgValue = ptr.clone().into();
let ptr2: ObjectPtr<Object> = arg_value.try_into()?;
ensure!(
ptr.type_index == ptr2.type_index,
"type indices do not match"
);
ensure!(
ptr.fdeleter == ptr2.fdeleter,
"objects have different deleters"
);
Ok(())
}
fn test_fn(o: ObjectPtr<Object>) -> ObjectPtr<Object> {
assert_eq!(o.count(), 2);
return o;
}
#[test]
fn test_ref_count_boundary() {
use super::*;
use crate::function::{register, Function, Result};
let ptr = ObjectPtr::new(Object::base_object::<Object>());
let stay = ptr.clone();
assert_eq!(ptr.count(), 2);
register(test_fn, "my_func").unwrap();
let func = Function::get("my_func").unwrap();
let func = func.to_boxed_fn::<dyn Fn(ObjectPtr<Object>) -> Result<ObjectPtr<Object>>>();
func(ptr).unwrap();
assert_eq!(stay.count(), 1);
}
}
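`ObjectPtr::leak` wraps the pointer in `ManuallyDrop` so that the `Drop` impl never runs and the reference count is deliberately left un-decremented. A minimal sketch of that mechanism with a drop-counting guard (hypothetical types, unrelated to the TVM FFI):

```rust
use std::mem::ManuallyDrop;
use std::sync::atomic::{AtomicUsize, Ordering};

// Global counter of how many Guard destructors have run.
static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn drops_so_far() -> usize {
    DROPS.load(Ordering::SeqCst)
}

fn main() {
    {
        let _g = Guard;
    } // normal scope exit runs Drop
    println!("after scoped guard: {}", drops_so_far()); // 1

    {
        let _g = ManuallyDrop::new(Guard);
    } // ManuallyDrop suppresses Drop, as in ObjectPtr::leak
    println!("after ManuallyDrop guard: {}", drops_so_far()); // still 1
}
```

In `leak` this is exactly the desired behavior: the handle escapes to the FFI with its count intact, and the foreign side becomes responsible for the eventual `TVMObjectFree`.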
// file: object_ptr.rs
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
use std::convert::TryFrom;
use std::ffi::CString;
use std::ptr::NonNull;
use std::sync::atomic::AtomicI32;
use tvm_sys::ffi::{self, TVMObjectFree, TVMObjectRetain, TVMObjectTypeKey2Index};
use tvm_sys::{ArgValue, RetValue};
use crate::errors::Error;
type Deleter = unsafe extern "C" fn(object: *mut Object) -> ();
#[derive(Debug)]
#[repr(C)]
pub struct Object {
pub type_index: u32,
// TODO(@jroesch): pretty sure Rust and C++ atomics are the same, but not sure.
// NB: in general we should not touch this in Rust.
pub(self) ref_count: AtomicI32,
pub fdeleter: Deleter,
}
unsafe extern "C" fn delete<T: IsObject>(object: *mut Object) {
let typed_object: *mut T = std::mem::transmute(object);
T::typed_delete(typed_object);
}
fn derived_from(child_type_index: u32, parent_type_index: u32) -> bool {
let mut is_derived = 0;
crate::check_call!(ffi::TVMObjectDerivedFrom(
child_type_index,
parent_type_index,
&mut is_derived
));
if is_derived == 0 {
false
} else {
true
}
}
impl Object {
fn new(type_index: u32, deleter: Deleter) -> Object {
Object {
type_index,
// Note: do not touch this field directly again, this is
// a critical section, we write a 1 to the atomic which will now
// be managed by the C++ atomics.
            // In the future we should probably use C-atomics.
ref_count: AtomicI32::new(0),
fdeleter: deleter,
}
}
fn get_type_index<T: IsObject>() -> u32 {
let type_key = T::TYPE_KEY;
let cstring = CString::new(type_key).expect("type key must not contain null characters");
if type_key == "Object" {
return 0;
} else {
let mut index = 0;
unsafe {
let index_ptr = std::mem::transmute(&mut index);
                if TVMObjectTypeKey2Index(cstring.as_ptr(), index_ptr) != 0 {
                    panic!("{}", crate::get_last_error())
}
}
return index;
}
}
pub fn base_object<T: IsObject>() -> Object {
let index = Object::get_type_index::<T>();
Object::new(index, delete::<T>)
}
pub(self) fn inc_ref(&self) {
unsafe {
let raw_ptr = std::mem::transmute(self);
assert_eq!(TVMObjectRetain(raw_ptr), 0);
}
}
pub(self) fn dec_ref(&self) {
unsafe {
let raw_ptr = std::mem::transmute(self);
assert_eq!(TVMObjectFree(raw_ptr), 0);
}
}
}
pub unsafe trait IsObject {
const TYPE_KEY: &'static str;
fn as_object<'s>(&'s self) -> &'s Object;
unsafe extern "C" fn typed_delete(object: *mut Self) {
let object = Box::from_raw(object);
drop(object)
}
}
unsafe impl IsObject for Object {
const TYPE_KEY: &'static str = "Object";
fn as_object<'s>(&'s self) -> &'s Object {
self
}
}
#[repr(C)]
pub struct ObjectPtr<T: IsObject> {
pub ptr: NonNull<T>,
}
fn inc_ref<T: IsObject>(ptr: NonNull<T>) {
unsafe { ptr.as_ref().as_object().inc_ref() }
}
fn dec_ref<T: IsObject>(ptr: NonNull<T>) {
unsafe { ptr.as_ref().as_object().dec_ref() }
}
impl ObjectPtr<Object> {
fn from_raw(object_ptr: *mut Object) -> Option<ObjectPtr<Object>> {
let non_null = NonNull::new(object_ptr);
non_null.map(|ptr| ObjectPtr { ptr })
}
}
impl<T: IsObject> Clone for ObjectPtr<T> {
fn clone(&self) -> Self {
inc_ref(self.ptr);
ObjectPtr { ptr: self.ptr }
}
}
impl<T: IsObject> Drop for ObjectPtr<T> {
fn drop(&mut self) {
dec_ref(self.ptr);
}
}
impl<T: IsObject> ObjectPtr<T> {
pub fn leak<'a>(object_ptr: ObjectPtr<T>) -> &'a mut T
where
T: 'a,
{
unsafe { &mut *std::mem::ManuallyDrop::new(object_ptr).ptr.as_ptr() }
}
pub fn new(object: T) -> ObjectPtr<T> {
let object_ptr = Box::new(object);
let object_ptr = Box::leak(object_ptr);
let ptr = NonNull::from(object_ptr);
inc_ref(ptr);
ObjectPtr { ptr }
}
pub fn count(&self) -> i32 {
// need to do atomic read in C++
// ABI compatible atomics is funky/hard.
self.as_object()
.ref_count
.load(std::sync::atomic::Ordering::SeqCst)
}
fn as_object<'s>(&'s self) -> &'s Object {
unsafe { self.ptr.as_ref().as_object() }
}
pub fn upcast(&self) -> ObjectPtr<Object> {
ObjectPtr {
ptr: self.ptr.cast(),
}
}
pub fn downcast<U: IsObject>(&self) -> Result<ObjectPtr<U>, Error> {
let child_index = Object::get_type_index::<U>();
let object_index = self.as_object().type_index;
let is_derived = if child_index == object_index {
true
} else {
// TODO(@jroesch): write tests
derived_from(object_index, child_index)
};
if is_derived {
Ok(ObjectPtr {
ptr: self.ptr.cast(),
})
} else {
Err(Error::downcast("TODOget_type_key".into(), U::TYPE_KEY))
}
}
}
impl<T: IsObject> std::ops::Deref for ObjectPtr<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
unsafe { self.ptr.as_ref() }
}
}
impl<'a, T: IsObject> From<ObjectPtr<T>> for RetValue {
fn from(object_ptr: ObjectPtr<T>) -> RetValue {
let raw_object_ptr = ObjectPtr::leak(object_ptr);
let void_ptr = unsafe { std::mem::transmute(raw_object_ptr) };
RetValue::ObjectHandle(void_ptr)
}
}
impl<'a, T: IsObject> TryFrom<RetValue> for ObjectPtr<T> {
type Error = Error;
fn try_from(ret_value: RetValue) -> Result<ObjectPtr<T>, Self::Error> {
match ret_value {
RetValue::ObjectHandle(handle) => {
let handle: *mut Object = unsafe { std::mem::transmute(handle) };
let optr = ObjectPtr::from_raw(handle).ok_or(Error::Null)?;
optr.downcast()
}
_ => Err(Error::downcast(format!("{:?}", ret_value), "ObjectHandle")),
}
}
}
impl<'a, T: IsObject> From<ObjectPtr<T>> for ArgValue<'a> {
fn from(object_ptr: ObjectPtr<T>) -> ArgValue<'a> {
let raw_object_ptr = ObjectPtr::leak(object_ptr);
let void_ptr = unsafe { std::mem::transmute(raw_object_ptr) };
ArgValue::ObjectHandle(void_ptr)
}
}
impl<'a, T: IsObject> TryFrom<ArgValue<'a>> for ObjectPtr<T> {
type Error = Error;
fn try_from(arg_value: ArgValue<'a>) -> Result<ObjectPtr<T>, Self::Error> {
match arg_value {
ArgValue::ObjectHandle(handle) => {
let handle = unsafe { std::mem::transmute(handle) };
let optr = ObjectPtr::from_raw(handle).ok_or(Error::Null)?;
optr.downcast()
}
_ => Err(Error::downcast(format!("{:?}", arg_value), "ObjectHandle")),
}
}
}
impl<'a, T: IsObject> TryFrom<&ArgValue<'a>> for ObjectPtr<T> {
type Error = Error;
fn try_from(arg_value: &ArgValue<'a>) -> Result<ObjectPtr<T>, Self::Error> {
match arg_value {
ArgValue::ObjectHandle(handle) => {
let handle = unsafe { std::mem::transmute(handle) };
let optr = ObjectPtr::from_raw(handle).ok_or(Error::Null)?;
optr.downcast()
}
_ => Err(Error::downcast(format!("{:?}", arg_value), "ObjectHandle")),
}
}
}
#[cfg(test)]
mod tests {
use super::{Object, ObjectPtr};
use anyhow::{ensure, Result};
use std::convert::TryInto;
use tvm_sys::{ArgValue, RetValue};
#[test]
fn test_new_object() -> anyhow::Result<()> { | let object = Object::base_object::<Object>();
let ptr = ObjectPtr::new(object);
assert_eq!(ptr.count(), 1);
Ok(())
}
#[test]
fn roundtrip_retvalue() -> Result<()> {
let ptr = ObjectPtr::new(Object::base_object::<Object>());
let ret_value: RetValue = ptr.clone().into();
let ptr2: ObjectPtr<Object> = ret_value.try_into()?;
ensure!(
ptr.type_index == ptr2.type_index,
"type indices do not match"
);
ensure!(
ptr.fdeleter == ptr2.fdeleter,
"objects have different deleters"
);
Ok(())
}
#[test]
fn roundtrip_argvalue() -> Result<()> {
let ptr = ObjectPtr::new(Object::base_object::<Object>());
let arg_value: ArgValue = ptr.clone().into();
let ptr2: ObjectPtr<Object> = arg_value.try_into()?;
ensure!(
ptr.type_index == ptr2.type_index,
"type indices do not match"
);
ensure!(
ptr.fdeleter == ptr2.fdeleter,
"objects have different deleters"
);
Ok(())
}
fn test_fn(o: ObjectPtr<Object>) -> ObjectPtr<Object> {
assert_eq!(o.count(), 2);
return o;
}
#[test]
fn test_ref_count_boundary() {
use super::*;
use crate::function::{register, Function, Result};
let ptr = ObjectPtr::new(Object::base_object::<Object>());
let stay = ptr.clone();
assert_eq!(ptr.count(), 2);
register(test_fn, "my_func").unwrap();
let func = Function::get("my_func").unwrap();
let func = func.to_boxed_fn::<dyn Fn(ObjectPtr<Object>) -> Result<ObjectPtr<Object>>>();
func(ptr).unwrap();
assert_eq!(stay.count(), 1);
}
} | random_line_split |
|
object_ptr.rs | /*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
use std::convert::TryFrom;
use std::ffi::CString;
use std::ptr::NonNull;
use std::sync::atomic::AtomicI32;
use tvm_sys::ffi::{self, TVMObjectFree, TVMObjectRetain, TVMObjectTypeKey2Index};
use tvm_sys::{ArgValue, RetValue};
use crate::errors::Error;
type Deleter = unsafe extern "C" fn(object: *mut Object) -> ();
#[derive(Debug)]
#[repr(C)]
pub struct Object {
pub type_index: u32,
// TODO(@jroesch): pretty sure Rust and C++ atomics are the same, but not sure.
// NB: in general we should not touch this in Rust.
pub(self) ref_count: AtomicI32,
pub fdeleter: Deleter,
}
unsafe extern "C" fn delete<T: IsObject>(object: *mut Object) {
let typed_object: *mut T = std::mem::transmute(object);
T::typed_delete(typed_object);
}
fn derived_from(child_type_index: u32, parent_type_index: u32) -> bool {
let mut is_derived = 0;
crate::check_call!(ffi::TVMObjectDerivedFrom(
child_type_index,
parent_type_index,
&mut is_derived
));
if is_derived == 0 {
false
} else |
}
impl Object {
fn new(type_index: u32, deleter: Deleter) -> Object {
Object {
type_index,
// Note: do not touch this field directly again, this is
// a critical section, we write a 1 to the atomic which will now
// be managed by the C++ atomics.
// In the future we should probably use C atomics.
ref_count: AtomicI32::new(0),
fdeleter: deleter,
}
}
fn get_type_index<T: IsObject>() -> u32 {
let type_key = T::TYPE_KEY;
let cstring = CString::new(type_key).expect("type key must not contain null characters");
if type_key == "Object" {
return 0;
} else {
let mut index = 0;
unsafe {
let index_ptr = std::mem::transmute(&mut index);
if TVMObjectTypeKey2Index(cstring.as_ptr(), index_ptr) != 0 {
panic!(crate::get_last_error())
}
}
return index;
}
}
pub fn base_object<T: IsObject>() -> Object {
let index = Object::get_type_index::<T>();
Object::new(index, delete::<T>)
}
pub(self) fn inc_ref(&self) {
unsafe {
let raw_ptr = std::mem::transmute(self);
assert_eq!(TVMObjectRetain(raw_ptr), 0);
}
}
pub(self) fn dec_ref(&self) {
unsafe {
let raw_ptr = std::mem::transmute(self);
assert_eq!(TVMObjectFree(raw_ptr), 0);
}
}
}
pub unsafe trait IsObject {
const TYPE_KEY: &'static str;
fn as_object<'s>(&'s self) -> &'s Object;
unsafe extern "C" fn typed_delete(object: *mut Self) {
let object = Box::from_raw(object);
drop(object)
}
}
unsafe impl IsObject for Object {
const TYPE_KEY: &'static str = "Object";
fn as_object<'s>(&'s self) -> &'s Object {
self
}
}
#[repr(C)]
pub struct ObjectPtr<T: IsObject> {
pub ptr: NonNull<T>,
}
fn inc_ref<T: IsObject>(ptr: NonNull<T>) {
unsafe { ptr.as_ref().as_object().inc_ref() }
}
fn dec_ref<T: IsObject>(ptr: NonNull<T>) {
unsafe { ptr.as_ref().as_object().dec_ref() }
}
impl ObjectPtr<Object> {
fn from_raw(object_ptr: *mut Object) -> Option<ObjectPtr<Object>> {
let non_null = NonNull::new(object_ptr);
non_null.map(|ptr| ObjectPtr { ptr })
}
}
impl<T: IsObject> Clone for ObjectPtr<T> {
fn clone(&self) -> Self {
inc_ref(self.ptr);
ObjectPtr { ptr: self.ptr }
}
}
impl<T: IsObject> Drop for ObjectPtr<T> {
fn drop(&mut self) {
dec_ref(self.ptr);
}
}
impl<T: IsObject> ObjectPtr<T> {
pub fn leak<'a>(object_ptr: ObjectPtr<T>) -> &'a mut T
where
T: 'a,
{
unsafe { &mut *std::mem::ManuallyDrop::new(object_ptr).ptr.as_ptr() }
}
pub fn new(object: T) -> ObjectPtr<T> {
let object_ptr = Box::new(object);
let object_ptr = Box::leak(object_ptr);
let ptr = NonNull::from(object_ptr);
inc_ref(ptr);
ObjectPtr { ptr }
}
pub fn count(&self) -> i32 {
// need to do atomic read in C++
// ABI compatible atomics is funky/hard.
self.as_object()
.ref_count
.load(std::sync::atomic::Ordering::SeqCst)
}
fn as_object<'s>(&'s self) -> &'s Object {
unsafe { self.ptr.as_ref().as_object() }
}
pub fn upcast(&self) -> ObjectPtr<Object> {
ObjectPtr {
ptr: self.ptr.cast(),
}
}
pub fn downcast<U: IsObject>(&self) -> Result<ObjectPtr<U>, Error> {
let child_index = Object::get_type_index::<U>();
let object_index = self.as_object().type_index;
let is_derived = if child_index == object_index {
true
} else {
// TODO(@jroesch): write tests
derived_from(object_index, child_index)
};
if is_derived {
Ok(ObjectPtr {
ptr: self.ptr.cast(),
})
} else {
Err(Error::downcast("TODOget_type_key".into(), U::TYPE_KEY))
}
}
}
impl<T: IsObject> std::ops::Deref for ObjectPtr<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
unsafe { self.ptr.as_ref() }
}
}
impl<'a, T: IsObject> From<ObjectPtr<T>> for RetValue {
fn from(object_ptr: ObjectPtr<T>) -> RetValue {
let raw_object_ptr = ObjectPtr::leak(object_ptr);
let void_ptr = unsafe { std::mem::transmute(raw_object_ptr) };
RetValue::ObjectHandle(void_ptr)
}
}
impl<'a, T: IsObject> TryFrom<RetValue> for ObjectPtr<T> {
type Error = Error;
fn try_from(ret_value: RetValue) -> Result<ObjectPtr<T>, Self::Error> {
match ret_value {
RetValue::ObjectHandle(handle) => {
let handle: *mut Object = unsafe { std::mem::transmute(handle) };
let optr = ObjectPtr::from_raw(handle).ok_or(Error::Null)?;
optr.downcast()
}
_ => Err(Error::downcast(format!("{:?}", ret_value), "ObjectHandle")),
}
}
}
impl<'a, T: IsObject> From<ObjectPtr<T>> for ArgValue<'a> {
fn from(object_ptr: ObjectPtr<T>) -> ArgValue<'a> {
let raw_object_ptr = ObjectPtr::leak(object_ptr);
let void_ptr = unsafe { std::mem::transmute(raw_object_ptr) };
ArgValue::ObjectHandle(void_ptr)
}
}
impl<'a, T: IsObject> TryFrom<ArgValue<'a>> for ObjectPtr<T> {
type Error = Error;
fn try_from(arg_value: ArgValue<'a>) -> Result<ObjectPtr<T>, Self::Error> {
match arg_value {
ArgValue::ObjectHandle(handle) => {
let handle = unsafe { std::mem::transmute(handle) };
let optr = ObjectPtr::from_raw(handle).ok_or(Error::Null)?;
optr.downcast()
}
_ => Err(Error::downcast(format!("{:?}", arg_value), "ObjectHandle")),
}
}
}
impl<'a, T: IsObject> TryFrom<&ArgValue<'a>> for ObjectPtr<T> {
type Error = Error;
fn try_from(arg_value: &ArgValue<'a>) -> Result<ObjectPtr<T>, Self::Error> {
match arg_value {
ArgValue::ObjectHandle(handle) => {
let handle = unsafe { std::mem::transmute(handle) };
let optr = ObjectPtr::from_raw(handle).ok_or(Error::Null)?;
optr.downcast()
}
_ => Err(Error::downcast(format!("{:?}", arg_value), "ObjectHandle")),
}
}
}
#[cfg(test)]
mod tests {
use super::{Object, ObjectPtr};
use anyhow::{ensure, Result};
use std::convert::TryInto;
use tvm_sys::{ArgValue, RetValue};
#[test]
fn test_new_object() -> anyhow::Result<()> {
let object = Object::base_object::<Object>();
let ptr = ObjectPtr::new(object);
assert_eq!(ptr.count(), 1);
Ok(())
}
#[test]
fn roundtrip_retvalue() -> Result<()> {
let ptr = ObjectPtr::new(Object::base_object::<Object>());
let ret_value: RetValue = ptr.clone().into();
let ptr2: ObjectPtr<Object> = ret_value.try_into()?;
ensure!(
ptr.type_index == ptr2.type_index,
"type indices do not match"
);
ensure!(
ptr.fdeleter == ptr2.fdeleter,
"objects have different deleters"
);
Ok(())
}
#[test]
fn roundtrip_argvalue() -> Result<()> {
let ptr = ObjectPtr::new(Object::base_object::<Object>());
let arg_value: ArgValue = ptr.clone().into();
let ptr2: ObjectPtr<Object> = arg_value.try_into()?;
ensure!(
ptr.type_index == ptr2.type_index,
"type indices do not match"
);
ensure!(
ptr.fdeleter == ptr2.fdeleter,
"objects have different deleters"
);
Ok(())
}
fn test_fn(o: ObjectPtr<Object>) -> ObjectPtr<Object> {
assert_eq!(o.count(), 2);
return o;
}
#[test]
fn test_ref_count_boundary() {
use super::*;
use crate::function::{register, Function, Result};
let ptr = ObjectPtr::new(Object::base_object::<Object>());
let stay = ptr.clone();
assert_eq!(ptr.count(), 2);
register(test_fn, "my_func").unwrap();
let func = Function::get("my_func").unwrap();
let func = func.to_boxed_fn::<dyn Fn(ObjectPtr<Object>) -> Result<ObjectPtr<Object>>>();
func(ptr).unwrap();
assert_eq!(stay.count(), 1);
}
}
| {
true
} | conditional_block |
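The retain/release pattern that `ObjectPtr` delegates to the C++ runtime via FFI can be sketched as a self-contained Rust analogue. This is illustrative only — `Counted` and `CountedPtr` are our own names, not TVM API — but it mirrors the same moves: count starts at 1, `clone` bumps it, `Drop` decrements and frees on reaching zero.

```rust
use std::ptr::NonNull;
use std::sync::atomic::{AtomicI32, Ordering};

// Heap header carrying the count, analogous to `Object.ref_count`.
struct Counted<T> {
    ref_count: AtomicI32,
    value: T,
}

// Smart pointer analogous to `ObjectPtr`: bump on clone, release on drop.
struct CountedPtr<T> {
    ptr: NonNull<Counted<T>>,
}

impl<T> CountedPtr<T> {
    fn new(value: T) -> Self {
        let leaked = Box::leak(Box::new(Counted {
            ref_count: AtomicI32::new(1),
            value,
        }));
        CountedPtr { ptr: NonNull::from(leaked) }
    }

    fn count(&self) -> i32 {
        unsafe { self.ptr.as_ref() }.ref_count.load(Ordering::SeqCst)
    }
}

impl<T> Clone for CountedPtr<T> {
    fn clone(&self) -> Self {
        unsafe { self.ptr.as_ref() }.ref_count.fetch_add(1, Ordering::SeqCst);
        CountedPtr { ptr: self.ptr }
    }
}

impl<T> Drop for CountedPtr<T> {
    fn drop(&mut self) {
        // fetch_sub returns the *previous* value; 1 means we were the last owner.
        let prev = unsafe { self.ptr.as_ref() }.ref_count.fetch_sub(1, Ordering::SeqCst);
        if prev == 1 {
            unsafe { drop(Box::from_raw(self.ptr.as_ptr())) };
        }
    }
}

// Mirrors the tests above: the count goes 1 -> 2 -> 1.
fn refcount_demo() -> (i32, i32, i32) {
    let p = CountedPtr::new(42u32);
    let before = p.count();
    let q = p.clone();
    let during = p.count();
    drop(q);
    (before, during, p.count())
}
```

Unlike this sketch, the real `ObjectPtr` must never touch `ref_count` directly: the count is shared with C++ code, so it calls `TVMObjectRetain`/`TVMObjectFree` instead.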
main.rs | fn main() {
//scope()
//moves_and_mem();
//refs()
slices()
}
////////////////////////////////////////////////////////////////////////////////
// What is Ownership?
////////////////////////////////////////////////////////////////////////////////
// Ownership is Rust's central feature.
// All programs have to manage the way they use a computer's memory while
// running. Some have garbage collection that constantly looks for no
// longer used memory as the program runs; in others, the programmer must
// explicitly allocate and release the memory. Rust uses a third approach:
//
// Memory is managed through a system of ownership with a set of rules
// that the compiler checks at compile time. No run-time costs are
// incurred for any of the ownership features.
//
// [Stack and Heap]
// In most langs, you really don't need to think about the stack and
// the heap often. In systems langs, whether the value is on the stack
// or on the heap has more of an effect on how the language behaves and
// why we have to make certain decisions.
//
// The stack and the heap are parts of memory that are available to your
// code to use at runtime, but they are structured differently. The stack
// stores values in the order it gets them and removes the values in
// the opposite order (LIFO). This is referred to as
// *pushing onto the stack* and *popping off of the stack*
//
// It's fast because of the way it accesses the data: it never has to
// search for a place to put new data or a place to get data from because
// that place is *always* the top of the stack. Another property is that
// all data on the stack must take up a known, fixed size.
//
// For data that is an unknown size at compile time or a size that may
// changeo ver time, we can store that data on the heap instead. The heap
// is less organized; we just ask for some amount of space. The OS
// finds an empty spot somewhere that's big enough for the request, marks
// it as in use, and returns a pointer to that location. It's called
// *allocating on the heap*. Pushing onto the stack is not considered
// allocation. A pointer is a known, fixed size, so it can sit on the
// stack, but for actual data, we have to follow the pointer.
//
// The heap is slower than the stack because we have to follow a pointer
// to get there (a level of indirection). Processors are faster due to
// temporal and spatial locality and caching if they have to jump around
// less.
//
// When a function is called, the values passed into the function
// (including, potentially pointers to data on the heap) and the fns
// local vars get pushed onto the stack. When its over, the vals get
// popped off the stack.
//
//!!
// Keeping track of what code is using what data on the heap, minimizing
// duplicate data on the heap, and cleaning up unused data on the heap
// so we don't run out of space are all problems that ownership helps.
// Once ownership is understood, you won't have to think about the stack
// and the heap often, but knowing that managing heap data is why
// ownership exists can help explain why it works the way that it does.
//!!
//
// [Ownership Rules]
// There exist 3 very important rules to ownership in Rust:
//
// 1) Each value in Rust has a variable that's called its *owner*
// 2) There can only be one owner at a time (the highlander rule)
// 3) When the owner goes out of scope, the value will be dropped
//
// + Variable Scope
// See scope()
//
// + Memory and Allocation
// See moves_and_mem
//
// + References
// See refs()
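The stack/heap split described above can be made concrete with a short snippet (our own example, not from the original text): a fixed-size value on the stack, a heap allocation behind a `Box`, and a `String` whose heap buffer can grow.

```rust
fn stack_and_heap() -> usize {
    let x: i64 = 5;                    // known, fixed size: lives on the stack
    let boxed: Box<i64> = Box::new(x); // the i64 is copied to the heap; `boxed`
                                       // itself is just a fixed-size pointer
    let mut s = String::from("grow");  // (ptr, len, cap) on the stack, bytes on the heap
    s.push_str("ing");                 // may reallocate the heap buffer as it grows
    *boxed as usize + s.len()          // 5 + 7
}
```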
fn scope() {
// First example of ownership, we'll look at the *scope* of some
// variables. Scope is the range within a program for which an item
// is valid.
// s is a string literal. the value of the string is hardcoded into
// the text of the program. The variable is valid from the point
// at which it's declared until the end of the current *scope*.
{ // s is not valid here, it's not yet declared
let s = "hello"; // s is valid from this point forwards
// do some stuff with s
} // this scope is now over, and s is no longer valid
// There are two important points in time here:
// 1) When s comes *into* scope, it is valid.
// 2) s remains valid until it is *out of scope*
//
// // The relationship between scopes and when variables are valid
// is similar to other programming langs. Let's build on top
// of this introducing the String type.
//
// + String type
// We're going to illustrate the rules of ownership using a data type
// that's more complex than the ones we've seen before. All the data
// types we've seen before are stored on the stack and popped off the
// stack when their scope is over, but we want to look at data
// that's on the heap and explore how Rust knows to clean that up.
//
// We'll concentrate on the parts of String that relate to ownership.
// They also apply to other complex data types provided by the
// stdlib and those that you create.
//
// We've seen string literals hardcoded into the program. They're
// convenient, but they aren't suitable for every situation in which
// you want to use text. For one reason, they're immutable. Also, not
// every string value is known when we write our code. The other type
// is a String, which is allocated on the heap. It's able to store an
// amount of text that is unknown at compile time. It's created from
// a literal with a `from` function:
let s = String::from("hello");
// Again, double colon (::) is an op that allows us to namespace
// this from function under the String type rather than using a name
// like string_from. It can be mutated:
let mut s = String::from("hello");
s.push_str(", world!"); // appends a literal to a String
println!("{}", s); // Will print the full string.
// Why can Strings be mutated but literals cannot? Difference is
// how they deal with memory.
}
fn moves_and_mem() {
// With string literals, we know the contents of the string at compile
// time, so the text is literally hardcoded into the executable,
// making them extremely fast and efficient. This property only comes
// from its immutability. We can't put a blob of memory into the binary
// for each piece of text whose size is unknown at compile time and
// whose size might change while running the program.
//
// To support a mutable, growing piece of text, need to allocate an
// amount of mem on the heap, unknown at compile time, to hold the
// contents. This means:
//
// 1) The memory must be requested from the OS at runtime.
// 2) Need a way of returning the mem to the OS when we're done with
// the allocated string.
//
// First part is done by us: the String::from implementation requests
// the memory it needs from the OS. This is pretty standard for most
// langs.
//
// The second part is different. In langs with GCs, it will keep track
// and clean up mem that isn't used anymore, and the programmer doesn't
// need to think about it. Without a GC, it's the programmer's
// responsibility to know when that memory is no longer being used
// and call code to explicitly return it.
//
// This has historically been a *very* difficult problem to solve.
// If you forget to, we'll waste memory and leak it.
// If we do it too early, we'll have an invalid variable (use after free)
// If we do it twice, that's a bug too.
//
// We need to pair exactly one allocation with one free.
//
// Rust takes its own unique path: the memory is automatically
// returned once the variable that owns it goes out of scope.
// When a variable goes out of scope, Rust calls a special function
// for us. The function is called drop, and it's where the author
// of String can put the code to return the memory. Rust calls
// `drop` automatically at the closing }.
//
// NOTE: C++ calls this pattern of deallocation at the end of its
// lifetime RAII. The drop function in Rust is similar to a dtor
//
// The pattern has a profound impact on the way that Rust code is
// written. Might seem simple, but the behavior of code can be
// unexpected in more complicated situations when we want to
// have multiple variables use the data that's been allocated
// on the heap.
//
// + Ways variables and data interact: Move
// Multiple variables can interact with the same data in different
// ways in rust:
// let x = 5;
// let y = x;
//
// So here, we bind the value of 5 to x, then we make a copy
// of the value in x and bind it to y.
// We now have two vars x and y, and both equal 5.
// This is exactly what's happening because integers are simple
// values with a known, fixed size, and these two 5 vals are
// pushed onto the stack.
//
// let a = String::from("hello");
// let b = a;
//
// This looks similar and you'd probably assume it behaves the same
// way; the second would make a copy of the val in a and bind it to b.
// This is not what happens.
//
// Under the covers, a String is actually a type with a few values:
// ptr to some memory holding the string, a length, and a capacity.
// This group is stored on the stack. The length is how much memory
// in bytes the contents of the String is currently using.
// The capacity is the total amount of memory, in bytes, the String
// has received from the OS. Difference between len and cap matters,
// but not the point of this.
//
// When a is assigned to b, the String data is copied, meaning we copy
// the pointer, the len, and the cap on the stack. The heap data is
// not copied, so b's pointer is going to refer to the same heap
// data that a does.
//
// Earlier we said when a variable goes out of scope, Rust will
// automatically call the drop function and clean up the heap mem.
// But in this case, both pointers are pointing to the same heap
// memory. This is a problem. When a and b go out of scope, they
// will both attempt to free the same memory. This is a *double free*
// error and is one of the memory safety bugs we mentioned previously.
// Freeing mem twice can lead to mem corruption, which can lead
// to security vulnerabilities.
//
// To ensure mem safety, there's another detail to what happens in
// this situation in Rust. Instead of trying to copy the allocated
// memory, Rust considers a to no longer be valid and therefore, Rust
// doesn't need to free anything when a goes out of scope.
//
// If you were to try to use a after copying it to b, an error
// is thrown at compile time.
//
// The ideas of "shallow copy" and "deep copy" apply here. The
// concept of copying the pointer, length, and capacity without
// copying the data sounds like a shallow copy. But because Rust
// also invalidates the first variable, instead of calling this
// a shallow copy, it's known as a *move*. We would read this by
// saying that a was *moved* into b.
//
// This solves the problem, because with only b as valid, when it
// goes out of scope, it alone will free the mem.
//
// There is an additional design choice implied by this:
// ** Rust will never automatically create "deep" copies of data. **
// Therefore, any *automatic* copying can be assumed to be
// inexpensive in terms of runtime performance.
//
// + Ways variables and data interact: Clones
//
// If you *do* want to deeply copy the heap data of a String, not
// just the stack data, a common method can be used called a *clone*.
//
// let a = String::from("hello");
// let b = a.clone();
//
// When you see a clone call, you know some arbitrary code is being
// executed and that code may be expensive. It's an indicator that
// something different is going on.
//
// + Stack only data: clone
// There's another wrinkle we haven't talked about yet. This code
// is using integers:
// let x = 5;
// let y = x;
//
// println!("x = {}, y = {}", x, y);
//
// This seems to contradict what we said; we don't have to call
// clone, but x is still valid and wasn't moved to y.
//
// The reason is types like integers that have a known size at
// compile time are stored *entirely* on the stack, so copies
// of the actual values are very quick to make. There's no reason
// we would want to prevent x from being valid after we create the
// variable y.
//
// In other words, there's no difference between deep and shallow
// copying here, so calling clone wouldn't do anything different
// from the usual shallow copying and we can leave it out.
//
// Rust has a special annotation called the Copy trait that can
// be placed on types like integers that are stored on the stack.
// If a type has the Copy trait, an older variable is still usable
// after assignment. Rust won't let us annotate a type with the Copy
// trait if the type, or any of its parts, has implemented the Drop
// trait.
//
// If the type needs something special to happen when the value goes
// out of scope and we add the Copy annotation to that type, we'll get
// a compile time error.
//
// What types are Copy? Docs can/should be read, but as a general rule,
// any group of simple scalar values can be Copy, and nothing that
// requires allocation or is some form of resource is Copy.
// -> ints, bools, floats, tuples (only if they contain also Copys).
//
// + Ownership and Functions
// Semantics for passing a value to a function are similar to
// assigning a value to a variable. Passing a variable to a func
// will move or copy just like assignment.
let s = String::from("derpyfoobar"); // s comes into scope
takes_ownership(s); // s's value moves into the function...
//... and so is no longer valid here.
//println!("{}", s); // COMPILE ERROR!
let x = 5;
makes_copy(x);
println!("{} ", x); // This is fine, because it was a copy.
// [Return values and scope]
// Returning values can also transfer ownership. Here's an ex
// with similar annotations to previous examples:
{
let baz = gives_ownership(); // gives ownership moves its return
// value into baz
let duder = String::from("duder"); // duder comes into scope
let lucha = takes_and_gives_back(duder); // duder is moved into
// takes_and_gives_back, which also moves its return value into lucha
println!("lucha! {}", lucha);
} // Here lucha goes out of scope and is dropped. duder goes out of
// scope but was moved
// The ownership of a variable follows the same pattern every time:
// **assigning a value to another variable moves it**. When a variable
// that includes data on the heap goes out of scope, the value will be
// cleaned up by `drop` unless the data has been moved to be owned by
// another variable.
//
// Taking ownership and then returning ownership with every fn is
// tedious. What if we need to let a function use a value but not take
// ownership? It's quite annoying that anything we pass in also needs
// to be passed back if we want to use it again, in addition to any
// data resulting from the body of the fn that we may want to return
// as well.
//
// It's possible to return multiple values using a tuple.
// But it's still obnoxious to constantly pass back a ton of stuff.
//
// Rust has a way to address this, and its called references!!!
//
////////////////////////////////////////////////////////////////////////////////
}
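The move/clone/Copy rules walked through above can be condensed into one runnable snippet (our own example, not from the book):

```rust
fn move_clone_copy() -> (usize, usize, i32) {
    let a = String::from("hello");
    let b = a;            // move: `a` is invalid from this point on
    // println!("{}", a); // would fail to compile: "borrow of moved value"
    let c = b.clone();    // explicit deep copy: both `b` and `c` stay valid
    let x = 5;
    let y = x;            // i32 is Copy, so `x` remains usable after assignment
    (b.len(), c.len(), x + y)
}
```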
fn refs() | // some value without taking ownership over it.
// s inside of calc_len becomes a pointer to the String struct bound
// to duderington, which itself contains a ptr to the actual string
// data on the heap.
//
// A closer look at the function call:
//
// let ano = String::from("ano");
// let len = calc_len(&ano);
//
// The &ano syntax lets us create a ref that refers to the value
// of ano, but does not own it. Because it does not own it, the val
// it points to will not be dropped when the ref goes out of scope.
//
// Likewise, the sig of the fn uses & to indicate the type of the
// param s is a ref.
//
// The scope in which the variable s is valid is the same as any
// fn param scope, but we don't drop what the ref points to when it
// goes out of scope because we don't have ownership.
// Functions that have refs as params instead of vals mean we don't
// need to return the vals in order to give back ownership, since
// we never had ownership in the first place.
//
// What happens if we try to mutate something we borrowed?
// Compiler errors. As vars are immutable by default, so are refs.
// We are not allowed to modify a vanilla ref.
//
// + Mutable refs
let change = |some_str: &mut String| {
some_str.push_str(" fu.");
};
let mut s = String::from("mutref");
change(&mut s);
println!("mutref string after stuff: [{}]", s);
// Mutable refs have a big caveat: you can only have one mutable ref
// to a particular piece of data in a particular scope. This will fail:
//
// let mut s = String::from("fert");
//
// let r1 = &mut s;
// let r2 = &mut s;
//
// This restriction allows for mutation but in a very controlled
// fashion. It's something that new Rusters struggle with, because most
// langs let you mutate whenever you'd like. Benefit is that Rust
// can prevent data races at compile time.
//
// A data race is a particular type of race condition in which these
// three behaviors occur:
// 1) Two or more pointers access the same data at the same time
// 2) At least one of the pointers is being used to write to the data
// 3) No mechanism being used to sync the access to the data
//
// Data races cause undefined behavior and can be very difficult to
// diagnose and solve when you're trying to figure out what's
// happening at runtime. Rust won't even let you compile it.
//
// Rust will let you create a new scope allowing for multiple mut
// refs, but just not *simultaneous* ones!
//
let mut s = String::from("fert");
{
let r1 = &mut s;
} //r1 goes out of scope here, so we can make a new ref with no prob
let r2 = &mut s;
// A similar rule exists for combining immutable refs.
// This will error out.
//let r1 = &s;
//let r2 = &s;
//let r3 = &mut s;
//
// Rust *also* does not allow for a mut ref while we have an
// immutable one. Users of an immutable ref don't expect the vals
// to change out from under them. Multiple immutable refs are okay
// because no one who is reading the data has the ability to modify
// anybody else's data.
//
// + Dangling refs
// In langs with pointers, it's easy to erroneously create dangling
// pointers, or pointers that ref memory that may have been given
// to someone else, by freeing some memory while preserving a pointer
// to that memory. This is called a use after free.
//
// If we have a ref to some data, the compiler ensures that the data
// will not go out of scope before the ref to the data does.
// This will error out:
//
/*
let dangle = || -> &String {
let s = String::from("hello");
&s
};
let ref_to_nothing = dangle();
*/
// This will complain about something we haven't covered yet:
// *lifetimes*.
// The key here is the return type contains a borrowed value, but there
// is no value for it to be borrowed from.
// Because s is created inside dangle, when the code of dangle is
// finished, s will be deallocated. But we tried to return a ref
// to it. That means this ref would be pointing to an invalid String.
// Rust won't let us do that.
// The solution here is to return the String directly, thus transferring
// ownership via a move to the caller of the fn.
let no_dangle = || -> String {
let s = String::from("no dangle");
s
};
let ndstr = no_dangle();
println!("{}", ndstr);
// Recapping the rules of refs:
// 1) At any given time you can have *either* but not both of:
// -> One mutable ref
// -> Any number of immutable refs
// 2) References must always be valid
}
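The recap above can be exercised in code. One nuance worth hedging: since non-lexical lifetimes (Rust 2018), a borrow ends at its last *use* rather than at the end of the lexical scope, so a mutable borrow is accepted once earlier shared borrows are no longer used:

```rust
fn borrow_rules() -> usize {
    let mut s = String::from("fert");
    let r1 = &s;
    let r2 = &s;                    // any number of shared refs at once
    let read = r1.len() + r2.len(); // last use of r1/r2: their borrows end here
    let r3 = &mut s;                // now a single mutable ref is fine
    r3.push('!');
    read + s.len()                  // 4 + 4 + 5
}
```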
fn takes_ownership(some_string: String) { // some_string comes into scope
println!("{}", some_string);
} // here some string goes out of scope and `drop` is called. The
// backing memory is freed.
fn makes_copy(some_integer: i32) { // some integer comes into scope.
println!("{}", some_integer);
} // Here, some_integer goes out of scope. Nothing special happens.
fn gives_ownership() -> String { // Gives ownership will move its return
// value into the function that calls it
let derp = String::from("derp"); // derp comes into scope
derp // derp is returned and moves out to the calling function
}
// takes_and_gives_back will take a String and return one
fn takes_and_gives_back(foo: String) -> String { // foo comes into scope
foo // foo is returned and moves out to the calling fn
}
| {
// [References and Borrowing]
// The issue with the returning tuple code we've seen elsewhere in
// the ownership section is that we have to return the String to
// the calling function so we can still use the String after the call.
// Here we define calculate_length so that it uses a *reference* to
// an object as a param instead of taking ownership of the value.
let calc_len = |s: &String| -> usize {
s.len()
};
let duderington = String::from("duderington");
println!("the length of the string. -> {}", calc_len(&duderington));
// First, all the tuple code in the variable declaration is gone.
// We pass the string into the function as &duderington, and in the
// definition, we take &String rather than String.
//
// The ampersands are *references*, and they allow you to refer to | identifier_body |
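`main` above calls `slices()`, whose body falls outside this excerpt. A minimal sketch consistent with the book's slices chapter (our reconstruction, not the original body) could be:

```rust
// A string slice (&str) borrows a contiguous range of the String's heap
// data, so the usual borrowing rules keep it from outliving the String.
fn first_word(s: &str) -> &str {
    match s.find(' ') {
        Some(i) => &s[..i],
        None => s,
    }
}

fn slices() {
    let s = String::from("hello world");
    println!("first word: {}", first_word(&s));
}
```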
main.rs | fn main() {
//scope()
//moves_and_mem();
//refs()
slices()
}
////////////////////////////////////////////////////////////////////////////////
// What is Ownership?
////////////////////////////////////////////////////////////////////////////////
// Ownership is Rust's central feature.
// All programs have to manage the way they use a computer's memory while
// running. Some have garbage collection that constantly looks for no
// longer used memory as the program runs; in others, the programmer must
// explicitly allocate and release the memory. Rust uses a third approach:
//
// Memory is managed through a system of ownership with a set of rules
// that the compiler checks at compile time. No run-time costs are
// incurred for any of the ownership features.
//
// [Stack and Heap]
// In most langs, you really don't need to think about the stack and
// the heap often. In systems langs, whether the value is on the stack
// or on the heap has more of an effect on how the language behaves and
// why we have to make certain decisions.
//
// The stack and the heap are parts of memory that are available to your
// code to use at runtime, but they are structured differently. The stack
// stores values in the order it gets them and removes the values in
// the opposite order (LIFO). This is referred to as
// *pushing onto the stack* and *popping off of the stack*
//
// It's fast because of the way it accesses the data: it never has to
// search for a place to put new data or a place to get data from because
// that place is *always* the top of the stack. Another property is that
// all data on the stack must take up a known, fixed size.
//
// For data that is an unknown size at compile time or a size that may
// change over time, we can store that data on the heap instead. The heap
// is less organized; we just ask for some amount of space. The OS
// finds an empty spot somewhere that's big enough for the request, marks
// it as in use, and returns a pointer to that location. It's called
// *allocating on the heap*. Pushing onto the stack is not considered
// allocation. A pointer is a known, fixed size, so it can sit on the
// heap, but for actual data, we have to follow the pointer.
//
// The heap is slower than the stack because we have to follow a pointer
// to get there (a level of indirection). Processors are faster due to
// temporal and spatial locality and caching if they have to jump around
// less.
//
// When a function is called, the values passed into the function
// (including, potentially, pointers to data on the heap) and the fn's
// local vars get pushed onto the stack. When its over, the vals get
// popped off the stack.
//
//!!
// Keeping track of what code is using what data on the heap, minimizing
// duplicate data on the heap, and cleaning up unused data on the heap
// so we don't run out of space are all problems that ownership helps.
// Once ownership is understood, you won't have to think about the stack
// and the heap often, but knowing that managing heap data is why
// ownership exists can help explain why it works the way that it does.
//!!
//
// [Ownership Rules]
// There exist 3 very important rules to ownership in Rust:
//
// 1) Each value in Rust has a variable that's called its *owner*
// 2) There can only be one owner at a time (the highlander rule)
// 3) When the owner goes out of scope, the value will be dropped
//
// + Variable Scope
// See scope()
//
// + Memory and Allocation
// See moves_and_mem
//
// + References
// See refs()
fn scope() {
// First example of ownership, we'll look at the *scope* of some
// variables. Scope is the range within a program for which an item
// is valid.
// s is a string literal. the value of the string is hardcoded into
// the text of the program. The variable is valid from the point
// at which it's declared until the end of the current *scope*.
{ // s is not valid here, it's not yet declared
let s = "hello"; // s is valid from this point forwards
// do some stuff with s
} // this scope is now over, and s is no longer valid
// There are two important points in time here:
// 1) When s comes *into* scope, it is valid.
// 2) s remains valid until it is *out of scope*
//
    // The relationship between scopes and when variables are valid
    // is similar to other programming langs. Let's build on top
    // of this by introducing the String type.
//
// + String type
// We're going to illustrate the rules of ownership using a data type
// that's more complex than the ones we've seen before. All the data
// types we've seen before are stored on the stack and popped off the
// stack when their scope is over, but we want to look at data
// that's on the heap and explore how Rust knows to clean that up.
//
// We'll concentrate on the parts of String that relate to ownership.
// They also apply to other complex data types provided by the
// stdlib and those that you create.
//
// We've seen string literals hardcoded into the program. They're
// convenient, but they aren't suitable for every situation in which
// you want to use text. For one reason, they're immutable. Also, not
// every string value is known when we write our code. The other type
// is a String, which is allocated on the heap. It's able to store an
// amount of text that is unknown at compile time. It's created from
// a literal with a `from` function:
let s = String::from("hello");
// Again, double colon (::) is an op that allows us to namespace
// this from function under the String type rather than using a name
// like string_from. It can be mutated:
let mut s = String::from("hello");
s.push_str(", world!"); // appends a literal to a String
println!("{}", s); // Will print the full string.
// Why can Strings be mutated but literals cannot? Difference is
// how they deal with memory.
}
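// To make the growable-String discussion above concrete, here is a small
// runnable sketch. The helper name and values are my own illustration,
// not from the book text; `len`/`capacity` are the real std String APIs.

```rust
// A minimal sketch of the heap-allocated, growable String described above.
// `len` is the bytes currently in use; `capacity` is what the allocator
// has reserved for the buffer (always >= len).
fn string_growth_demo() {
    let mut s = String::from("hello");
    s.push_str(", world!"); // appends in place, reallocating if needed
    assert_eq!(s, "hello, world!");
    assert_eq!(s.len(), 13);
    assert!(s.capacity() >= 13);
}
```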
fn | () {
// With string literals, we know the contents of the string at compile
// time, so the text is literally hardcoded into the executable,
// making them extremely fast and efficient. This property only comes
// from its immutability. We can't put a blob of memory into the binary
// for each piece of text whose size is unknown at compile time and
// whose size might change while running the program.
//
// To support a mutable, growing piece of text, need to allocate an
// amount of mem on the heap, unknown at compile time, to hold the
// contents. This means:
//
// 1) The memory must be requested from the OS at runtime.
// 2) Need a way of returning the mem to the OS when we're done with
// the allocated string.
//
// First part is done by us: the String::from implementation requests
// the memory it needs from the OS. This is pretty standard for most
// langs.
//
// The second part is different. In langs with GCs, it will keep track
// and clean up mem that isn't used anymore, and the programmer doesn't
// need to think about it. Without a GC, it's the programmer's
// responsibility to know when that memory is no longer being used
// and call code to explicitly return it.
//
// This has historically been a *very* difficult problem to solve.
// If you forget to, we'll waste memory and leak it.
// If we do it too early, we'll have an invalid variable (use after free)
// If we do it twice, that's a bug too.
//
// We need to pair exactly one allocation with one free.
//
// Rust takes its own unique path: the memory is automatically
// returned once the variable that owns it goes out of scope.
// When a variable goes out of scope, Rust calls a special function
// for us. The function is called drop, and it's where the author
// of String can put the code to return the memory. Rust calls
// `drop` automatically at the closing }.
//
// NOTE: C++ calls this pattern of deallocation at the end of its
// lifetime RAII. The drop function in Rust is similar to a dtor
//
// The pattern has a profound impact on the way that Rust code is
// written. Might seem simple, but the behavior of code can be
// unexpected in more complicated situations when we want to
// have multiple variables use the data that's been allocated
// on the heap.
//
// + Ways variables and data interact: Move
// Multiple variables can interact with the same data in different
// ways in rust:
// let x = 5;
// let y = x;
//
// So here, we bind the value of 5 to x, then we make a copy
// of the value in x and bind it to y.
    // We now have two vars, x and y, and both equal 5.
// This is exactly what's happening because integers are simple
// values with a known, fixed size, and these two 5 vals are
// pushed onto the stack.
//
    // let a = String::from("hello");
// let b = a;
//
// This looks similar and you'd probably assume it behaves the same
// way; the second would make a copy of the val in a and bind it to b.
// This is not what happens.
//
// Under the covers, a String is actually a type with a few values:
// ptr to some memory holding the string, a length, and a capacity.
// This group is stored on the stack. The length is how much memory
    // in bytes the contents of the String is currently using.
// The capacity is the total amount of memory, in bytes, the String
// has received from the OS. Difference between len and cap matters,
// but not the point of this.
//
// When a is assigned to b, the String data is copied, meaning we copy
// the pointer, the len, and the cap on the stack. The heap data is
// not copied, so b's pointer is going to refer to the same heap
// data that a does.
//
// Earlier we said when a variable goes out of scope, Rust will
// automatically call the drop function and clean up the heap mem.
// But in this case, both pointers are pointing to the same heap
    // memory. This is a problem. When a and b go out of scope, they
// will both attempt to free the same memory. This is a *double free*
// error and is one of the memory safety bugs we mentioned previously.
// Freeing mem twice can lead to mem corruption, which can lead
// to security vulnerabilities.
//
// To ensure mem safety, there's another detail to what happens in
// this situation in Rust. Instead of trying to copy the allocated
// memory, Rust considers a to no longer be valid and therefore, Rust
// doesn't need to free anything when a goes out of scope.
//
// If you were to try to use a after copying it to b, an error
// is thrown at compile time.
//
// The ideas of "shallow copy" and "deep copy" apply here. The
// concept of copying the pointer, length, and capacity without
    // copying the data sounds like a shallow copy. But because Rust
// also invalidates the first variable, instead of calling this
// a shallow copy, it's known as a *move*. We would read this by
// saying that a was *moved* into b.
//
// This solves the problem, because with only b as valid, when it
// goes out of scope, it alone will free the mem.
//
// There is an additional design choice implied by this:
// ** Rust will never automatically create "deep" copies of data. **
// Therefore, any *automatic* copying can be assumed to be
// inexpensive in terms of runtime performance.
//
    // + Ways variables and data interact: Clone
//
    // If you *do* want to deeply copy the heap data of a String, not
// just the stack data, a common method can be used called a *clone*.
//
// let a = String::from("hello");
// let b = a.clone();
//
// When you see a clone call, you know some arbitrary code is being
    // executed and that code may be expensive. It's an indicator that
// something different is going on.
//
    // + Stack-only data: Copy
// There's another wrinkle we haven't talked about yet. This code
// is using integers:
// let x = 5;
// let y = x;
//
// println!("x = {}, y = {}", x, y);
//
// This seems to contradict what we said; we don't have to call
// clone, but x is still valid and wasn't moved to y.
//
// The reason is types like integers that have a known size at
// compile time are stored *entirely* on the stack, so copies
// of the actual values are very quick to make. There's no reason
// we would want to prevent x from being valid after we create the
// variable y.
//
    // In other words, there's no difference between deep and shallow
// copying here, so calling clone wouldn't do anything different
// from the usual shallow copying and we can leave it out.
//
// Rust has a special annotation called the Copy trait that can
    // be placed on types like integers that are stored on the stack.
// If a type has the Copy trait, an older variable is still usable
// after assignment. Rust won't let us annotate a type with the Copy
// trait if the type, or any of its parts, has implemented the Drop
// trait.
//
// If the type needs something special to happen when the value goes
// out of scope and we add the Copy annotation to that type, we'll get
// a compile time error.
//
// What types are Copy? Docs can/should be read, but as a general rule,
// any group of simple scalar values can be Copy, and nothing that
// requires allocation or is some form of resource is Copy.
    // -> ints, bools, floats, tuples (but only if their elements are also Copy).
//
    // [Ownership and Functions]
// Semantics for passing a value to a function are similar to
// assigning a value to a variable. Passing a variable to a func
// will move or copy just like assignment.
let s = String::from("derpyfoobar"); // s comes into scope
takes_ownership(s); // s's value moves into the function...
//... and so is no longer valid here.
//println!("{}", s); // COMPILE ERROR!
let x = 5;
makes_copy(x);
println!("{} ", x); // This is fine, because it was a copy.
// [Return values and scope]
// Returning values can also transfer ownership. Here's an ex
// with similar annotations to previous examples:
{
        let baz = gives_ownership(); // gives_ownership moves its return
// value into baz
let duder = String::from("duder"); // duder comes into scope
        let lucha = takes_and_gives_back(duder); // duder is moved into
// takes_and_gives_back, which also moves its return value into lucha
println!("lucha! {}", lucha);
} // Here lucha goes out of scope and is dropped. duder goes out of
// scope but was moved
// The ownership of a variable follows the same pattern every time:
// **assigning a value to another variable moves it**. When a variable
// that includes data on the heap goes out of scope, the value will be
// cleaned up by `drop` unless the data has been moved to be owned by
// another variable.
//
// Taking ownership and then returning ownership with every fn is
// tedious. What if we need to let a function use a value but not take
// ownership? It's quite annoying that anything we pass in also needs
// to be passed back if we want to use it again, in addition to any
// data resulting from the body of the fn that we may want to return
// as well.
//
// It's possible to return multiple values using a tuple.
// But it's still obnoxious to constantly pass back a ton of stuff.
//
// Rust has a way to address this, and its called references!!!
//
////////////////////////////////////////////////////////////////////////////////
}
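// The move / clone / Copy behavior walked through above can be exercised
// directly. A hedged sketch (the function name and values are mine):

```rust
// Sketch of the three interactions described above: move (String),
// explicit deep copy (clone), and bitwise stack copy (Copy types).
fn move_clone_copy_demo() {
    let a = String::from("hello");
    let b = a; // `a` is moved; using `a` after this line would not compile
    assert_eq!(b, "hello");

    let c = String::from("hello");
    let d = c.clone(); // explicit deep copy: two separate heap buffers
    assert_eq!(c, d);

    let x = 5; // i32 is Copy: stored entirely on the stack
    let y = x;
    assert_eq!(x + y, 10); // x is still valid after the copy
}
```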
fn refs() {
// [References and Borrowing]
// The issue with the returning tuple code we've seen elsewhere in
// the ownership section is that we have to return the String to
// the calling function so we can still use the String after the call.
// Here we define calculate_length so that it uses a *reference* to
// an object as a param instead of taking ownership of the value.
let calc_len = |s: &String| -> usize {
s.len()
};
let duderington = String::from("duderington");
    println!("the length of the string -> {}", calc_len(&duderington));
// First, all the tuple code in the variable declaration is gone.
// We pass the string into the function as &duderington, and in the
// definition, we take &String rather than String.
//
// The ampersands are *references*, and they allow you to refer to
// some value without taking ownership over it.
// s inside of calc_len becomes a pointer to the String struct bound
// to duderington, which itself contains a ptr to the actual string
// data on the heap.
//
// A closer look at the function call:
//
// let ano = String::from("ano");
// let len = calc_len(&ano);
//
// The &ano syntax lets us create a ref that refers to the value
    // of ano, but does not own it. Because it does not own it, the val
// it points to will not be dropped when the ref goes out of scope.
//
// Likewise, the sig of the fn uses & to indicate the type of the
// param s is a ref.
//
// The scope in which the variable s is valid is the same as any
// fn param scope, but we don't drop what the ref points to when it
// goes out of scope because we don't have ownership.
// Functions that have refs as params instead of vals mean we don't
// need to return the vals in order to give back ownership, since
// we never had ownership in the first place.
//
// What happens if we try to mutate something we borrowed?
// Compiler errors. As vars are immutable by default, so are refs.
// We are not allowed to modify a vanilla ref.
//
// + Mutable refs
let change = |some_str: &mut String| {
some_str.push_str(" fu.");
};
let mut s = String::from("mutref");
change(&mut s);
println!("mutref string after stuff: [{}]", s);
// Mutable refs have a big caveat: you can only have one mutable ref
// to a particular piece of data in a particular scope. This will fail:
//
// let mut s = String::from("fert");
//
// let r1 = &mut s;
// let r2 = &mut s;
//
// This restriction allows for mutation but in a very controlled
// fashion. It's something that new Rusters struggle with, because most
// langs let you mutate whenever you'd like. Benefit is that Rust
// can prevent data races at compile time.
//
// A data race is a particular type of race condition in which these
// three behaviors occur:
// 1) Two or more pointers access the same data at the same time
    // 2) At least one of the pointers is being used to write to the data
// 3) No mechanism being used to sync the access to the data
//
// Data races cause undefined behavior and can be very difficult to
// diagnose and solve when you're trying to figure out what's
// happening at runtime. Rust won't even let you compile it.
//
// Rust will let you create a new scope allowing for multiple mut
// refs, but just not *simultaneous* ones!
//
let mut s = String::from("fert");
{
let r1 = &mut s;
} //r1 goes out of scope here, so we can make a new ref with no prob
let r2 = &mut s;
// A similar rule exists for combining immutable refs.
// This will error out.
//let r1 = &s;
//let r2 = &s;
//let r3 = &mut s;
//
// Rust *also* does not allow for a mut ref while we have an
// immutable one. Users of an immutable ref don't expect the vals
// to change from under them. Multiple immutable refs are okay
// because no one who is reading the data has the ability to modify
// anybody else's data.
//
// + Dangling refs
// In langs with pointers, it's easy to erroneously create dangling
// pointers, or pointers that ref memory that may have been given
    // to someone else, by freeing some memory while preserving a pointer
// to that memory. This is called a use after free.
//
// If we have a ref to some data, the compiler ensures that the data
// will not go out of scope before the ref to the data does.
// This will error out:
//
/*
    let dangle = || -> &String {
        let s = String::from("hello");
        &s
    };
    let ref_to_nothing = dangle();
*/
// This will complain about something we haven't covered yet:
// *lifetimes*.
// The key here is the return type contains a borrowed value, but there
// is no value for it to be borrowed from.
// Because s is created inside dangle, when the code of dangle is
// finished, s will be deallocated. But we tried to return a ref
    // to it. That means this ref would be pointing to an invalid String.
// Rust won't let us do that.
    // The solution here is to return the String directly, thus transferring
// ownership via a move to the caller of the fn.
let no_dangle = || -> String {
let s = String::from("no dangle");
s
};
let ndstr = no_dangle();
println!("{}", ndstr);
// Recapping the rules of refs:
// 1) At any given time you can have *either* but not both of:
// -> One mutable ref
// -> Any number of immutable refs
// 2) References must always be valid
}
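// The borrowing rules recapped above can be tied together in one runnable
// sketch (the helper name is mine; the strings echo the examples above):

```rust
// Sketch of the borrowing rules: immutable refs leave ownership with the
// caller, and mutable refs are exclusive but may be taken sequentially.
fn borrow_demo() {
    // Borrowing: the closure takes &String, so ownership stays put.
    let calc_len = |s: &String| -> usize { s.len() };
    let s = String::from("duderington");
    assert_eq!(calc_len(&s), 11);
    assert_eq!(s, "duderington"); // still valid: we only lent it out

    // Only one mutable ref at a time, but non-overlapping ones are fine.
    let mut m = String::from("fert");
    {
        let r1 = &mut m;
        r1.push_str("ile");
    } // r1's borrow ends here
    let r2 = &mut m; // ...so a second mutable borrow is allowed
    r2.push('!');
    assert_eq!(m, "fertile!"); // r2's borrow ended at its last use
}
```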
fn takes_ownership(some_string: String) { // some_string comes into scope
println!("{}", some_string);
} // here some_string goes out of scope and `drop` is called. The
// backing memory is freed.
fn makes_copy(some_integer: i32) { // some_integer comes into scope.
println!("{}", some_integer);
} // Here, some_integer goes out of scope. Nothing special happens.
fn gives_ownership() -> String { // gives_ownership will move its return
// value into the function that calls it
let derp = String::from("derp"); // derp comes into scope
derp // derp is returned and moves out to the calling function
}
// takes_and_gives_back will take a String and return one
fn takes_and_gives_back(foo: String) -> String { // foo comes into scope
foo // foo is returned and moves out to the calling fn
}
| moves_and_mem | identifier_name |
main.rs | fn main() {
//scope()
//moves_and_mem();
//refs()
slices()
}
////////////////////////////////////////////////////////////////////////////////
// What is Ownership?
////////////////////////////////////////////////////////////////////////////////
// Ownership is Rust's central feature.
// All programs have to manage the way they use a computer's memory while
// running. Some have garbage collection that constantly looks for no
// longer used memory as the program runs; in others, the programmer must
// explicitly allocate and release the memory. Rust uses a third approach:
//
// Memory is managed through a system of ownership with a set of rules
// that the compiler checks at compile time. No run-time costs are
// incurred for any of the ownership features.
//
// [Stack and Heap]
// In most langs, you really don't need to think about the stack and
// the heap often. In systems langs, whether the value is on the stack
// or on the heap has more of an effect on how the language behaves and
// why we have to make certain decisions.
//
// The stack and the heap are parts of memory that are available to your
// code to use at runtime, but they are structured differently. The stack | // the opposite order (LIFO). This is referred to as
// *pushing onto the stack* and *popping off of the stack*
//
// It's fast because of the way it accesses the data: it never has to
// search for a place to put new data or a place to get data from because
// that place is *always* the top of the stack. Another property is that
// all data on the stack must take up a known, fixed size.
//
// For data that is an unknown size at compile time or a size that may
// change over time, we can store that data on the heap instead. The heap
// is less organized; we just ask for some amount of space. The OS
// finds an empty spot somewhere that's big enough for the request, marks
// it as in use, and returns a pointer to that location. It's called
// *allocating on the heap*. Pushing onto the stack is not considered
// allocation. A pointer is a known, fixed size, so it can sit on the
// heap, but for actual data, we have to follow the pointer.
//
// The heap is slower than the stack because we have to follow a pointer
// to get there (a level of indirection). Processors are faster due to
// temporal and spatial locality and caching if they have to jump around
// less.
//
// When a function is called, the values passed into the function
// (including, potentially, pointers to data on the heap) and the fn's
// local vars get pushed onto the stack. When its over, the vals get
// popped off the stack.
//
//!!
// Keeping track of what code is using what data on the heap, minimizing
// duplicate data on the heap, and cleaning up unused data on the heap
// so we don't run out of space are all problems that ownership helps.
// Once ownership is understood, you won't have to think about the stack
// and the heap often, but knowing that managing heap data is why
// ownership exists can help explain why it works the way that it does.
//!!
//
// [Ownership Rules]
// There exist 3 very important rules to ownership in Rust:
//
// 1) Each value in Rust has a variable that's called its *owner*
// 2) There can only be one owner at a time (the highlander rule)
// 3) When the owner goes out of scope, the value will be dropped
//
// + Variable Scope
// See scope()
//
// + Memory and Allocation
// See moves_and_mem
//
// + References
// See refs()
fn scope() {
// First example of ownership, we'll look at the *scope* of some
// variables. Scope is the range within a program for which an item
// is valid.
// s is a string literal. the value of the string is hardcoded into
// the text of the program. The variable is valid from the point
// at which it's declared until the end of the current *scope*.
{ // s is not valid here, it's not yet declared
let s = "hello"; // s is valid from this point forwards
// do some stuff with s
} // this scope is now over, and s is no longer valid
// There are two important points in time here:
// 1) When s comes *into* scope, it is valid.
// 2) s remains valid until it is *out of scope*
//
    // The relationship between scopes and when variables are valid
    // is similar to other programming langs. Let's build on top
    // of this by introducing the String type.
//
// + String type
// We're going to illustrate the rules of ownership using a data type
// that's more complex than the ones we've seen before. All the data
// types we've seen before are stored on the stack and popped off the
// stack when their scope is over, but we want to look at data
// that's on the heap and explore how Rust knows to clean that up.
//
// We'll concentrate on the parts of String that relate to ownership.
// They also apply to other complex data types provided by the
// stdlib and those that you create.
//
// We've seen string literals hardcoded into the program. They're
// convenient, but they aren't suitable for every situation in which
// you want to use text. For one reason, they're immutable. Also, not
// every string value is known when we write our code. The other type
// is a String, which is allocated on the heap. It's able to store an
// amount of text that is unknown at compile time. It's created from
// a literal with a `from` function:
let s = String::from("hello");
// Again, double colon (::) is an op that allows us to namespace
// this from function under the String type rather than using a name
// like string_from. It can be mutated:
let mut s = String::from("hello");
s.push_str(", world!"); // appends a literal to a String
println!("{}", s); // Will print the full string.
// Why can Strings be mutated but literals cannot? Difference is
// how they deal with memory.
}
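// The literal-vs-String distinction above is easy to demonstrate. A short
// sketch (my own example, not the book's; `push_str` is the real std API):

```rust
// Sketch: a &str literal is immutable and baked into the binary; a String
// is a growable, heap-allocated copy that can be mutated independently.
fn literal_vs_string_demo() {
    let lit: &str = "hello";           // immutable literal
    let mut owned = String::from(lit); // heap-allocated, growable copy
    owned.push_str(", world!");
    assert_eq!(lit, "hello");          // the literal is unchanged
    assert_eq!(owned, "hello, world!");
}
```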
fn moves_and_mem() {
// With string literals, we know the contents of the string at compile
// time, so the text is literally hardcoded into the executable,
// making them extremely fast and efficient. This property only comes
// from its immutability. We can't put a blob of memory into the binary
// for each piece of text whose size is unknown at compile time and
// whose size might change while running the program.
//
// To support a mutable, growing piece of text, need to allocate an
// amount of mem on the heap, unknown at compile time, to hold the
// contents. This means:
//
// 1) The memory must be requested from the OS at runtime.
// 2) Need a way of returning the mem to the OS when we're done with
// the allocated string.
//
// First part is done by us: the String::from implementation requests
// the memory it needs from the OS. This is pretty standard for most
// langs.
//
// The second part is different. In langs with GCs, it will keep track
// and clean up mem that isn't used anymore, and the programmer doesn't
// need to think about it. Without a GC, it's the programmer's
// responsibility to know when that memory is no longer being used
// and call code to explicitly return it.
//
// This has historically been a *very* difficult problem to solve.
// If you forget to, we'll waste memory and leak it.
// If we do it too early, we'll have an invalid variable (use after free)
// If we do it twice, that's a bug too.
//
// We need to pair exactly one allocation with one free.
//
// Rust takes its own unique path: the memory is automatically
// returned once the variable that owns it goes out of scope.
// When a variable goes out of scope, Rust calls a special function
// for us. The function is called drop, and it's where the author
// of String can put the code to return the memory. Rust calls
// `drop` automatically at the closing }.
//
// NOTE: C++ calls this pattern of deallocation at the end of its
// lifetime RAII. The drop function in Rust is similar to a dtor
//
// The pattern has a profound impact on the way that Rust code is
// written. Might seem simple, but the behavior of code can be
// unexpected in more complicated situations when we want to
// have multiple variables use the data that's been allocated
// on the heap.
//
// + Ways variables and data interact: Move
// Multiple variables can interact with the same data in different
// ways in rust:
// let x = 5;
// let y = x;
//
// So here, we bind the value of 5 to x, then we make a copy
// of the value in x and bind it to y.
    // We now have two vars, x and y, and both equal 5.
// This is exactly what's happening because integers are simple
// values with a known, fixed size, and these two 5 vals are
// pushed onto the stack.
//
    // let a = String::from("hello");
// let b = a;
//
// This looks similar and you'd probably assume it behaves the same
// way; the second would make a copy of the val in a and bind it to b.
// This is not what happens.
//
// Under the covers, a String is actually a type with a few values:
// ptr to some memory holding the string, a length, and a capacity.
// This group is stored on the stack. The length is how much memory
    // in bytes the contents of the String is currently using.
// The capacity is the total amount of memory, in bytes, the String
// has received from the OS. Difference between len and cap matters,
// but not the point of this.
//
// When a is assigned to b, the String data is copied, meaning we copy
// the pointer, the len, and the cap on the stack. The heap data is
// not copied, so b's pointer is going to refer to the same heap
// data that a does.
//
// Earlier we said when a variable goes out of scope, Rust will
// automatically call the drop function and clean up the heap mem.
// But in this case, both pointers are pointing to the same heap
    // memory. This is a problem. When a and b go out of scope, they
// will both attempt to free the same memory. This is a *double free*
// error and is one of the memory safety bugs we mentioned previously.
// Freeing mem twice can lead to mem corruption, which can lead
// to security vulnerabilities.
//
// To ensure mem safety, there's another detail to what happens in
// this situation in Rust. Instead of trying to copy the allocated
// memory, Rust considers a to no longer be valid and therefore, Rust
// doesn't need to free anything when a goes out of scope.
//
// If you were to try to use a after copying it to b, an error
// is thrown at compile time.
//
// The ideas of "shallow copy" and "deep copy" apply here. The
// concept of copying the pointer, length, and capacity without
    // copying the data sounds like a shallow copy. But because Rust
// also invalidates the first variable, instead of calling this
// a shallow copy, it's known as a *move*. We would read this by
// saying that a was *moved* into b.
//
// This solves the problem, because with only b as valid, when it
// goes out of scope, it alone will free the mem.
//
// There is an additional design choice implied by this:
// ** Rust will never automatically create "deep" copies of data. **
// Therefore, any *automatic* copying can be assumed to be
// inexpensive in terms of runtime performance.
//
    // + Ways variables and data interact: Clone
//
    // If you *do* want to deeply copy the heap data of a String, not
// just the stack data, a common method can be used called a *clone*.
//
// let a = String::from("hello");
// let b = a.clone();
//
// When you see a clone call, you know some arbitrary code is being
    // executed and that code may be expensive. It's an indicator that
// something different is going on.
//
    // + Stack-only data: Copy
// There's another wrinkle we haven't talked about yet. This code
// is using integers:
// let x = 5;
// let y = x;
//
// println!("x = {}, y = {}", x, y);
//
// This seems to contradict what we said; we don't have to call
// clone, but x is still valid and wasn't moved to y.
//
// The reason is types like integers that have a known size at
// compile time are stored *entirely* on the stack, so copies
// of the actual values are very quick to make. There's no reason
// we would want to prevent x from being valid after we create the
// variable y.
//
// In other words, there's no difference between deep and shallow
// copying here, so calling clone wouldn't do anything different
// from the usual shallow copying and we can leave it out.
//
// Rust has a special annotation called the Copy trait that can
// be placed on types like integers that are stored on the stack.
// If a type has the Copy trait, an older variable is still usable
// after assignment. Rust won't let us annotate a type with the Copy
// trait if the type, or any of its parts, has implemented the Drop
// trait.
//
// If the type needs something special to happen when the value goes
// out of scope and we add the Copy annotation to that type, we'll get
// a compile time error.
//
// What types are Copy? Docs can/should be read, but as a general rule,
// any group of simple scalar values can be Copy, and nothing that
// requires allocation or is some form of resource is Copy.
// -> ints, bools, floats, tuples (only if they contain also Copys).
//
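// A sketch of opting into Copy with a derive (the `Point` type is
// hypothetical): `Copy` requires `Clone`, and the derive works only because
// both fields are themselves Copy types.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

fn demo_copy() -> (Point, Point) {
    let p1 = Point { x: 1, y: 2 };
    let p2 = p1; // a cheap bitwise copy, not a move: `p1` stays valid
    (p1, p2)
}
```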
// + Ownership and Functions
// Semantics for passing a value to a function are similar to
// assigning a value to a variable. Passing a variable to a func
// will move or copy just like assignment.
let s = String::from("derpyfoobar"); // s comes into scope
takes_ownership(s); // s's value moves into the function...
//... and so is no longer valid here.
//println!("{}", s); // COMPILE ERROR!
let x = 5;
makes_copy(x);
println!("{} ", x); // This is fine, because it was a copy.
// [Return values and scope]
// Returning values can also transfer ownership. Here's an ex
// with similar annotations to previous examples:
{
let baz = gives_ownership(); // gives ownership moves its return
// value into baz
let duder = String::from("duder"); // duder comes into scope
let lucha = takes_and_gives_back(duder); // duder is moved into
// takes_and_gives_back, which also moves its return value into lucha
println!("lucha! {}", lucha);
} // Here lucha goes out of scope and is dropped. duder goes out of
// scope but was moved
// The ownership of a variable follows the same pattern every time:
// **assigning a value to another variable moves it**. When a variable
// that includes data on the heap goes out of scope, the value will be
// cleaned up by `drop` unless the data has been moved to be owned by
// another variable.
//
// Taking ownership and then returning ownership with every fn is
// tedious. What if we need to let a function use a value but not take
// ownership? It's quite annoying that anything we pass in also needs
// to be passed back if we want to use it again, in addition to any
// data resulting from the body of the fn that we may want to return
// as well.
//
// It's possible to return multiple values using a tuple.
// But it's still obnoxious to constantly pass back a ton of stuff.
//
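// The tuple workaround looks roughly like this (this mirrors the book's
// calculate_length example; the exact names are illustrative):

```rust
fn calculate_length(s: String) -> (String, usize) {
    let len = s.len();
    (s, len) // hand ownership of the String back along with the answer
}
```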
// Rust has a way to address this, and it's called references!!!
//
////////////////////////////////////////////////////////////////////////////////
}
fn refs() {
// [References and Borrowing]
// The issue with the returning tuple code we've seen elsewhere in
// the ownership section is that we have to return the String to
// the calling function so we can still use the String after the call.
// Here we define calculate_length so that it uses a *reference* to
// an object as a param instead of taking ownership of the value.
let calc_len = |s: &String| -> usize {
s.len()
};
let duderington = String::from("duderington");
println!("the length of the string. -> {}", calc_len(&duderington));
// First, all the tuple code in the variable declaration is gone.
// We pass the string into the function as &duderington, and in the
// definition, we take &String rather than String.
//
// The ampersands are *references*, and they allow you to refer to
// some value without taking ownership over it.
// s inside of calc_len becomes a pointer to the String struct bound
// to duderington, which itself contains a ptr to the actual string
// data on the heap.
//
// A closer look at the function call:
//
// let ano = String::from("ano");
// let len = calc_len(&ano);
//
// The &ano syntax lets us create a ref that refers to the value
// of ano, but does not own it. Because it does not own it, the val
// it points to will not be dropped when the ref goes out of scope.
//
// Likewise, the sig of the fn uses & to indicate the type of the
// param s is a ref.
//
// The scope in which the variable s is valid is the same as any
// fn param scope, but we don't drop what the ref points to when it
// goes out of scope because we don't have ownership.
// Functions that have refs as params instead of vals mean we don't
// need to return the vals in order to give back ownership, since
// we never had ownership in the first place.
//
// What happens if we try to mutate something we borrowed?
// Compiler errors. Just as vars are immutable by default, so are refs.
// We are not allowed to modify a vanilla ref.
//
// + Mutable refs
let change = |some_str: &mut String| {
some_str.push_str(" fu.");
};
let mut s = String::from("mutref");
change(&mut s);
println!("mutref string after stuff: [{}]", s);
// Mutable refs have a big caveat: you can only have one mutable ref
// to a particular piece of data in a particular scope. This will fail:
//
// let mut s = String::from("fert");
//
// let r1 = &mut s;
// let r2 = &mut s;
//
// This restriction allows for mutation but in a very controlled
// fashion. It's something that new Rusters struggle with, because most
// langs let you mutate whenever you'd like. Benefit is that Rust
// can prevent data races at compile time.
//
// A data race is a particular type of race condition in which these
// three behaviors occur:
// 1) Two or more pointers access the same data at the same time
// 2) At least one of the pointers is being used to write to the data
// 3) No mechanism is being used to sync the access to the data
//
// Data races cause undefined behavior and can be very difficult to
// diagnose and solve when you're trying to figure out what's
// happening at runtime. Rust won't even let you compile it.
//
// Rust will let you create a new scope allowing for multiple mut
// refs, but just not *simultaneous* ones!
//
let mut s = String::from("fert");
{
let r1 = &mut s;
} //r1 goes out of scope here, so we can make a new ref with no prob
let r2 = &mut s;
// A similar rule exists for combining immutable refs.
// This will error out.
//let r1 = &s;
//let r2 = &s;
//let r3 = &mut s;
//
// Rust *also* does not allow for a mut ref while we have an
// immutable one. Users of an immutable ref don't expect the vals
// to change from under them. Multiple immutable refs are okay
// because no one who is reading the data has the ability to modify
// anybody else's data.
//
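// A sketch of the rules in practice (assuming a modern compiler with
// non-lexical lifetimes): immutable borrows may coexist, and a mutable borrow
// becomes legal once the immutable ones are no longer used.

```rust
fn demo_borrow_rules() -> usize {
    let mut s = String::from("fert");
    let r1 = &s; // several immutable refs are fine...
    let r2 = &s;
    let total = r1.len() + r2.len(); // ...this is the last use of r1 and r2
    let r3 = &mut s; // ok: no live immutable borrows remain
    r3.push('!');
    total + s.len()
}
```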
// + Dangling refs
// In langs with pointers, it's easy to erroneously create dangling
// pointers, or pointers that ref memory that may have been given
// to someone else, by freeing some memory while preserving a pointer
// to that memory. This is called a use after free.
//
// If we have a ref to some data, the compiler ensures that the data
// will not go out of scope before the ref to the data does.
// This will error out:
//
/*
let dangle = || -> &String {
let s = String::from("hello");
&s
};
let ref_to_nothing = dangle();
*/
// This will complain about something we haven't covered yet:
// *lifetimes*.
// The key here is the return type contains a borrowed value, but there
// is no value for it to be borrowed from.
// Because s is created inside dangle, when the code of dangle is
// finished, s will be deallocated. But we tried to return a ref
// to it. That means this ref would be pointing to an invalid String.
// Rust won't let us do that.
// The solution here is to return the String directly, thus transferring
// ownership via a move to the caller of the fn.
let no_dangle = || -> String {
let s = String::from("no dangle");
s
};
let ndstr = no_dangle();
println!("{}", ndstr);
// Recapping the rules of refs:
// 1) At any given time you can have *either* but not both of:
// -> One mutable ref
// -> Any number of immutable refs
// 2) References must always be valid
}
fn takes_ownership(some_string: String) { // some_string comes into scope
println!("{}", some_string);
} // here some string goes out of scope and `drop` is called. The
// backing memory is freed.
fn makes_copy(some_integer: i32) { // some integer comes into scope.
println!("{}", some_integer);
} // Here, some_integer goes out of scope. Nothing special happens.
fn gives_ownership() -> String { // Gives ownership will move its return
// value into the function that calls it
let derp = String::from("derp"); // derp comes into scope
derp // derp is returned and moves out to the calling function
}
// takes_and_gives_back will take a String and return one
fn takes_and_gives_back(foo: String) -> String { // foo comes into scope
foo // foo is returned and moves out to the calling fn
}
////////////////////////////////////////////////////////////////////////////////
// 3rd_person.rs
//! Example 03. 3rd person walk simulator.
//!
//! Difficulty: Advanced.
//!
//! This example is based on the async example, because it requires loading a
//! decent amount of resources, which might be slow on some machines.
//!
//! In this example we'll create simple 3rd person game with character that can idle,
//! walk, or jump.
//!
//! Also this example demonstrates the power of animation blending machines. Animation
//! blending machines are used in all modern games to create complex animations from set
//! of simple ones.
//!
//! TODO: Improve explanations. Some places can be explained better.
//!
//! Known bugs: Sometimes the character will jump, but the jumping animation is not playing.
//!
//! Possible improvements:
//! - Smart camera - camera which will not penetrate walls.
//! - Separate animation machines for upper and lower body - upper machine might be
//! for combat, lower - for locomotion.
//! - Tons of them, this is simple example after all.
pub mod shared;
use crate::shared::{create_ui, fix_shadows_distance, Game, GameScene};
use std::time::Instant;
use winit::keyboard::KeyCode;
use fyrox::{
core::{
algebra::Vector2,
log::{Log, MessageKind},
},
engine::GraphicsContext,
event::{Event, WindowEvent},
event_loop::ControlFlow,
gui::{
message::MessageDirection, progress_bar::ProgressBarMessage, text::TextMessage,
widget::WidgetMessage,
},
renderer::QualitySettings,
utils::translate_event,
};
fn main() {
let (mut game, event_loop) = Game::new("Example 03 - 3rd person");
// Create simple user interface that will show some useful info.
let interface = create_ui(
&mut game.engine.user_interface.build_ctx(),
Vector2::new(100.0, 100.0),
);
let mut previous = Instant::now();
let fixed_timestep = 1.0 / 60.0;
let mut lag = 0.0;
// Finally run our event loop which will respond to OS and window events and update
// engine state accordingly.
event_loop.run(move |event, window_target, control_flow| {
match event {
Event::MainEventsCleared => {
// This is main game loop - it has fixed time step which means that game
// code will run at fixed speed even if renderer can't give you desired
// 60 fps.
let elapsed = previous.elapsed();
previous = Instant::now();
lag += elapsed.as_secs_f32();
while lag >= fixed_timestep {
// ************************
// Put your game logic here.
// ************************
// Check each frame if our scene is created - here we're just trying to lock the context
// without blocking, it is important for main thread to be functional while other
// thread still loading data.
if let Ok(mut load_context) = game.load_context.as_ref().unwrap().try_lock() {
if let Some(load_result) = load_context.scene_data.take() {
// Add scene to engine - engine will take ownership over scene and will return
// you a handle to scene which can be used later on to borrow it and do some
// actions you need.
game.game_scene = Some(GameScene {
scene: game.engine.scenes.add(load_result.scene),
player: load_result.player,
});
// Once scene is loaded, we should hide progress bar and text.
game.engine
.user_interface
.send_message(WidgetMessage::visibility(
interface.progress_bar,
MessageDirection::ToWidget,
false,
));
game.engine
.user_interface
.send_message(WidgetMessage::visibility(
interface.progress_text,
MessageDirection::ToWidget,
false,
));
}
// Report progress in UI.
game.engine
.user_interface
.send_message(ProgressBarMessage::progress(
interface.progress_bar,
MessageDirection::ToWidget,
load_context.progress,
));
game.engine.user_interface.send_message(TextMessage::text(
interface.progress_text,
MessageDirection::ToWidget,
format!(
"Loading scene: {}%\n{}",
load_context.progress * 100.0,
load_context.message
),
));
}
// Update scene only if it is loaded.
if let Some(game_scene) = game.game_scene.as_mut() {
// Use stored scene handle to borrow a mutable reference of scene in
// engine.
let scene = &mut game.engine.scenes[game_scene.scene];
game_scene.player.update(scene, fixed_timestep);
}
if let GraphicsContext::Initialized(ref ctx) = game.engine.graphics_context {
let debug_text = format!(
"Example 03 - 3rd Person\n\
[W][S][A][D] - walk, [SPACE] - jump.\n\
Use [1][2][3][4] to select graphics quality.\n\
{}",
ctx.renderer.get_statistics()
);
game.engine.user_interface.send_message(TextMessage::text(
interface.debug_text,
MessageDirection::ToWidget,
debug_text,
));
}
// It is very important to "pump" messages from UI. Even if don't need to
// respond to such message, you should call this method, otherwise UI
// might behave very weird.
while let Some(_ui_event) = game.engine.user_interface.poll_message() {
// ************************
// Put your data model synchronization code here. It should
// take message and update data in your game according to
// changes in UI.
// ************************
}
game.engine
.update(fixed_timestep, control_flow, &mut lag, Default::default());
lag -= fixed_timestep;
}
// Rendering must be explicitly requested and handled after RedrawRequested event is received.
if let GraphicsContext::Initialized(ref ctx) = game.engine.graphics_context {
ctx.window.request_redraw();
}
}
Event::Resumed => {
game.engine
.initialize_graphics_context(window_target)
.unwrap();
}
Event::Suspended => {
game.engine.destroy_graphics_context().unwrap();
}
Event::RedrawRequested(_) => {
// Run renderer at max speed - it is not tied to game code.
game.engine.render().unwrap();
}
Event::LoopDestroyed => {
println!("{:?}", fyrox::core::profiler::print());
}
Event::WindowEvent { event, .. } => {
match &event {
WindowEvent::CloseRequested => *control_flow = ControlFlow::Exit,
WindowEvent::Resized(size) => {
// It is very important to handle Resized event from window, because
// renderer knows nothing about window size - it must be notified
// directly when window size has changed.
if let Err(e) = game.engine.set_frame_size((*size).into()) {
Log::writeln(
MessageKind::Error,
format!("Unable to set frame size: {:?}", e),
);
}
// Root UI node should be resized too, otherwise progress bar will stay
// in wrong position after resize.
if let GraphicsContext::Initialized(ref ctx) = game.engine.graphics_context
{
let size = size.to_logical(ctx.window.scale_factor());
game.engine
.user_interface
.send_message(WidgetMessage::width(
interface.root,
MessageDirection::ToWidget,
size.width,
));
game.engine
.user_interface
.send_message(WidgetMessage::height(
interface.root,
MessageDirection::ToWidget,
size.height,
));
}
}
WindowEvent::KeyboardInput { event: input, .. } => {
// Handle key input events via `WindowEvent`, not via `DeviceEvent` (#32)
if let Some(game_scene) = game.game_scene.as_mut() {
game_scene.player.handle_key_event(input, fixed_timestep);
}
let settings = match input.physical_key {
KeyCode::Digit1 => Some(QualitySettings::ultra()),
KeyCode::Digit2 => Some(QualitySettings::high()),
KeyCode::Digit3 => Some(QualitySettings::medium()),
KeyCode::Digit4 => Some(QualitySettings::low()),
_ => None,
};
if let Some(settings) = settings {
if let GraphicsContext::Initialized(ref mut ctx) =
game.engine.graphics_context
{
ctx.renderer
.set_quality_settings(&fix_shadows_distance(settings))
.unwrap();
}
}
}
_ => (),
}
// It is very important to "feed" user interface (UI) with events coming
// from main window, otherwise UI won't respond to mouse, keyboard, or any
// other event.
if let Some(os_event) = translate_event(&event) {
game.engine.user_interface.process_os_event(&os_event);
}
}
Event::DeviceEvent { event, .. } => {
if let Some(game_scene) = game.game_scene.as_mut() {
game_scene
.player
.handle_device_event(&event, fixed_timestep);
}
}
_ => *control_flow = ControlFlow::Poll,
}
});
}
3rd_person.rs | //! Example 03. 3rd person walk simulator.
//!
//! Difficulty: Advanced.
//!
//! This example based on async example, because it requires to load decent amount of
//! resources which might be slow on some machines.
//!
//! In this example we'll create simple 3rd person game with character that can idle,
//! walk, or jump.
//!
//! Also this example demonstrates the power of animation blending machines. Animation
//! blending machines are used in all modern games to create complex animations from set
//! of simple ones.
//!
//! TODO: Improve explanations. Some places can be explained better.
//!
//! Known bugs: Sometimes character will jump, but jumping animations is not playing.
//!
//! Possible improvements:
//! - Smart camera - camera which will not penetrate walls.
//! - Separate animation machines for upper and lower body - upper machine might be
//! for combat, lower - for locomotion.
//! - Tons of them, this is simple example after all.
pub mod shared;
use crate::shared::{create_ui, fix_shadows_distance, Game, GameScene};
use std::time::Instant;
use winit::keyboard::KeyCode;
use fyrox::{
core::{
algebra::Vector2,
log::{Log, MessageKind},
},
engine::GraphicsContext,
event::{Event, WindowEvent},
event_loop::ControlFlow,
gui::{
message::MessageDirection, progress_bar::ProgressBarMessage, text::TextMessage,
widget::WidgetMessage,
},
renderer::QualitySettings,
utils::translate_event,
};
fn main() {
let (mut game, event_loop) = Game::new("Example 03 - 3rd person");
// Create simple user interface that will show some useful info.
let interface = create_ui(
&mut game.engine.user_interface.build_ctx(),
Vector2::new(100.0, 100.0),
);
let mut previous = Instant::now();
let fixed_timestep = 1.0 / 60.0;
let mut lag = 0.0;
// Finally run our event loop which will respond to OS and window events and update
// engine state accordingly.
event_loop.run(move |event, window_target, control_flow| {
match event {
Event::MainEventsCleared => {
// This is main game loop - it has fixed time step which means that game
// code will run at fixed speed even if renderer can't give you desired
// 60 fps.
let elapsed = previous.elapsed();
previous = Instant::now();
lag += elapsed.as_secs_f32();
while lag >= fixed_timestep {
// ************************
// Put your game logic here.
// ************************
// Check each frame if our scene is created - here we just trying to lock context
// without blocking, it is important for main thread to be functional while other
// thread still loading data.
if let Ok(mut load_context) = game.load_context.as_ref().unwrap().try_lock() {
if let Some(load_result) = load_context.scene_data.take() {
// Add scene to engine - engine will take ownership over scene and will return
// you a handle to scene which can be used later on to borrow it and do some
// actions you need.
game.game_scene = Some(GameScene {
scene: game.engine.scenes.add(load_result.scene),
player: load_result.player,
});
// Once scene is loaded, we should hide progress bar and text.
game.engine
.user_interface
.send_message(WidgetMessage::visibility(
interface.progress_bar,
MessageDirection::ToWidget,
false,
));
game.engine
.user_interface
.send_message(WidgetMessage::visibility(
interface.progress_text,
MessageDirection::ToWidget,
false,
));
}
// Report progress in UI.
game.engine
.user_interface
.send_message(ProgressBarMessage::progress(
interface.progress_bar,
MessageDirection::ToWidget,
load_context.progress,
));
game.engine.user_interface.send_message(TextMessage::text(
interface.progress_text,
MessageDirection::ToWidget,
format!(
"Loading scene: {}%\n{}",
load_context.progress * 100.0,
load_context.message
),
));
}
// Update scene only if it is loaded.
if let Some(game_scene) = game.game_scene.as_mut() {
// Use stored scene handle to borrow a mutable reference of scene in
// engine.
let scene = &mut game.engine.scenes[game_scene.scene];
game_scene.player.update(scene, fixed_timestep);
}
if let GraphicsContext::Initialized(ref ctx) = game.engine.graphics_context {
let debug_text = format!(
"Example 03 - 3rd Person\n\
[W][S][A][D] - walk, [SPACE] - jump.\n\
Use [1][2][3][4] to select graphics quality.\n\
{}",
ctx.renderer.get_statistics()
);
game.engine.user_interface.send_message(TextMessage::text(
interface.debug_text,
MessageDirection::ToWidget,
debug_text,
));
}
// It is very important to "pump" messages from UI. Even if don't need to
// respond to such message, you should call this method, otherwise UI
// might behave very weird.
while let Some(_ui_event) = game.engine.user_interface.poll_message() {
// ************************
// Put your data model synchronization code here. It should
// take message and update data in your game according to
// changes in UI.
// ************************
}
game.engine
.update(fixed_timestep, control_flow, &mut lag, Default::default());
lag -= fixed_timestep;
}
// Rendering must be explicitly requested and handled after RedrawRequested event is received.
if let GraphicsContext::Initialized(ref ctx) = game.engine.graphics_context {
ctx.window.request_redraw();
}
}
Event::Resumed => {
game.engine
.initialize_graphics_context(window_target)
.unwrap();
}
Event::Suspended => {
game.engine.destroy_graphics_context().unwrap();
}
Event::RedrawRequested(_) => {
// Run renderer at max speed - it is not tied to game code.
game.engine.render().unwrap();
}
Event::LoopDestroyed => {
println!("{:?}", fyrox::core::profiler::print());
}
Event::WindowEvent { event,.. } => {
match &event {
WindowEvent::CloseRequested => *control_flow = ControlFlow::Exit,
WindowEvent::Resized(size) => {
// It is very important to handle Resized event from window, because
// renderer knows nothing about window size - it must be notified
// directly when window size has changed.
if let Err(e) = game.engine.set_frame_size((*size).into()) {
Log::writeln(
MessageKind::Error,
format!("Unable to set frame size: {:?}", e),
);
}
// Root UI node should be resized too, otherwise progress bar will stay
// in wrong position after resize.
if let GraphicsContext::Initialized(ref ctx) = game.engine.graphics_context
{
let size = size.to_logical(ctx.window.scale_factor());
game.engine
.user_interface
.send_message(WidgetMessage::width(
interface.root,
MessageDirection::ToWidget,
size.width,
));
game.engine
.user_interface
.send_message(WidgetMessage::height(
interface.root,
MessageDirection::ToWidget,
size.height,
));
}
}
WindowEvent::KeyboardInput { event: input,.. } => {
// Handle key input events via `WindowEvent`, not via `DeviceEvent` (#32)
if let Some(game_scene) = game.game_scene.as_mut() |
let settings = match input.physical_key {
KeyCode::Digit1 => Some(QualitySettings::ultra()),
KeyCode::Digit2 => Some(QualitySettings::high()),
KeyCode::Digit3 => Some(QualitySettings::medium()),
KeyCode::Digit4 => Some(QualitySettings::low()),
_ => None,
};
if let Some(settings) = settings {
if let GraphicsContext::Initialized(ref mut ctx) =
game.engine.graphics_context
{
ctx.renderer
.set_quality_settings(&fix_shadows_distance(settings))
.unwrap();
}
}
}
_ => (),
}
// It is very important to "feed" user interface (UI) with events coming
// from main window, otherwise UI won't respond to mouse, keyboard, or any
// other event.
if let Some(os_event) = translate_event(&event) {
game.engine.user_interface.process_os_event(&os_event);
}
}
Event::DeviceEvent { event,.. } => {
if let Some(game_scene) = game.game_scene.as_mut() {
game_scene
.player
.handle_device_event(&event, fixed_timestep);
}
}
_ => *control_flow = ControlFlow::Poll,
}
});
}
| {
game_scene.player.handle_key_event(input, fixed_timestep);
} | conditional_block |
3rd_person.rs | //! Example 03. 3rd person walk simulator.
//!
//! Difficulty: Advanced.
//!
//! This example based on async example, because it requires to load decent amount of
//! resources which might be slow on some machines.
//!
//! In this example we'll create simple 3rd person game with character that can idle,
//! walk, or jump.
//!
//! Also this example demonstrates the power of animation blending machines. Animation
//! blending machines are used in all modern games to create complex animations from set
//! of simple ones.
//!
//! TODO: Improve explanations. Some places can be explained better.
//!
//! Known bugs: Sometimes character will jump, but jumping animations is not playing.
//!
//! Possible improvements:
//! - Smart camera - camera which will not penetrate walls.
//! - Separate animation machines for upper and lower body - upper machine might be
//! for combat, lower - for locomotion.
//! - Tons of them, this is simple example after all.
pub mod shared;
use crate::shared::{create_ui, fix_shadows_distance, Game, GameScene};
use std::time::Instant;
use winit::keyboard::KeyCode;
use fyrox::{
core::{
algebra::Vector2,
log::{Log, MessageKind},
},
engine::GraphicsContext,
event::{Event, WindowEvent},
event_loop::ControlFlow,
gui::{
message::MessageDirection, progress_bar::ProgressBarMessage, text::TextMessage,
widget::WidgetMessage,
},
renderer::QualitySettings,
utils::translate_event,
};
fn main() | // 60 fps.
let elapsed = previous.elapsed();
previous = Instant::now();
lag += elapsed.as_secs_f32();
while lag >= fixed_timestep {
// ************************
// Put your game logic here.
// ************************
// Check each frame if our scene is created - here we just trying to lock context
// without blocking, it is important for main thread to be functional while other
// thread still loading data.
if let Ok(mut load_context) = game.load_context.as_ref().unwrap().try_lock() {
if let Some(load_result) = load_context.scene_data.take() {
// Add scene to engine - engine will take ownership over scene and will return
// you a handle to scene which can be used later on to borrow it and do some
// actions you need.
game.game_scene = Some(GameScene {
scene: game.engine.scenes.add(load_result.scene),
player: load_result.player,
});
// Once scene is loaded, we should hide progress bar and text.
game.engine
.user_interface
.send_message(WidgetMessage::visibility(
interface.progress_bar,
MessageDirection::ToWidget,
false,
));
game.engine
.user_interface
.send_message(WidgetMessage::visibility(
interface.progress_text,
MessageDirection::ToWidget,
false,
));
}
// Report progress in UI.
game.engine
.user_interface
.send_message(ProgressBarMessage::progress(
interface.progress_bar,
MessageDirection::ToWidget,
load_context.progress,
));
game.engine.user_interface.send_message(TextMessage::text(
interface.progress_text,
MessageDirection::ToWidget,
format!(
"Loading scene: {}%\n{}",
load_context.progress * 100.0,
load_context.message
),
));
}
// Update scene only if it is loaded.
if let Some(game_scene) = game.game_scene.as_mut() {
// Use stored scene handle to borrow a mutable reference of scene in
// engine.
let scene = &mut game.engine.scenes[game_scene.scene];
game_scene.player.update(scene, fixed_timestep);
}
if let GraphicsContext::Initialized(ref ctx) = game.engine.graphics_context {
let debug_text = format!(
"Example 03 - 3rd Person\n\
[W][S][A][D] - walk, [SPACE] - jump.\n\
Use [1][2][3][4] to select graphics quality.\n\
{}",
ctx.renderer.get_statistics()
);
game.engine.user_interface.send_message(TextMessage::text(
interface.debug_text,
MessageDirection::ToWidget,
debug_text,
));
}
// It is very important to "pump" messages from UI. Even if don't need to
// respond to such message, you should call this method, otherwise UI
// might behave very weird.
while let Some(_ui_event) = game.engine.user_interface.poll_message() {
// ************************
// Put your data model synchronization code here. It should
// take message and update data in your game according to
// changes in UI.
// ************************
}
game.engine
.update(fixed_timestep, control_flow, &mut lag, Default::default());
lag -= fixed_timestep;
}
// Rendering must be explicitly requested and handled after RedrawRequested event is received.
if let GraphicsContext::Initialized(ref ctx) = game.engine.graphics_context {
ctx.window.request_redraw();
}
}
Event::Resumed => {
game.engine
.initialize_graphics_context(window_target)
.unwrap();
}
Event::Suspended => {
game.engine.destroy_graphics_context().unwrap();
}
Event::RedrawRequested(_) => {
// Run renderer at max speed - it is not tied to game code.
game.engine.render().unwrap();
}
Event::LoopDestroyed => {
println!("{:?}", fyrox::core::profiler::print());
}
Event::WindowEvent { event,.. } => {
match &event {
WindowEvent::CloseRequested => *control_flow = ControlFlow::Exit,
WindowEvent::Resized(size) => {
// It is very important to handle Resized event from window, because
// renderer knows nothing about window size - it must be notified
// directly when window size has changed.
if let Err(e) = game.engine.set_frame_size((*size).into()) {
Log::writeln(
MessageKind::Error,
format!("Unable to set frame size: {:?}", e),
);
}
// Root UI node should be resized too, otherwise progress bar will stay
// in wrong position after resize.
if let GraphicsContext::Initialized(ref ctx) = game.engine.graphics_context
{
let size = size.to_logical(ctx.window.scale_factor());
game.engine
.user_interface
.send_message(WidgetMessage::width(
interface.root,
MessageDirection::ToWidget,
size.width,
));
game.engine
.user_interface
.send_message(WidgetMessage::height(
interface.root,
MessageDirection::ToWidget,
size.height,
));
}
}
WindowEvent::KeyboardInput { event: input,.. } => {
// Handle key input events via `WindowEvent`, not via `DeviceEvent` (#32)
if let Some(game_scene) = game.game_scene.as_mut() {
game_scene.player.handle_key_event(input, fixed_timestep);
}
let settings = match input.physical_key {
KeyCode::Digit1 => Some(QualitySettings::ultra()),
KeyCode::Digit2 => Some(QualitySettings::high()),
KeyCode::Digit3 => Some(QualitySettings::medium()),
KeyCode::Digit4 => Some(QualitySettings::low()),
_ => None,
};
if let Some(settings) = settings {
if let GraphicsContext::Initialized(ref mut ctx) =
game.engine.graphics_context
{
ctx.renderer
.set_quality_settings(&fix_shadows_distance(settings))
.unwrap();
}
}
}
_ => (),
}
// It is very important to "feed" user interface (UI) with events coming
// from main window, otherwise UI won't respond to mouse, keyboard, or any
// other event.
if let Some(os_event) = translate_event(&event) {
game.engine.user_interface.process_os_event(&os_event);
}
}
Event::DeviceEvent { event,.. } => {
if let Some(game_scene) = game.game_scene.as_mut() {
game_scene
.player
.handle_device_event(&event, fixed_timestep);
}
}
_ => *control_flow = ControlFlow::Poll,
}
});
}
| {
let (mut game, event_loop) = Game::new("Example 03 - 3rd person");
// Create simple user interface that will show some useful info.
let interface = create_ui(
&mut game.engine.user_interface.build_ctx(),
Vector2::new(100.0, 100.0),
);
let mut previous = Instant::now();
let fixed_timestep = 1.0 / 60.0;
let mut lag = 0.0;
// Finally run our event loop which will respond to OS and window events and update
// engine state accordingly.
event_loop.run(move |event, window_target, control_flow| {
match event {
Event::MainEventsCleared => {
            // This is the main game loop - it has a fixed time step, which means that game
            // code will run at a fixed speed even if the renderer can't give you the desired | identifier_body
3rd_person.rs | //! Example 03. 3rd person walk simulator.
//!
//! Difficulty: Advanced.
//!
//! This example is based on the async example, because it requires loading a decent
//! amount of resources, which might be slow on some machines.
//!
//! In this example we'll create a simple 3rd person game with a character that can idle,
//! walk, or jump.
//! blending machines are used in all modern games to create complex animations from a set
//! of simple ones.
//!
//! TODO: Improve explanations. Some places can be explained better.
//!
//! Known bugs: Sometimes the character will jump, but the jumping animation is not playing.
//!
//! Possible improvements:
//! - Smart camera - camera which will not penetrate walls.
//! - Separate animation machines for upper and lower body - upper machine might be
//! for combat, lower - for locomotion.
//! - Tons of them, this is simple example after all.
pub mod shared;
use crate::shared::{create_ui, fix_shadows_distance, Game, GameScene};
use std::time::Instant;
use winit::keyboard::KeyCode;
use fyrox::{
core::{
algebra::Vector2,
log::{Log, MessageKind},
},
engine::GraphicsContext,
event::{Event, WindowEvent},
event_loop::ControlFlow,
gui::{
message::MessageDirection, progress_bar::ProgressBarMessage, text::TextMessage,
widget::WidgetMessage,
},
renderer::QualitySettings,
utils::translate_event,
};
fn main() {
let (mut game, event_loop) = Game::new("Example 03 - 3rd person");
// Create simple user interface that will show some useful info.
let interface = create_ui(
&mut game.engine.user_interface.build_ctx(),
Vector2::new(100.0, 100.0),
);
let mut previous = Instant::now();
let fixed_timestep = 1.0 / 60.0;
let mut lag = 0.0;
// Finally run our event loop which will respond to OS and window events and update
// engine state accordingly.
event_loop.run(move |event, window_target, control_flow| {
match event {
Event::MainEventsCleared => {
                // This is the main game loop - it has a fixed time step, which means that game
                // code will run at a fixed speed even if the renderer can't give you the desired
// 60 fps.
let elapsed = previous.elapsed();
previous = Instant::now();
lag += elapsed.as_secs_f32();
while lag >= fixed_timestep {
// ************************
// Put your game logic here.
// ************************
                    // Check each frame whether our scene is created - here we just try to lock the
                    // context without blocking; it is important for the main thread to stay
                    // functional while the other thread is still loading data.
if let Ok(mut load_context) = game.load_context.as_ref().unwrap().try_lock() {
if let Some(load_result) = load_context.scene_data.take() {
                            // Add the scene to the engine - the engine will take ownership of the
                            // scene and return a handle which can be used later on to borrow the
                            // scene and perform any actions you need.
game.game_scene = Some(GameScene {
scene: game.engine.scenes.add(load_result.scene),
player: load_result.player,
});
// Once scene is loaded, we should hide progress bar and text.
game.engine
.user_interface
.send_message(WidgetMessage::visibility(
interface.progress_bar,
MessageDirection::ToWidget,
false,
));
game.engine
.user_interface
.send_message(WidgetMessage::visibility(
interface.progress_text,
MessageDirection::ToWidget,
false,
));
}
// Report progress in UI.
game.engine
.user_interface
.send_message(ProgressBarMessage::progress(
interface.progress_bar,
MessageDirection::ToWidget,
load_context.progress,
));
game.engine.user_interface.send_message(TextMessage::text(
interface.progress_text,
MessageDirection::ToWidget,
format!(
"Loading scene: {}%\n{}",
load_context.progress * 100.0,
load_context.message
),
));
}
// Update scene only if it is loaded.
if let Some(game_scene) = game.game_scene.as_mut() {
// Use stored scene handle to borrow a mutable reference of scene in
// engine.
let scene = &mut game.engine.scenes[game_scene.scene];
game_scene.player.update(scene, fixed_timestep);
}
if let GraphicsContext::Initialized(ref ctx) = game.engine.graphics_context {
let debug_text = format!(
"Example 03 - 3rd Person\n\
[W][S][A][D] - walk, [SPACE] - jump.\n\
Use [1][2][3][4] to select graphics quality.\n\
{}",
ctx.renderer.get_statistics()
);
game.engine.user_interface.send_message(TextMessage::text(
interface.debug_text,
MessageDirection::ToWidget,
debug_text,
));
}
                    // It is very important to "pump" messages from the UI. Even if you don't need
                    // to respond to such a message, you should call this method; otherwise the UI
                    // might behave very weirdly.
while let Some(_ui_event) = game.engine.user_interface.poll_message() {
// ************************
// Put your data model synchronization code here. It should
// take message and update data in your game according to
// changes in UI.
// ************************
}
game.engine
.update(fixed_timestep, control_flow, &mut lag, Default::default());
lag -= fixed_timestep;
}
// Rendering must be explicitly requested and handled after RedrawRequested event is received.
if let GraphicsContext::Initialized(ref ctx) = game.engine.graphics_context {
ctx.window.request_redraw();
}
}
Event::Resumed => {
game.engine
.initialize_graphics_context(window_target)
.unwrap();
}
Event::Suspended => {
game.engine.destroy_graphics_context().unwrap();
}
Event::RedrawRequested(_) => {
// Run renderer at max speed - it is not tied to game code.
game.engine.render().unwrap();
}
Event::LoopDestroyed => {
println!("{:?}", fyrox::core::profiler::print());
}
        Event::WindowEvent { event, .. } => {
match &event {
WindowEvent::CloseRequested => *control_flow = ControlFlow::Exit,
WindowEvent::Resized(size) => {
// It is very important to handle Resized event from window, because
// renderer knows nothing about window size - it must be notified
// directly when window size has changed.
if let Err(e) = game.engine.set_frame_size((*size).into()) {
Log::writeln(
MessageKind::Error,
format!("Unable to set frame size: {:?}", e),
);
}
                        // The root UI node should be resized too, otherwise the progress bar will
                        // stay in the wrong position after resize.
if let GraphicsContext::Initialized(ref ctx) = game.engine.graphics_context
{
let size = size.to_logical(ctx.window.scale_factor());
game.engine
.user_interface
.send_message(WidgetMessage::width(
interface.root,
MessageDirection::ToWidget,
size.width,
));
game.engine
.user_interface
.send_message(WidgetMessage::height(
interface.root,
MessageDirection::ToWidget,
size.height,
));
}
}
                WindowEvent::KeyboardInput { event: input, .. } => {
// Handle key input events via `WindowEvent`, not via `DeviceEvent` (#32)
if let Some(game_scene) = game.game_scene.as_mut() {
game_scene.player.handle_key_event(input, fixed_timestep);
}
let settings = match input.physical_key {
KeyCode::Digit1 => Some(QualitySettings::ultra()),
KeyCode::Digit2 => Some(QualitySettings::high()),
KeyCode::Digit3 => Some(QualitySettings::medium()),
KeyCode::Digit4 => Some(QualitySettings::low()),
_ => None,
};
if let Some(settings) = settings {
if let GraphicsContext::Initialized(ref mut ctx) =
game.engine.graphics_context
{
ctx.renderer
.set_quality_settings(&fix_shadows_distance(settings))
.unwrap();
}
}
}
_ => (),
}
            // It is very important to "feed" the user interface (UI) with events coming
            // from the main window, otherwise the UI won't respond to mouse, keyboard, or
            // any other event.
if let Some(os_event) = translate_event(&event) {
game.engine.user_interface.process_os_event(&os_event);
}
}
        Event::DeviceEvent { event, .. } => {
if let Some(game_scene) = game.game_scene.as_mut() {
game_scene
.player
.handle_device_event(&event, fixed_timestep);
}
}
_ => *control_flow = ControlFlow::Poll,
}
});
} | //! Also this example demonstrates the power of animation blending machines. Animation | random_line_split |
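The row above centers on a fixed-timestep game loop (accumulate elapsed time in `lag`, then run logic updates at a fixed rate). A minimal, dependency-free sketch of that accumulator pattern — plain Rust with illustrative values, no fyrox types — might look like:

```rust
// Fixed-timestep accumulator: run as many fixed-rate logic updates as the
// accumulated frame time allows, mirroring the `while lag >= fixed_timestep`
// loop in the example above.
fn run_fixed_timestep(frame_times: &[f32], fixed_timestep: f32) -> u32 {
    let mut lag = 0.0_f32; // accumulated, not-yet-simulated time
    let mut ticks = 0_u32; // number of fixed-rate logic updates performed
    for &dt in frame_times {
        lag += dt;
        while lag >= fixed_timestep {
            ticks += 1; // one game-logic tick
            lag -= fixed_timestep;
        }
    }
    ticks
}

fn main() {
    // Values chosen to be exactly representable in binary floating point.
    let ticks = run_fixed_timestep(&[0.625, 0.375, 0.5], 0.25);
    println!("{}", ticks); // 1.5s of frame time at a 4 Hz step => 6 updates
}
```

This decouples simulation rate from render rate: rendering may run as fast as it likes while game logic stays deterministic at the fixed step.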
cargo-deploy.rs | //! # `cargo deploy`
//! Run a binary on a constellation cluster
//!
//! ## Usage
//! ```text
//! cargo deploy [options] <host> [--] [<args>]...
//! ```
//!
//! ## Options
//! ```text
//! -h --help Show this screen.
//! -V --version Show version.
//!   --format=<fmt>                     Output format [possible values: human, json] [default: human]
//! ```
//!
//! Note: --format can also be given as an env var, such as `CONSTELLATION_FORMAT=json`
#![warn(
missing_copy_implementations,
missing_debug_implementations,
missing_docs,
trivial_numeric_casts,
unused_extern_crates,
unused_import_braces,
unused_qualifications,
unused_results,
clippy::pedantic
)] // from https://github.com/rust-unofficial/patterns/blob/master/anti_patterns/deny-warnings.md
use clap::{crate_version, App, AppSettings, Arg, ArgMatches, SubCommand};
use std::{
convert::TryInto, env, ffi::{OsStr, OsString}, iter, net::SocketAddr, process
};
use constellation_internal::Format;
fn main() {
let args = cli().get_matches();
let args = args.subcommand_matches("deploy").unwrap();
let host: SocketAddr = args.value_of("host").unwrap().parse().unwrap();
let forward_args: Vec<&OsStr> = args.values_of_os("args").unwrap_or_default().collect();
let output = cargo(args)
.stderr(process::Stdio::inherit())
.output()
.expect("Failed to invoke cargo");
	if !output.status.success() {
process::exit(output.status.code().unwrap_or(101));
}
let mut bin = Vec::new();
for message in serde_json::Deserializer::from_slice(&output.stdout).into_iter() {
if let cargo_metadata::Message::CompilerArtifact(artifact) =
message.unwrap_or_else(|_| panic!("Failed to parse output of cargo"))
{
if artifact.target.kind == vec![String::from("bin")]
|| artifact.target.kind == vec![String::from("example")]
{
bin.push((
artifact.target.name,
artifact.filenames.into_iter().next().unwrap(),
));
			// We're assuming the first filename is the binary – .dSYM etc seem to always be second?
}
}
}
if bin.len() > 1 {
let names = bin
.into_iter()
.map(|(target_name, _)| target_name)
.collect::<Vec<_>>();
println!(
"`cargo deploy` could not determine which binary to run. \
Use the `--bin` option to specify a binary.\n\
available binaries: {}",
names.join(", ")
); //, or the `default-run` manifest key // TODO: work out best way to get this / behave same as cargo run
process::exit(1);
} else if bin.is_empty() {
println!("a bin target must be available for `cargo deploy`");
process::exit(1);
}
let path = bin.into_iter().next().unwrap().1;
let args: Vec<OsString> = iter::once(OsString::from(&path))
.chain(forward_args.into_iter().map(ToOwned::to_owned))
.collect();
let vars: Vec<(OsString, OsString)> = env::vars_os().collect();
let format = Format::Human;
constellation::deploy(host, &path, format, args, vars);
}
fn cli<'a, 'b>() -> App<'a, 'b> {
// https://github.com/rust-lang/cargo/blob/7059559d71de3fffe8c8cb81e32f323454aa96c5/src/bin/cargo/cli.rs#L205-L277
// https://github.com/rust-lang/cargo/blob/982622252a64d7c526c04a244f1a81523dc9ae54/src/bin/cargo/commands/run.rs
App::new("cargo")
.bin_name("cargo")
.settings(&[
AppSettings::UnifiedHelpMessage,
AppSettings::DeriveDisplayOrder,
AppSettings::SubcommandRequired,
])
.arg(
Arg::opt(
"verbose",
"Use verbose output (-vv very verbose/build.rs output)",
)
.short("v")
.multiple(true)
.global(true),
)
.arg(
Arg::opt("color", "Coloring: auto, always, never")
.value_name("WHEN")
.global(true),
)
.arg(Arg::opt("frozen", "Require Cargo.lock and cache are up to date").global(true))
.arg(Arg::opt("locked", "Require Cargo.lock is up to date").global(true))
.arg(Arg::opt("offline", "Run without accessing the network").global(true))
.arg(
Arg::multi_opt("config", "KEY=VALUE", "Override a configuration value")
.global(true)
.hidden(true),
)
.arg(
Arg::with_name("unstable-features")
.help("Unstable (nightly-only) flags to Cargo, see 'cargo -Z help' for details")
.short("Z")
.value_name("FLAG")
.multiple(true)
.number_of_values(1)
.global(true),
)
.subcommand(
SubCommand::with_name("deploy")
.settings(&[
AppSettings::UnifiedHelpMessage,
AppSettings::DeriveDisplayOrder,
AppSettings::DontCollapseArgsInUsage,
AppSettings::TrailingVarArg,
])
.version(crate_version!())
.about("Run a binary or example of the local package on a constellation cluster")
//.arg(Arg::opt("quiet", "No output printed to stdout").short("q"))
.arg(
Arg::with_name("host")
.help("Constellation cluster node to connect to (e.g. 10.0.0.1:8888)")
.required(true)
.validator(|host| {
host.parse::<SocketAddr>()
.map(drop)
.map_err(|err| err.to_string())
}),
)
.arg(Arg::with_name("args").multiple(true))
.args(&Arg::targets_bin_example(
"Name of the bin target to run",
"Name of the example target to run",
))
.arg(Arg::package("Package with the target to run"))
.arg(Arg::jobs())
.arg(Arg::release(
"Build artifacts in release mode, with optimizations",
))
.arg(Arg::profile("Build artifacts with the specified profile"))
.args(&Arg::features())
.arg(Arg::target_triple("Build for the target triple"))
.arg(Arg::target_dir())
.arg(Arg::manifest_path())
//.arg(Arg::message_format())
.after_help(
"\
If neither `--bin` nor `--example` are given, then if the package only has one
bin target it will be run. Otherwise `--bin` specifies the bin target to run,
and `--example` specifies the example target to run. At most one of `--bin` or
`--example` can be provided.
All the arguments following the two dashes (`--`) are passed to the binary to
run. If you're passing arguments to both Cargo and the binary, the ones after
`--` go to the binary, the ones before go to Cargo.
",
),
)
}
fn cargo(args: &ArgMatches) -> process::Command {
let verbose: u64 = args.occurrences_of("verbose");
let color: Option<&str> = args.value_of("color");
let frozen: bool = args.is_present("frozen");
let locked: bool = args.is_present("locked");
let offline: bool = args.is_present("offline");
let config: Vec<&str> = args.values_of("config").unwrap_or_default().collect();
let unstable_features: Vec<&OsStr> = args
.values_of_os("unstable-features")
.unwrap_or_default()
.collect();
let bin: Vec<&str> = args.values_of("bin").unwrap_or_default().collect();
let example: Vec<&str> = args.values_of("example").unwrap_or_default().collect();
let package: Vec<&str> = args.values_of("package").unwrap_or_default().collect();
let jobs: Option<&str> = args.value_of("jobs");
let release: bool = args.is_present("release");
let profile: Option<&str> = args.value_of("profile");
let features: Vec<&str> = args.values_of("features").unwrap_or_default().collect();
let all_features: bool = args.is_present("all-features");
let no_default_features: bool = args.is_present("no-default-features");
let target: Option<&str> = args.value_of("target");
let target_dir: Option<&str> = args.value_of("target-dir");
let manifest_path: Option<&str> = args.value_of("manifest-path");
// let mut args: Vec<String> = Vec::new();
let mut cargo = process::Command::new("cargo");
let _ = cargo.arg("build");
let _ = cargo.arg("--message-format=json");
if verbose > 0 {
let _ = cargo.arg(format!("-{}", "v".repeat(verbose.try_into().unwrap())));
}
if let Some(color) = color {
let _ = cargo.arg(format!("--color={}", color));
}
if frozen {
let _ = cargo.arg("--frozen");
}
if locked {
let _ = cargo.arg("--locked");
}
if offline {
let _ = cargo.arg("--offline");
}
for config in config {
let _ = cargo.arg(format!("--config={}", config));
}
for unstable_features in unstable_features {
let mut arg = OsString::from("-Z");
arg.push(unstable_features);
let _ = cargo.arg(arg);
}
for bin in bin {
let _ = cargo.arg(format!("--bin={}", bin));
}
for example in example {
let _ = cargo.arg(format!("--example={}", example));
}
for package in package {
let _ = cargo.arg(format!("--package={}", package));
}
if let Some(jobs) = jobs {
let _ = cargo.arg(format!("--jobs={}", jobs));
}
if release {
| f let Some(profile) = profile {
let _ = cargo.arg(format!("--profile={}", profile));
}
for features in features {
let _ = cargo.arg(format!("--features={}", features));
}
if all_features {
let _ = cargo.arg("--all-features");
}
if no_default_features {
let _ = cargo.arg("--no-default-features");
}
if let Some(target) = target {
let _ = cargo.arg(format!("--target={}", target));
}
if let Some(target_dir) = target_dir {
let _ = cargo.arg(format!("--target-dir={}", target_dir));
}
if let Some(manifest_path) = manifest_path {
let _ = cargo.arg(format!("--manifest-path={}", manifest_path));
}
cargo
}
// https://github.com/rust-lang/cargo/blob/7059559d71de3fffe8c8cb81e32f323454aa96c5/src/cargo/util/command_prelude.rs
trait ArgExt: Sized {
fn opt(name: &'static str, help: &'static str) -> Self;
fn optional_multi_opt(name: &'static str, value_name: &'static str, help: &'static str)
-> Self;
fn multi_opt(name: &'static str, value_name: &'static str, help: &'static str) -> Self;
fn targets_bin_example(bin: &'static str, example: &'static str) -> [Self; 2];
fn package(package: &'static str) -> Self;
fn jobs() -> Self;
fn release(release: &'static str) -> Self;
fn profile(profile: &'static str) -> Self;
fn features() -> [Self; 3];
fn target_triple(target: &'static str) -> Self;
fn target_dir() -> Self;
fn manifest_path() -> Self;
}
impl<'a, 'b> ArgExt for Arg<'a, 'b> {
fn opt(name: &'static str, help: &'static str) -> Self {
Arg::with_name(name).long(name).help(help)
}
fn optional_multi_opt(
name: &'static str, value_name: &'static str, help: &'static str,
) -> Self {
Self::opt(name, help)
.value_name(value_name)
.multiple(true)
.min_values(0)
.number_of_values(1)
}
fn multi_opt(name: &'static str, value_name: &'static str, help: &'static str) -> Self {
// Note that all `.multiple(true)` arguments in Cargo should specify
// `.number_of_values(1)` as well, so that `--foo val1 val2` is
// *not* parsed as `foo` with values ["val1", "val2"].
// `number_of_values` should become the default in clap 3.
Self::opt(name, help)
.value_name(value_name)
.multiple(true)
.number_of_values(1)
}
fn targets_bin_example(bin: &'static str, example: &'static str) -> [Self; 2] {
[
Self::optional_multi_opt("bin", "NAME", bin),
Self::optional_multi_opt("example", "NAME", example),
]
}
fn package(package: &'static str) -> Self {
Self::opt("package", package).short("p").value_name("SPEC")
}
fn jobs() -> Self {
Self::opt("jobs", "Number of parallel jobs, defaults to # of CPUs")
.short("j")
.value_name("N")
}
fn release(release: &'static str) -> Self {
Self::opt("release", release)
}
fn profile(profile: &'static str) -> Self {
Self::opt("profile", profile).value_name("PROFILE-NAME")
}
fn features() -> [Self; 3] {
[
Self::multi_opt(
"features",
"FEATURES",
"Space-separated list of features to activate",
),
Self::opt("all-features", "Activate all available features"),
Self::opt(
"no-default-features",
"Do not activate the `default` feature",
),
]
}
fn target_triple(target: &'static str) -> Self {
Self::opt("target", target).value_name("TRIPLE")
}
fn target_dir() -> Self {
Self::opt("target-dir", "Directory for all generated artifacts").value_name("DIRECTORY")
}
fn manifest_path() -> Self {
Self::opt("manifest-path", "Path to Cargo.toml").value_name("PATH")
}
}
| let _ = cargo.arg("--release");
}
i | conditional_block |
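The `cargo()` helper in the row above translates parsed CLI flags into a `cargo build` invocation by conditionally pushing arguments. A self-contained sketch of that flag-forwarding pattern, covering only an illustrative subset of the flags (not the full helper):

```rust
// Assemble a `cargo build` argument list from a subset of flags, mirroring
// the conditional `cargo.arg(...)` pushes in the `cargo()` helper above.
fn build_args(release: bool, jobs: Option<&str>, features: &[&str]) -> Vec<String> {
    let mut args = vec!["build".to_owned(), "--message-format=json".to_owned()];
    if release {
        args.push("--release".to_owned());
    }
    if let Some(jobs) = jobs {
        args.push(format!("--jobs={}", jobs));
    }
    for feature in features {
        args.push(format!("--features={}", feature));
    }
    args
}

fn main() {
    // The resulting list could be fed to
    // `std::process::Command::new("cargo").args(&args)`.
    let args = build_args(true, Some("4"), &["tls"]);
    println!("{}", args.join(" "));
}
```

Building the argument vector separately from spawning the process keeps the forwarding logic easy to unit-test without actually invoking cargo.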
cargo-deploy.rs | //! # `cargo deploy`
//! Run a binary on a constellation cluster
//!
//! ## Usage
//! ```text
//! cargo deploy [options] <host> [--] [<args>]...
//! ```
//!
//! ## Options
//! ```text
//! -h --help Show this screen.
//! -V --version Show version.
//!   --format=<fmt>                     Output format [possible values: human, json] [default: human]
//! ```
//!
//! Note: --format can also be given as an env var, such as `CONSTELLATION_FORMAT=json`
#![warn(
missing_copy_implementations,
missing_debug_implementations,
missing_docs,
trivial_numeric_casts,
unused_extern_crates,
unused_import_braces,
unused_qualifications,
unused_results,
clippy::pedantic
)] // from https://github.com/rust-unofficial/patterns/blob/master/anti_patterns/deny-warnings.md
use clap::{crate_version, App, AppSettings, Arg, ArgMatches, SubCommand};
use std::{
convert::TryInto, env, ffi::{OsStr, OsString}, iter, net::SocketAddr, process
};
use constellation_internal::Format;
fn main() {
let args = cli().get_matches();
let args = args.subcommand_matches("deploy").unwrap();
let host: SocketAddr = args.value_of("host").unwrap().parse().unwrap();
let forward_args: Vec<&OsStr> = args.values_of_os("args").unwrap_or_default().collect();
let output = cargo(args)
.stderr(process::Stdio::inherit())
.output()
.expect("Failed to invoke cargo");
	if !output.status.success() {
process::exit(output.status.code().unwrap_or(101));
}
let mut bin = Vec::new();
for message in serde_json::Deserializer::from_slice(&output.stdout).into_iter() {
if let cargo_metadata::Message::CompilerArtifact(artifact) =
message.unwrap_or_else(|_| panic!("Failed to parse output of cargo"))
{
if artifact.target.kind == vec![String::from("bin")]
|| artifact.target.kind == vec![String::from("example")]
{
bin.push((
artifact.target.name,
artifact.filenames.into_iter().next().unwrap(),
));
			// We're assuming the first filename is the binary – .dSYM etc seem to always be second?
}
}
}
if bin.len() > 1 {
let names = bin
.into_iter()
.map(|(target_name, _)| target_name)
.collect::<Vec<_>>();
println!(
"`cargo deploy` could not determine which binary to run. \
Use the `--bin` option to specify a binary.\n\
available binaries: {}",
names.join(", ")
); //, or the `default-run` manifest key // TODO: work out best way to get this / behave same as cargo run
process::exit(1);
} else if bin.is_empty() {
println!("a bin target must be available for `cargo deploy`");
process::exit(1);
}
let path = bin.into_iter().next().unwrap().1;
let args: Vec<OsString> = iter::once(OsString::from(&path))
.chain(forward_args.into_iter().map(ToOwned::to_owned))
.collect();
let vars: Vec<(OsString, OsString)> = env::vars_os().collect();
let format = Format::Human;
constellation::deploy(host, &path, format, args, vars);
}
fn cli<'a, 'b>() -> App<'a, 'b> {
// https://github.com/rust-lang/cargo/blob/7059559d71de3fffe8c8cb81e32f323454aa96c5/src/bin/cargo/cli.rs#L205-L277
// https://github.com/rust-lang/cargo/blob/982622252a64d7c526c04a244f1a81523dc9ae54/src/bin/cargo/commands/run.rs
App::new("cargo")
.bin_name("cargo")
.settings(&[
AppSettings::UnifiedHelpMessage,
AppSettings::DeriveDisplayOrder,
AppSettings::SubcommandRequired,
])
.arg(
Arg::opt(
"verbose",
"Use verbose output (-vv very verbose/build.rs output)",
)
.short("v")
.multiple(true)
.global(true),
)
.arg(
Arg::opt("color", "Coloring: auto, always, never")
.value_name("WHEN")
.global(true),
)
.arg(Arg::opt("frozen", "Require Cargo.lock and cache are up to date").global(true))
.arg(Arg::opt("locked", "Require Cargo.lock is up to date").global(true))
.arg(Arg::opt("offline", "Run without accessing the network").global(true))
.arg(
Arg::multi_opt("config", "KEY=VALUE", "Override a configuration value")
.global(true)
.hidden(true),
)
.arg(
Arg::with_name("unstable-features")
.help("Unstable (nightly-only) flags to Cargo, see 'cargo -Z help' for details")
.short("Z")
.value_name("FLAG")
.multiple(true)
.number_of_values(1)
.global(true),
)
.subcommand(
SubCommand::with_name("deploy")
.settings(&[
AppSettings::UnifiedHelpMessage,
AppSettings::DeriveDisplayOrder,
AppSettings::DontCollapseArgsInUsage,
AppSettings::TrailingVarArg,
])
.version(crate_version!())
.about("Run a binary or example of the local package on a constellation cluster")
//.arg(Arg::opt("quiet", "No output printed to stdout").short("q"))
.arg(
Arg::with_name("host")
.help("Constellation cluster node to connect to (e.g. 10.0.0.1:8888)")
.required(true)
.validator(|host| {
host.parse::<SocketAddr>()
.map(drop)
.map_err(|err| err.to_string())
}),
)
.arg(Arg::with_name("args").multiple(true))
.args(&Arg::targets_bin_example(
"Name of the bin target to run",
"Name of the example target to run",
))
.arg(Arg::package("Package with the target to run"))
.arg(Arg::jobs())
.arg(Arg::release(
"Build artifacts in release mode, with optimizations",
))
.arg(Arg::profile("Build artifacts with the specified profile"))
.args(&Arg::features())
.arg(Arg::target_triple("Build for the target triple"))
.arg(Arg::target_dir())
.arg(Arg::manifest_path())
//.arg(Arg::message_format())
.after_help(
"\
If neither `--bin` nor `--example` are given, then if the package only has one
bin target it will be run. Otherwise `--bin` specifies the bin target to run,
and `--example` specifies the example target to run. At most one of `--bin` or
`--example` can be provided.
All the arguments following the two dashes (`--`) are passed to the binary to
run. If you're passing arguments to both Cargo and the binary, the ones after
`--` go to the binary, the ones before go to Cargo.
",
),
)
}
fn cargo(args: &ArgMatches) -> process::Command {
let verbose: u64 = args.occurrences_of("verbose");
let color: Option<&str> = args.value_of("color");
let frozen: bool = args.is_present("frozen");
let locked: bool = args.is_present("locked");
let offline: bool = args.is_present("offline");
let config: Vec<&str> = args.values_of("config").unwrap_or_default().collect();
let unstable_features: Vec<&OsStr> = args
.values_of_os("unstable-features")
.unwrap_or_default()
.collect();
let bin: Vec<&str> = args.values_of("bin").unwrap_or_default().collect();
let example: Vec<&str> = args.values_of("example").unwrap_or_default().collect();
let package: Vec<&str> = args.values_of("package").unwrap_or_default().collect();
let jobs: Option<&str> = args.value_of("jobs");
let release: bool = args.is_present("release");
let profile: Option<&str> = args.value_of("profile");
let features: Vec<&str> = args.values_of("features").unwrap_or_default().collect();
let all_features: bool = args.is_present("all-features");
let no_default_features: bool = args.is_present("no-default-features");
let target: Option<&str> = args.value_of("target");
let target_dir: Option<&str> = args.value_of("target-dir");
let manifest_path: Option<&str> = args.value_of("manifest-path");
// let mut args: Vec<String> = Vec::new();
let mut cargo = process::Command::new("cargo");
let _ = cargo.arg("build");
let _ = cargo.arg("--message-format=json");
if verbose > 0 {
let _ = cargo.arg(format!("-{}", "v".repeat(verbose.try_into().unwrap())));
}
if let Some(color) = color {
let _ = cargo.arg(format!("--color={}", color));
}
if frozen {
let _ = cargo.arg("--frozen");
}
if locked {
let _ = cargo.arg("--locked");
}
if offline {
let _ = cargo.arg("--offline");
}
for config in config {
let _ = cargo.arg(format!("--config={}", config));
}
for unstable_features in unstable_features {
let mut arg = OsString::from("-Z");
arg.push(unstable_features);
let _ = cargo.arg(arg);
}
for bin in bin {
let _ = cargo.arg(format!("--bin={}", bin));
}
for example in example {
let _ = cargo.arg(format!("--example={}", example));
}
for package in package {
let _ = cargo.arg(format!("--package={}", package));
}
if let Some(jobs) = jobs {
let _ = cargo.arg(format!("--jobs={}", jobs));
}
if release {
let _ = cargo.arg("--release");
}
if let Some(profile) = profile {
let _ = cargo.arg(format!("--profile={}", profile));
}
for features in features {
let _ = cargo.arg(format!("--features={}", features));
}
if all_features {
let _ = cargo.arg("--all-features");
}
if no_default_features {
let _ = cargo.arg("--no-default-features");
}
if let Some(target) = target {
let _ = cargo.arg(format!("--target={}", target));
}
if let Some(target_dir) = target_dir {
let _ = cargo.arg(format!("--target-dir={}", target_dir));
}
if let Some(manifest_path) = manifest_path {
let _ = cargo.arg(format!("--manifest-path={}", manifest_path));
}
cargo
}
// https://github.com/rust-lang/cargo/blob/7059559d71de3fffe8c8cb81e32f323454aa96c5/src/cargo/util/command_prelude.rs
trait ArgExt: Sized {
fn opt(name: &'static str, help: &'static str) -> Self;
fn optional_multi_opt(name: &'static str, value_name: &'static str, help: &'static str)
-> Self;
fn multi_opt(name: &'static str, value_name: &'static str, help: &'static str) -> Self;
fn targets_bin_example(bin: &'static str, example: &'static str) -> [Self; 2];
fn package(package: &'static str) -> Self;
fn jobs() -> Self;
fn release(release: &'static str) -> Self;
fn profile(profile: &'static str) -> Self;
fn features() -> [Self; 3];
fn target_triple(target: &'static str) -> Self;
fn target_dir() -> Self;
fn manifest_path() -> Self;
}
impl<'a, 'b> ArgExt for Arg<'a, 'b> {
fn opt(name: &'static str, help: &'static str) -> Self {
Arg::with_name(name).long(name).help(help)
}
fn optional_multi_opt(
name: &'static str, value_name: &'static str, help: &'static str,
) -> Self {
| n multi_opt(name: &'static str, value_name: &'static str, help: &'static str) -> Self {
// Note that all `.multiple(true)` arguments in Cargo should specify
// `.number_of_values(1)` as well, so that `--foo val1 val2` is
// *not* parsed as `foo` with values ["val1", "val2"].
// `number_of_values` should become the default in clap 3.
Self::opt(name, help)
.value_name(value_name)
.multiple(true)
.number_of_values(1)
}
fn targets_bin_example(bin: &'static str, example: &'static str) -> [Self; 2] {
[
Self::optional_multi_opt("bin", "NAME", bin),
Self::optional_multi_opt("example", "NAME", example),
]
}
fn package(package: &'static str) -> Self {
Self::opt("package", package).short("p").value_name("SPEC")
}
fn jobs() -> Self {
Self::opt("jobs", "Number of parallel jobs, defaults to # of CPUs")
.short("j")
.value_name("N")
}
fn release(release: &'static str) -> Self {
Self::opt("release", release)
}
fn profile(profile: &'static str) -> Self {
Self::opt("profile", profile).value_name("PROFILE-NAME")
}
fn features() -> [Self; 3] {
[
Self::multi_opt(
"features",
"FEATURES",
"Space-separated list of features to activate",
),
Self::opt("all-features", "Activate all available features"),
Self::opt(
"no-default-features",
"Do not activate the `default` feature",
),
]
}
fn target_triple(target: &'static str) -> Self {
Self::opt("target", target).value_name("TRIPLE")
}
fn target_dir() -> Self {
Self::opt("target-dir", "Directory for all generated artifacts").value_name("DIRECTORY")
}
fn manifest_path() -> Self {
Self::opt("manifest-path", "Path to Cargo.toml").value_name("PATH")
}
}
| Self::opt(name, help)
.value_name(value_name)
.multiple(true)
.min_values(0)
.number_of_values(1)
}
f | identifier_body |
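The `ArgExt` trait in the row above uses Rust's extension-trait pattern: constructor shorthands are added to a foreign type (clap's `Arg`) via a locally defined trait. A minimal sketch of that pattern on a stand-in `Opt` type (all names here are illustrative, not clap's API):

```rust
// Stand-in for a foreign builder type such as clap's `Arg`.
#[derive(Debug, PartialEq)]
struct Opt {
    name: &'static str,
    help: &'static str,
    multiple: bool,
}

// Extension trait adding constructor shorthands; `Sized` is required
// because the methods return `Self` by value, as in `ArgExt` above.
trait OptExt: Sized {
    fn opt(name: &'static str, help: &'static str) -> Self;
    fn multi_opt(name: &'static str, help: &'static str) -> Self;
}

impl OptExt for Opt {
    fn opt(name: &'static str, help: &'static str) -> Self {
        Opt { name, help, multiple: false }
    }
    fn multi_opt(name: &'static str, help: &'static str) -> Self {
        Opt { name, help, multiple: true }
    }
}

fn main() {
    let o = Opt::multi_opt("features", "Space-separated list of features");
    println!("{} {}", o.name, o.multiple);
}
```

The pattern keeps the foreign type untouched while giving call sites short, domain-specific constructors.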
cargo-deploy.rs | //! # `cargo deploy`
//! Run a binary on a constellation cluster
//!
//! ## Usage
//! ```text
//! cargo deploy [options] <host> [--] [<args>]...
//! ```
//!
//! ## Options
//! ```text
//! -h --help Show this screen.
//! -V --version Show version.
//!   --format=<fmt>                     Output format [possible values: human, json] [default: human]
//! ```
//!
//! Note: --format can also be given as an env var, such as `CONSTELLATION_FORMAT=json`
#![warn(
missing_copy_implementations,
missing_debug_implementations,
missing_docs,
trivial_numeric_casts,
unused_extern_crates,
unused_import_braces,
unused_qualifications,
unused_results,
clippy::pedantic
)] // from https://github.com/rust-unofficial/patterns/blob/master/anti_patterns/deny-warnings.md
use clap::{crate_version, App, AppSettings, Arg, ArgMatches, SubCommand};
use std::{
convert::TryInto, env, ffi::{OsStr, OsString}, iter, net::SocketAddr, process
};
use constellation_internal::Format;
fn main() {
let args = cli().get_matches();
let args = args.subcommand_matches("deploy").unwrap();
let host: SocketAddr = args.value_of("host").unwrap().parse().unwrap();
let forward_args: Vec<&OsStr> = args.values_of_os("args").unwrap_or_default().collect();
let output = cargo(args)
.stderr(process::Stdio::inherit())
.output()
.expect("Failed to invoke cargo");
	if !output.status.success() {
process::exit(output.status.code().unwrap_or(101));
}
let mut bin = Vec::new();
for message in serde_json::Deserializer::from_slice(&output.stdout).into_iter() {
if let cargo_metadata::Message::CompilerArtifact(artifact) =
message.unwrap_or_else(|_| panic!("Failed to parse output of cargo"))
{
if artifact.target.kind == vec![String::from("bin")]
|| artifact.target.kind == vec![String::from("example")]
{
bin.push((
artifact.target.name,
artifact.filenames.into_iter().next().unwrap(),
));
// We're assuming the first filename is the binary – .dSYM etc. seem to always be second?
}
}
}
if bin.len() > 1 {
let names = bin
.into_iter()
.map(|(target_name, _)| target_name)
.collect::<Vec<_>>();
println!(
"`cargo deploy` could not determine which binary to run. \
Use the `--bin` option to specify a binary.\n\
available binaries: {}",
names.join(", ")
); //, or the `default-run` manifest key // TODO: work out best way to get this / behave same as cargo run
process::exit(1);
} else if bin.is_empty() {
println!("a bin target must be available for `cargo deploy`");
process::exit(1);
}
let path = bin.into_iter().next().unwrap().1;
let args: Vec<OsString> = iter::once(OsString::from(&path))
.chain(forward_args.into_iter().map(ToOwned::to_owned))
.collect();
let vars: Vec<(OsString, OsString)> = env::vars_os().collect();
let format = Format::Human;
constellation::deploy(host, &path, format, args, vars);
}
fn cli<'a, 'b>() -> App<'a, 'b> {
// https://github.com/rust-lang/cargo/blob/7059559d71de3fffe8c8cb81e32f323454aa96c5/src/bin/cargo/cli.rs#L205-L277
// https://github.com/rust-lang/cargo/blob/982622252a64d7c526c04a244f1a81523dc9ae54/src/bin/cargo/commands/run.rs
App::new("cargo")
.bin_name("cargo")
.settings(&[
AppSettings::UnifiedHelpMessage,
AppSettings::DeriveDisplayOrder,
AppSettings::SubcommandRequired,
])
.arg(
Arg::opt(
"verbose",
"Use verbose output (-vv very verbose/build.rs output)",
)
.short("v")
.multiple(true)
.global(true),
)
.arg(
Arg::opt("color", "Coloring: auto, always, never")
.value_name("WHEN")
.global(true),
)
.arg(Arg::opt("frozen", "Require Cargo.lock and cache are up to date").global(true))
.arg(Arg::opt("locked", "Require Cargo.lock is up to date").global(true))
.arg(Arg::opt("offline", "Run without accessing the network").global(true))
.arg(
Arg::multi_opt("config", "KEY=VALUE", "Override a configuration value")
.global(true)
.hidden(true),
)
.arg(
Arg::with_name("unstable-features")
.help("Unstable (nightly-only) flags to Cargo, see 'cargo -Z help' for details")
.short("Z")
.value_name("FLAG")
.multiple(true)
.number_of_values(1)
.global(true),
)
.subcommand(
SubCommand::with_name("deploy")
.settings(&[
AppSettings::UnifiedHelpMessage,
AppSettings::DeriveDisplayOrder,
AppSettings::DontCollapseArgsInUsage,
AppSettings::TrailingVarArg,
])
.version(crate_version!())
.about("Run a binary or example of the local package on a constellation cluster")
//.arg(Arg::opt("quiet", "No output printed to stdout").short("q"))
.arg(
Arg::with_name("host")
.help("Constellation cluster node to connect to (e.g. 10.0.0.1:8888)")
.required(true)
.validator(|host| {
host.parse::<SocketAddr>()
.map(drop)
.map_err(|err| err.to_string())
}),
)
.arg(Arg::with_name("args").multiple(true))
.args(&Arg::targets_bin_example(
"Name of the bin target to run",
"Name of the example target to run",
))
.arg(Arg::package("Package with the target to run"))
.arg(Arg::jobs())
.arg(Arg::release(
"Build artifacts in release mode, with optimizations",
))
.arg(Arg::profile("Build artifacts with the specified profile"))
.args(&Arg::features())
.arg(Arg::target_triple("Build for the target triple"))
.arg(Arg::target_dir())
.arg(Arg::manifest_path())
//.arg(Arg::message_format())
.after_help(
"\
If neither `--bin` nor `--example` are given, then if the package only has one
bin target it will be run. Otherwise `--bin` specifies the bin target to run, | `--` go to the binary, the ones before go to Cargo.
",
),
)
}
fn cargo(args: &ArgMatches) -> process::Command {
let verbose: u64 = args.occurrences_of("verbose");
let color: Option<&str> = args.value_of("color");
let frozen: bool = args.is_present("frozen");
let locked: bool = args.is_present("locked");
let offline: bool = args.is_present("offline");
let config: Vec<&str> = args.values_of("config").unwrap_or_default().collect();
let unstable_features: Vec<&OsStr> = args
.values_of_os("unstable-features")
.unwrap_or_default()
.collect();
let bin: Vec<&str> = args.values_of("bin").unwrap_or_default().collect();
let example: Vec<&str> = args.values_of("example").unwrap_or_default().collect();
let package: Vec<&str> = args.values_of("package").unwrap_or_default().collect();
let jobs: Option<&str> = args.value_of("jobs");
let release: bool = args.is_present("release");
let profile: Option<&str> = args.value_of("profile");
let features: Vec<&str> = args.values_of("features").unwrap_or_default().collect();
let all_features: bool = args.is_present("all-features");
let no_default_features: bool = args.is_present("no-default-features");
let target: Option<&str> = args.value_of("target");
let target_dir: Option<&str> = args.value_of("target-dir");
let manifest_path: Option<&str> = args.value_of("manifest-path");
// let mut args: Vec<String> = Vec::new();
let mut cargo = process::Command::new("cargo");
let _ = cargo.arg("build");
let _ = cargo.arg("--message-format=json");
if verbose > 0 {
let _ = cargo.arg(format!("-{}", "v".repeat(verbose.try_into().unwrap())));
}
if let Some(color) = color {
let _ = cargo.arg(format!("--color={}", color));
}
if frozen {
let _ = cargo.arg("--frozen");
}
if locked {
let _ = cargo.arg("--locked");
}
if offline {
let _ = cargo.arg("--offline");
}
for config in config {
let _ = cargo.arg(format!("--config={}", config));
}
for unstable_features in unstable_features {
let mut arg = OsString::from("-Z");
arg.push(unstable_features);
let _ = cargo.arg(arg);
}
for bin in bin {
let _ = cargo.arg(format!("--bin={}", bin));
}
for example in example {
let _ = cargo.arg(format!("--example={}", example));
}
for package in package {
let _ = cargo.arg(format!("--package={}", package));
}
if let Some(jobs) = jobs {
let _ = cargo.arg(format!("--jobs={}", jobs));
}
if release {
let _ = cargo.arg("--release");
}
if let Some(profile) = profile {
let _ = cargo.arg(format!("--profile={}", profile));
}
for features in features {
let _ = cargo.arg(format!("--features={}", features));
}
if all_features {
let _ = cargo.arg("--all-features");
}
if no_default_features {
let _ = cargo.arg("--no-default-features");
}
if let Some(target) = target {
let _ = cargo.arg(format!("--target={}", target));
}
if let Some(target_dir) = target_dir {
let _ = cargo.arg(format!("--target-dir={}", target_dir));
}
if let Some(manifest_path) = manifest_path {
let _ = cargo.arg(format!("--manifest-path={}", manifest_path));
}
cargo
}
// https://github.com/rust-lang/cargo/blob/7059559d71de3fffe8c8cb81e32f323454aa96c5/src/cargo/util/command_prelude.rs
trait ArgExt: Sized {
fn opt(name: &'static str, help: &'static str) -> Self;
fn optional_multi_opt(name: &'static str, value_name: &'static str, help: &'static str)
-> Self;
fn multi_opt(name: &'static str, value_name: &'static str, help: &'static str) -> Self;
fn targets_bin_example(bin: &'static str, example: &'static str) -> [Self; 2];
fn package(package: &'static str) -> Self;
fn jobs() -> Self;
fn release(release: &'static str) -> Self;
fn profile(profile: &'static str) -> Self;
fn features() -> [Self; 3];
fn target_triple(target: &'static str) -> Self;
fn target_dir() -> Self;
fn manifest_path() -> Self;
}
impl<'a, 'b> ArgExt for Arg<'a, 'b> {
fn opt(name: &'static str, help: &'static str) -> Self {
Arg::with_name(name).long(name).help(help)
}
fn optional_multi_opt(
name: &'static str, value_name: &'static str, help: &'static str,
) -> Self {
Self::opt(name, help)
.value_name(value_name)
.multiple(true)
.min_values(0)
.number_of_values(1)
}
fn multi_opt(name: &'static str, value_name: &'static str, help: &'static str) -> Self {
// Note that all `.multiple(true)` arguments in Cargo should specify
// `.number_of_values(1)` as well, so that `--foo val1 val2` is
// *not* parsed as `foo` with values ["val1", "val2"].
// `number_of_values` should become the default in clap 3.
Self::opt(name, help)
.value_name(value_name)
.multiple(true)
.number_of_values(1)
}
fn targets_bin_example(bin: &'static str, example: &'static str) -> [Self; 2] {
[
Self::optional_multi_opt("bin", "NAME", bin),
Self::optional_multi_opt("example", "NAME", example),
]
}
fn package(package: &'static str) -> Self {
Self::opt("package", package).short("p").value_name("SPEC")
}
fn jobs() -> Self {
Self::opt("jobs", "Number of parallel jobs, defaults to # of CPUs")
.short("j")
.value_name("N")
}
fn release(release: &'static str) -> Self {
Self::opt("release", release)
}
fn profile(profile: &'static str) -> Self {
Self::opt("profile", profile).value_name("PROFILE-NAME")
}
fn features() -> [Self; 3] {
[
Self::multi_opt(
"features",
"FEATURES",
"Space-separated list of features to activate",
),
Self::opt("all-features", "Activate all available features"),
Self::opt(
"no-default-features",
"Do not activate the `default` feature",
),
]
}
fn target_triple(target: &'static str) -> Self {
Self::opt("target", target).value_name("TRIPLE")
}
fn target_dir() -> Self {
Self::opt("target-dir", "Directory for all generated artifacts").value_name("DIRECTORY")
}
fn manifest_path() -> Self {
Self::opt("manifest-path", "Path to Cargo.toml").value_name("PATH")
}
} | and `--example` specifies the example target to run. At most one of `--bin` or
`--example` can be provided.
All the arguments following the two dashes (`--`) are passed to the binary to
run. If you're passing arguments to both Cargo and the binary, the ones after | random_line_split |
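The `cargo()` helper in this file forwards each parsed CLI option to a child `cargo build` invocation by formatting `--flag=value` strings onto a `std::process::Command`. A minimal stdlib-only sketch of that forwarding pattern — the feature names are illustrative, and `Command::get_args` (stable since Rust 1.57) is used only to inspect the argument list without spawning anything:

```rust
use std::process::Command;

fn main() {
    // Mirror of the forwarding pattern in `cargo()`: each option is re-emitted
    // as a `--flag=value` argument on the child `cargo build` command.
    let mut cargo = Command::new("cargo");
    let _ = cargo.arg("build");
    let _ = cargo.arg("--message-format=json");
    for feature in ["alpha", "beta"] {
        let _ = cargo.arg(format!("--features={}", feature));
    }
    // Inspect the assembled argument list instead of running the process.
    let args: Vec<String> = cargo
        .get_args()
        .map(|a| a.to_string_lossy().into_owned())
        .collect();
    assert_eq!(
        args,
        ["build", "--message-format=json", "--features=alpha", "--features=beta"]
    );
    println!("ok");
}
```

Using `--flag=value` (rather than `--flag value`) keeps each forwarded option a single argument, which avoids any ambiguity when the child command parses them.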
cargo-deploy.rs | //! # `cargo deploy`
//! Run a binary on a constellation cluster
//!
//! ## Usage
//! ```text
//! cargo deploy [options] <host> [--] [<args>]...
//! ```
//!
//! ## Options
//! ```text
//! -h --help Show this screen.
//! -V --version Show version.
//! --format=<fmt> Output format [possible values: human, json] [default: human]
//! ```
//!
//! Note: --format can also be given as an env var, such as `CONSTELLATION_FORMAT=json`
#![warn(
missing_copy_implementations,
missing_debug_implementations,
missing_docs,
trivial_numeric_casts,
unused_extern_crates,
unused_import_braces,
unused_qualifications,
unused_results,
clippy::pedantic
)] // from https://github.com/rust-unofficial/patterns/blob/master/anti_patterns/deny-warnings.md
use clap::{crate_version, App, AppSettings, Arg, ArgMatches, SubCommand};
use std::{
convert::TryInto, env, ffi::{OsStr, OsString}, iter, net::SocketAddr, process
};
use constellation_internal::Format;
fn main() {
let args = cli().get_matches();
let args = args.subcommand_matches("deploy").unwrap();
let host: SocketAddr = args.value_of("host").unwrap().parse().unwrap();
let forward_args: Vec<&OsStr> = args.values_of_os("args").unwrap_or_default().collect();
let output = cargo(args)
.stderr(process::Stdio::inherit())
.output()
.expect("Failed to invoke cargo");
if !output.status.success() {
process::exit(output.status.code().unwrap_or(101));
}
let mut bin = Vec::new();
for message in serde_json::Deserializer::from_slice(&output.stdout).into_iter() {
if let cargo_metadata::Message::CompilerArtifact(artifact) =
message.unwrap_or_else(|_| panic!("Failed to parse output of cargo"))
{
if artifact.target.kind == vec![String::from("bin")]
|| artifact.target.kind == vec![String::from("example")]
{
bin.push((
artifact.target.name,
artifact.filenames.into_iter().next().unwrap(),
));
// We're assuming the first filename is the binary – .dSYM etc. seem to always be second?
}
}
}
if bin.len() > 1 {
let names = bin
.into_iter()
.map(|(target_name, _)| target_name)
.collect::<Vec<_>>();
println!(
"`cargo deploy` could not determine which binary to run. \
Use the `--bin` option to specify a binary.\n\
available binaries: {}",
names.join(", ")
); //, or the `default-run` manifest key // TODO: work out best way to get this / behave same as cargo run
process::exit(1);
} else if bin.is_empty() {
println!("a bin target must be available for `cargo deploy`");
process::exit(1);
}
let path = bin.into_iter().next().unwrap().1;
let args: Vec<OsString> = iter::once(OsString::from(&path))
.chain(forward_args.into_iter().map(ToOwned::to_owned))
.collect();
let vars: Vec<(OsString, OsString)> = env::vars_os().collect();
let format = Format::Human;
constellation::deploy(host, &path, format, args, vars);
}
fn cli<'a, 'b>() -> App<'a, 'b> {
// https://github.com/rust-lang/cargo/blob/7059559d71de3fffe8c8cb81e32f323454aa96c5/src/bin/cargo/cli.rs#L205-L277
// https://github.com/rust-lang/cargo/blob/982622252a64d7c526c04a244f1a81523dc9ae54/src/bin/cargo/commands/run.rs
App::new("cargo")
.bin_name("cargo")
.settings(&[
AppSettings::UnifiedHelpMessage,
AppSettings::DeriveDisplayOrder,
AppSettings::SubcommandRequired,
])
.arg(
Arg::opt(
"verbose",
"Use verbose output (-vv very verbose/build.rs output)",
)
.short("v")
.multiple(true)
.global(true),
)
.arg(
Arg::opt("color", "Coloring: auto, always, never")
.value_name("WHEN")
.global(true),
)
.arg(Arg::opt("frozen", "Require Cargo.lock and cache are up to date").global(true))
.arg(Arg::opt("locked", "Require Cargo.lock is up to date").global(true))
.arg(Arg::opt("offline", "Run without accessing the network").global(true))
.arg(
Arg::multi_opt("config", "KEY=VALUE", "Override a configuration value")
.global(true)
.hidden(true),
)
.arg(
Arg::with_name("unstable-features")
.help("Unstable (nightly-only) flags to Cargo, see 'cargo -Z help' for details")
.short("Z")
.value_name("FLAG")
.multiple(true)
.number_of_values(1)
.global(true),
)
.subcommand(
SubCommand::with_name("deploy")
.settings(&[
AppSettings::UnifiedHelpMessage,
AppSettings::DeriveDisplayOrder,
AppSettings::DontCollapseArgsInUsage,
AppSettings::TrailingVarArg,
])
.version(crate_version!())
.about("Run a binary or example of the local package on a constellation cluster")
//.arg(Arg::opt("quiet", "No output printed to stdout").short("q"))
.arg(
Arg::with_name("host")
.help("Constellation cluster node to connect to (e.g. 10.0.0.1:8888)")
.required(true)
.validator(|host| {
host.parse::<SocketAddr>()
.map(drop)
.map_err(|err| err.to_string())
}),
)
.arg(Arg::with_name("args").multiple(true))
.args(&Arg::targets_bin_example(
"Name of the bin target to run",
"Name of the example target to run",
))
.arg(Arg::package("Package with the target to run"))
.arg(Arg::jobs())
.arg(Arg::release(
"Build artifacts in release mode, with optimizations",
))
.arg(Arg::profile("Build artifacts with the specified profile"))
.args(&Arg::features())
.arg(Arg::target_triple("Build for the target triple"))
.arg(Arg::target_dir())
.arg(Arg::manifest_path())
//.arg(Arg::message_format())
.after_help(
"\
If neither `--bin` nor `--example` are given, then if the package only has one
bin target it will be run. Otherwise `--bin` specifies the bin target to run,
and `--example` specifies the example target to run. At most one of `--bin` or
`--example` can be provided.
All the arguments following the two dashes (`--`) are passed to the binary to
run. If you're passing arguments to both Cargo and the binary, the ones after
`--` go to the binary, the ones before go to Cargo.
",
),
)
}
fn cargo(args: &ArgMatches) -> process::Command {
let verbose: u64 = args.occurrences_of("verbose");
let color: Option<&str> = args.value_of("color");
let frozen: bool = args.is_present("frozen");
let locked: bool = args.is_present("locked");
let offline: bool = args.is_present("offline");
let config: Vec<&str> = args.values_of("config").unwrap_or_default().collect();
let unstable_features: Vec<&OsStr> = args
.values_of_os("unstable-features")
.unwrap_or_default()
.collect();
let bin: Vec<&str> = args.values_of("bin").unwrap_or_default().collect();
let example: Vec<&str> = args.values_of("example").unwrap_or_default().collect();
let package: Vec<&str> = args.values_of("package").unwrap_or_default().collect();
let jobs: Option<&str> = args.value_of("jobs");
let release: bool = args.is_present("release");
let profile: Option<&str> = args.value_of("profile");
let features: Vec<&str> = args.values_of("features").unwrap_or_default().collect();
let all_features: bool = args.is_present("all-features");
let no_default_features: bool = args.is_present("no-default-features");
let target: Option<&str> = args.value_of("target");
let target_dir: Option<&str> = args.value_of("target-dir");
let manifest_path: Option<&str> = args.value_of("manifest-path");
// let mut args: Vec<String> = Vec::new();
let mut cargo = process::Command::new("cargo");
let _ = cargo.arg("build");
let _ = cargo.arg("--message-format=json");
if verbose > 0 {
let _ = cargo.arg(format!("-{}", "v".repeat(verbose.try_into().unwrap())));
}
if let Some(color) = color {
let _ = cargo.arg(format!("--color={}", color));
}
if frozen {
let _ = cargo.arg("--frozen");
}
if locked {
let _ = cargo.arg("--locked");
}
if offline {
let _ = cargo.arg("--offline");
}
for config in config {
let _ = cargo.arg(format!("--config={}", config));
}
for unstable_features in unstable_features {
let mut arg = OsString::from("-Z");
arg.push(unstable_features);
let _ = cargo.arg(arg);
}
for bin in bin {
let _ = cargo.arg(format!("--bin={}", bin));
}
for example in example {
let _ = cargo.arg(format!("--example={}", example));
}
for package in package {
let _ = cargo.arg(format!("--package={}", package));
}
if let Some(jobs) = jobs {
let _ = cargo.arg(format!("--jobs={}", jobs));
}
if release {
let _ = cargo.arg("--release");
}
if let Some(profile) = profile {
let _ = cargo.arg(format!("--profile={}", profile));
}
for features in features {
let _ = cargo.arg(format!("--features={}", features));
}
if all_features {
let _ = cargo.arg("--all-features");
}
if no_default_features {
let _ = cargo.arg("--no-default-features");
}
if let Some(target) = target {
let _ = cargo.arg(format!("--target={}", target));
}
if let Some(target_dir) = target_dir {
let _ = cargo.arg(format!("--target-dir={}", target_dir));
}
if let Some(manifest_path) = manifest_path {
let _ = cargo.arg(format!("--manifest-path={}", manifest_path));
}
cargo
}
// https://github.com/rust-lang/cargo/blob/7059559d71de3fffe8c8cb81e32f323454aa96c5/src/cargo/util/command_prelude.rs
trait ArgExt: Sized {
fn opt(name: &'static str, help: &'static str) -> Self;
fn optional_multi_opt(name: &'static str, value_name: &'static str, help: &'static str)
-> Self;
fn multi_opt(name: &'static str, value_name: &'static str, help: &'static str) -> Self;
fn targets_bin_example(bin: &'static str, example: &'static str) -> [Self; 2];
fn package(package: &'static str) -> Self;
fn jobs() -> Self;
fn release(release: &'static str) -> Self;
fn profile(profile: &'static str) -> Self;
fn features() -> [Self; 3];
fn target_triple(target: &'static str) -> Self;
fn target_dir() -> Self;
fn manifest_path() -> Self;
}
impl<'a, 'b> ArgExt for Arg<'a, 'b> {
fn opt(name: &'static str, help: &'static str) -> Self {
Arg::with_name(name).long(name).help(help)
}
fn optional_multi_opt(
name: &'static str, value_name: &'static str, help: &'static str,
) -> Self {
Self::opt(name, help)
.value_name(value_name)
.multiple(true)
.min_values(0)
.number_of_values(1)
}
fn multi_opt(name: &'static str, value_name: &'static str, help: &'static str) -> Self {
// Note that all `.multiple(true)` arguments in Cargo should specify
// `.number_of_values(1)` as well, so that `--foo val1 val2` is
// *not* parsed as `foo` with values ["val1", "val2"].
// `number_of_values` should become the default in clap 3.
Self::opt(name, help)
.value_name(value_name)
.multiple(true)
.number_of_values(1)
}
fn targets_bin_example(bin: &'static str, example: &'static str) -> [Self; 2] {
[
Self::optional_multi_opt("bin", "NAME", bin),
Self::optional_multi_opt("example", "NAME", example),
]
}
fn package(package: &'static str) -> Self {
Self::opt("package", package).short("p").value_name("SPEC")
}
fn jobs() -> Self {
Self::opt("jobs", "Number of parallel jobs, defaults to # of CPUs")
.short("j")
.value_name("N")
}
fn release(release: &'static str) -> Self {
Self::opt("release", release)
}
fn profile(profile: &'static str) -> Self {
Self::opt("profile", profile).value_name("PROFILE-NAME")
}
fn features() -> [Self; 3] {
[
Self::multi_opt(
"features",
"FEATURES",
"Space-separated list of features to activate",
),
Self::opt("all-features", "Activate all available features"),
Self::opt(
"no-default-features",
"Do not activate the `default` feature",
),
]
}
fn tar | rget: &'static str) -> Self {
Self::opt("target", target).value_name("TRIPLE")
}
fn target_dir() -> Self {
Self::opt("target-dir", "Directory for all generated artifacts").value_name("DIRECTORY")
}
fn manifest_path() -> Self {
Self::opt("manifest-path", "Path to Cargo.toml").value_name("PATH")
}
}
| get_triple(ta | identifier_name |
generator.rs | use std::ffi::{OsStr, OsString};
use std::fs::{File, OpenOptions};
use std::io::{BufRead, BufReader, BufWriter, Write};
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::process::Command;
use std::sync::Arc;
use std::time::Instant;
use std::{env, fs, io, thread};
use opencv_binding_generator::{Generator, IteratorExt};
use crate::docs::transfer_bindings_to_docs;
use super::{files_with_extension, files_with_predicate, Library, Result, MODULES, OUT_DIR, SRC_CPP_DIR, SRC_DIR};
pub struct BindingGenerator {
build_script_path: OsString,
}
impl BindingGenerator {
pub fn new(build_script_path: OsString) -> Self {
Self { build_script_path }
}
pub fn generate_wrapper(&self, opencv_header_dir: &Path, opencv: &Library) -> Result<()> {
let target_docs_dir = env::var_os("OCVRS_DOCS_GENERATE_DIR").map(PathBuf::from);
let target_module_dir = OUT_DIR.join("opencv");
let manual_dir = SRC_DIR.join("manual");
eprintln!("=== Generating code in: {}", OUT_DIR.display());
eprintln!("=== Placing generated bindings into: {}", target_module_dir.display());
if let Some(target_docs_dir) = target_docs_dir.as_ref() {
eprintln!(
"=== Placing static generated docs bindings into: {}",
target_docs_dir.display()
);
}
eprintln!("=== Using OpenCV headers from: {}", opencv_header_dir.display());
let non_dll_files = files_with_predicate(&OUT_DIR, |p| {
p.extension().map_or(true, |ext| !ext.eq_ignore_ascii_case("dll"))
})?;
for path in non_dll_files {
let _ = fs::remove_file(path);
}
let modules = MODULES.get().expect("MODULES not initialized");
self.run(modules, opencv_header_dir, opencv)?;
collect_generated_bindings(modules, &target_module_dir, &manual_dir)?;
if let Some(target_docs_dir) = target_docs_dir {
if !target_docs_dir.exists() {
fs::create_dir(&target_docs_dir)?;
}
transfer_bindings_to_docs(&OUT_DIR, &target_docs_dir);
}
Ok(())
}
fn run(&self, modules: &'static [String], opencv_header_dir: &Path, opencv: &Library) -> Result<()> {
let additional_include_dirs = opencv
.include_paths
.iter()
.map(|path| path.as_path())
.filter(|&include_path| include_path != opencv_header_dir)
.collect::<Vec<_>>();
let gen = Generator::new(opencv_header_dir, &additional_include_dirs, &SRC_CPP_DIR);
eprintln!("=== Clang: {}", gen.clang_version());
eprintln!("=== Clang command line args: {:#?}", gen.build_clang_command_line_args());
let additional_include_dirs = Arc::new(
additional_include_dirs
.into_iter()
.map(|p| p.to_str().expect("Can't convert additional include dir to UTF-8 string"))
.join(","),
);
let opencv_header_dir = Arc::new(opencv_header_dir.to_owned());
let job_server = build_job_server()?;
let mut join_handles = Vec::with_capacity(modules.len());
let start = Instant::now();
// todo use thread::scope when MSRV is 1.63
eprintln!("=== Generating {} modules", modules.len());
modules.iter().for_each(|module| {
let token = job_server.acquire().expect("Can't acquire token from job server");
let join_handle = thread::spawn({
let additional_include_dirs = Arc::clone(&additional_include_dirs);
let opencv_header_dir = Arc::clone(&opencv_header_dir);
let build_script_path = self.build_script_path.clone();
move || {
let module_start = Instant::now();
let mut bin_generator = Command::new(build_script_path);
bin_generator
.arg(&*opencv_header_dir)
.arg(&*SRC_CPP_DIR)
.arg(&*OUT_DIR)
.arg(module)
.arg(&*additional_include_dirs);
eprintln!("=== Running: {bin_generator:?}");
let res = bin_generator
.status()
.unwrap_or_else(|e| panic!("Can't run bindings generator for module: {module}, error: {e}"));
if !res.success() {
panic!("Failed to run the bindings generator for module: {module}");
}
eprintln!("=== Generated: {module} in {:?}", module_start.elapsed());
drop(token); // needed to move the token to the thread
}
});
join_handles.push(join_handle);
});
for join_handle in join_handles {
join_handle.join().expect("Generator process panicked");
}
eprintln!("=== Total binding generation time: {:?}", start.elapsed());
Ok(())
}
}
fn is_type_file(path: &Path, module: &str) -> bool {
path.file_stem().and_then(OsStr::to_str).map_or(false, |stem| {
let mut stem_chars = stem.chars();
(&mut stem_chars).take(3).all(|c| c.is_ascii_digit()) && // first 3 chars are digits
matches!(stem_chars.next(), Some('-')) && // dash
module.chars().zip(&mut stem_chars).all(|(m, s)| m == s) && // module name
matches!(stem_chars.next(), Some('-')) && // dash
stem.ends_with(".type") // ends with ".type"
})
}
fn is_type_externs_file(path: &Path, module: &str) -> bool {
path.file_stem().and_then(OsStr::to_str).map_or(false, |stem| {
let mut stem_chars = stem.chars();
(&mut stem_chars).take(3).all(|c| c.is_ascii_digit()) && // first 3 chars are digits
matches!(stem_chars.next(), Some('-')) && // dash
module.chars().zip(&mut stem_chars).all(|(m, s)| m == s) && // module name
matches!(stem_chars.next(), Some('-')) && // dash
stem.ends_with(".type.externs") // ends with ".type.externs"
})
}
fn copy_indent(mut read: impl BufRead, mut write: impl Write, indent: &str) -> Result<()> {
let mut line = Vec::with_capacity(100);
while read.read_until(b'\n', &mut line)? != 0 {
write.write_all(indent.as_bytes())?;
write.write_all(&line)?;
line.clear();
}
Ok(())
}
fn collect_generated_bindings(modules: &[String], target_module_dir: &Path, manual_dir: &Path) -> Result<()> {
if !target_module_dir.exists() {
fs::create_dir(target_module_dir)?;
}
for path in files_with_extension(target_module_dir, "rs")? {
let _ = fs::remove_file(path);
}
fn write_has_module(mut write: impl Write, module: &str) -> Result<()> {
Ok(writeln!(write, "#[cfg(ocvrs_has_module_{module})]")?)
}
fn write_module_include(write: &mut BufWriter<File>, module: &str) -> Result<()> |
let add_manual = |file: &mut BufWriter<File>, module: &str| -> Result<bool> {
if manual_dir.join(format!("{module}.rs")).exists() {
writeln!(file, "pub use crate::manual::{module}::*;")?;
Ok(true)
} else {
Ok(false)
}
};
let start = Instant::now();
let mut hub_rs = BufWriter::new(File::create(target_module_dir.join("hub.rs"))?);
let mut types_rs = BufWriter::new(File::create(target_module_dir.join("types.rs"))?);
writeln!(types_rs)?;
let mut sys_rs = BufWriter::new(File::create(target_module_dir.join("sys.rs"))?);
writeln!(sys_rs, "use crate::{{mod_prelude_sys::*, core}};")?;
writeln!(sys_rs)?;
for module in modules {
// merge multiple *-type.cpp files into a single module_types.hpp
let module_cpp = OUT_DIR.join(format!("{module}.cpp"));
if module_cpp.is_file() {
let module_types_cpp = OUT_DIR.join(format!("{module}_types.hpp"));
let mut module_types_file = BufWriter::new(
OpenOptions::new()
.create(true)
.truncate(true)
.write(true)
.open(module_types_cpp)?,
);
let mut type_files = files_with_extension(&OUT_DIR, "cpp")?
.filter(|f| is_type_file(f, module))
.collect::<Vec<_>>();
type_files.sort_unstable();
for entry in type_files {
io::copy(&mut BufReader::new(File::open(&entry)?), &mut module_types_file)?;
let _ = fs::remove_file(entry);
}
}
// add module entry to hub.rs and move the module file into opencv/
write_has_module(&mut hub_rs, module)?;
write_module_include(&mut hub_rs, module)?;
let module_filename = format!("{module}.rs");
let module_src_file = OUT_DIR.join(&module_filename);
let mut module_rs = BufWriter::new(File::create(&target_module_dir.join(&module_filename))?);
// Need to wrap modules inside `mod { }` because they have top-level comments (`//!`) and those don't play well when the
// module file is include!d (as opposed to attaching the module with `mod` from the parent module).
// The same doesn't apply to `sys` and `types` below because they don't contain top-level comments.
writeln!(module_rs, "pub mod {module} {{")?;
copy_indent(BufReader::new(File::open(&module_src_file)?), &mut module_rs, "\t")?;
add_manual(&mut module_rs, module)?;
writeln!(module_rs, "}}")?;
let _ = fs::remove_file(module_src_file);
// merge multiple *-.type.rs files into a single types.rs
let mut header_written = false;
let mut type_files = files_with_extension(&OUT_DIR, "rs")?
.filter(|f| is_type_file(f, module))
.collect::<Vec<_>>();
type_files.sort_unstable();
for entry in type_files {
if entry.metadata().map(|meta| meta.len()).unwrap_or(0) > 0 {
if !header_written {
write_has_module(&mut types_rs, module)?;
writeln!(types_rs, "mod {module}_types {{")?;
writeln!(types_rs, "\tuse crate::{{mod_prelude::*, core, types, sys}};")?;
writeln!(types_rs)?;
header_written = true;
}
copy_indent(BufReader::new(File::open(&entry)?), &mut types_rs, "\t")?;
}
let _ = fs::remove_file(entry);
}
if header_written {
writeln!(types_rs, "}}")?;
write_has_module(&mut types_rs, module)?;
writeln!(types_rs, "pub use {module}_types::*;")?;
writeln!(types_rs)?;
}
// merge module-specific *.externs.rs and generated type-specific *.type.externs.rs into a single sys.rs
let externs_rs = OUT_DIR.join(format!("{module}.externs.rs"));
write_has_module(&mut sys_rs, module)?;
writeln!(sys_rs, "mod {module}_sys {{")?;
writeln!(sys_rs, "\tuse super::*;")?;
writeln!(sys_rs)?;
writeln!(sys_rs, "\textern \"C\" {{")?;
copy_indent(BufReader::new(File::open(&externs_rs)?), &mut sys_rs, "\t\t")?;
let _ = fs::remove_file(externs_rs);
let mut type_extern_files = files_with_extension(&OUT_DIR, "rs")?
.filter(|f| is_type_externs_file(f, module))
.collect::<Vec<_>>();
type_extern_files.sort_unstable();
for entry in type_extern_files {
if entry.metadata().map(|meta| meta.len()).unwrap_or(0) > 0 {
copy_indent(BufReader::new(File::open(&entry)?), &mut sys_rs, "\t\t")?;
}
let _ = fs::remove_file(entry);
}
writeln!(sys_rs, "\t}}")?;
writeln!(sys_rs, "}}")?;
write_has_module(&mut sys_rs, module)?;
writeln!(sys_rs, "pub use {module}_sys::*;")?;
writeln!(sys_rs)?;
}
writeln!(hub_rs, "pub mod types {{")?;
write_module_include(&mut hub_rs, "types")?;
writeln!(hub_rs, "}}")?;
writeln!(hub_rs, "#[doc(hidden)]")?;
writeln!(hub_rs, "pub mod sys {{")?;
write_module_include(&mut hub_rs, "sys")?;
writeln!(hub_rs, "}}")?;
add_manual(&mut types_rs, "types")?;
add_manual(&mut sys_rs, "sys")?;
// write hub_prelude that imports all module-specific preludes
writeln!(hub_rs, "pub mod hub_prelude {{")?;
for module in modules {
write!(hub_rs, "\t")?;
write_has_module(&mut hub_rs, module)?;
writeln!(hub_rs, "\tpub use super::{module}::prelude::*;")?;
}
writeln!(hub_rs, "}}")?;
eprintln!("=== Total binding collection time: {:?}", start.elapsed());
Ok(())
}
fn build_job_server() -> Result<Jobserver> {
unsafe { jobserver::Client::from_env() }
.and_then(|client| {
let own_token_released = client.release_raw().is_ok();
let available_jobs = client.available().unwrap_or(0);
if available_jobs > 0 {
eprintln!("=== Using environment job server with {available_jobs} available jobs");
Some(Jobserver {
client,
reacquire_token_on_drop: own_token_released,
})
} else {
client.acquire_raw().expect("Can't reacquire build script thread token");
eprintln!(
"=== Environment-created jobserver reports {available_jobs} available jobs, or there was an error reading that value"
);
None
}
})
.or_else(|| {
let num_jobs = env::var("NUM_JOBS")
.ok()
.and_then(|jobs| jobs.parse().ok())
.unwrap_or(2)
.max(1);
eprintln!("=== Creating a new job server with num_jobs: {num_jobs}");
jobserver::Client::new(num_jobs).ok().map(|client| Jobserver {
client,
reacquire_token_on_drop: false,
})
})
.ok_or_else(|| "Can't create job server".into())
}
pub struct Jobserver {
client: jobserver::Client,
reacquire_token_on_drop: bool,
}
impl Drop for Jobserver {
fn drop(&mut self) {
if self.reacquire_token_on_drop {
self.client.acquire_raw().expect("Can't reacquire build script thread token");
}
}
}
impl Deref for Jobserver {
type Target = jobserver::Client;
fn deref(&self) -> &Self::Target {
&self.client
}
}
generator.rs
use std::ffi::{OsStr, OsString};
use std::fs::{File, OpenOptions};
use std::io::{BufRead, BufReader, BufWriter, Write};
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::process::Command;
use std::sync::Arc;
use std::time::Instant;
use std::{env, fs, io, thread};
use opencv_binding_generator::{Generator, IteratorExt};
use crate::docs::transfer_bindings_to_docs;
use super::{files_with_extension, files_with_predicate, Library, Result, MODULES, OUT_DIR, SRC_CPP_DIR, SRC_DIR};
pub struct BindingGenerator {
build_script_path: OsString,
}
impl BindingGenerator {
pub fn new(build_script_path: OsString) -> Self {
Self { build_script_path }
}
pub fn generate_wrapper(&self, opencv_header_dir: &Path, opencv: &Library) -> Result<()> {
let target_docs_dir = env::var_os("OCVRS_DOCS_GENERATE_DIR").map(PathBuf::from);
let target_module_dir = OUT_DIR.join("opencv");
let manual_dir = SRC_DIR.join("manual");
eprintln!("=== Generating code in: {}", OUT_DIR.display());
eprintln!("=== Placing generated bindings into: {}", target_module_dir.display());
if let Some(target_docs_dir) = target_docs_dir.as_ref() {
eprintln!(
"=== Placing static generated docs bindings into: {}",
target_docs_dir.display()
);
}
eprintln!("=== Using OpenCV headers from: {}", opencv_header_dir.display());
let non_dll_files = files_with_predicate(&OUT_DIR, |p| {
			p.extension().map_or(true, |ext| !ext.eq_ignore_ascii_case("dll"))
})?;
for path in non_dll_files {
let _ = fs::remove_file(path);
}
let modules = MODULES.get().expect("MODULES not initialized");
self.run(modules, opencv_header_dir, opencv)?;
collect_generated_bindings(modules, &target_module_dir, &manual_dir)?;
if let Some(target_docs_dir) = target_docs_dir {
			if !target_docs_dir.exists() {
fs::create_dir(&target_docs_dir)?;
}
transfer_bindings_to_docs(&OUT_DIR, &target_docs_dir);
}
Ok(())
}
fn run(&self, modules: &'static [String], opencv_header_dir: &Path, opencv: &Library) -> Result<()> {
let additional_include_dirs = opencv
.include_paths
.iter()
.map(|path| path.as_path())
			.filter(|&include_path| include_path != opencv_header_dir)
.collect::<Vec<_>>();
let gen = Generator::new(opencv_header_dir, &additional_include_dirs, &SRC_CPP_DIR);
eprintln!("=== Clang: {}", gen.clang_version());
eprintln!("=== Clang command line args: {:#?}", gen.build_clang_command_line_args());
let additional_include_dirs = Arc::new(
additional_include_dirs
.into_iter()
.map(|p| p.to_str().expect("Can't convert additional include dir to UTF-8 string"))
.join(","),
);
let opencv_header_dir = Arc::new(opencv_header_dir.to_owned());
let job_server = build_job_server()?;
let mut join_handles = Vec::with_capacity(modules.len());
let start = Instant::now();
// todo use thread::scope when MSRV is 1.63
eprintln!("=== Generating {} modules", modules.len());
modules.iter().for_each(|module| {
let token = job_server.acquire().expect("Can't acquire token from job server");
let join_handle = thread::spawn({
let additional_include_dirs = Arc::clone(&additional_include_dirs);
let opencv_header_dir = Arc::clone(&opencv_header_dir);
let build_script_path = self.build_script_path.clone();
move || {
let module_start = Instant::now();
let mut bin_generator = Command::new(build_script_path);
bin_generator
.arg(&*opencv_header_dir)
.arg(&*SRC_CPP_DIR)
.arg(&*OUT_DIR)
.arg(module)
.arg(&*additional_include_dirs);
eprintln!("=== Running: {bin_generator:?}");
let res = bin_generator
.status()
.unwrap_or_else(|e| panic!("Can't run bindings generator for module: {module}, error: {e}"));
					if !res.success() {
panic!("Failed to run the bindings generator for module: {module}");
}
eprintln!("=== Generated: {module} in {:?}", module_start.elapsed());
drop(token); // needed to move the token to the thread
}
});
join_handles.push(join_handle);
});
for join_handle in join_handles {
join_handle.join().expect("Generator process panicked");
}
eprintln!("=== Total binding generation time: {:?}", start.elapsed());
Ok(())
}
}
fn is_type_file(path: &Path, module: &str) -> bool {
path.file_stem().and_then(OsStr::to_str).map_or(false, |stem| {
let mut stem_chars = stem.chars();
(&mut stem_chars).take(3).all(|c| c.is_ascii_digit()) && // first 3 chars are digits
matches!(stem_chars.next(), Some('-')) && // dash
module.chars().zip(&mut stem_chars).all(|(m, s)| m == s) && // module name
matches!(stem_chars.next(), Some('-')) && // dash
stem.ends_with(".type") // ends with ".type"
})
}
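The stem check above accepts generated names shaped like `012-core-Ptr_Feature2D.type.cpp` (a hypothetical example) for module `core`: three digits, a dash, the module name, a dash, and a `.type` suffix. A self-contained re-implementation of the same predicate:

```rust
use std::ffi::OsStr;
use std::path::Path;

// Standalone copy of the `is_type_file` stem check, for illustration only;
// the file names exercised in `main` are hypothetical.
fn matches_type_file(path: &Path, module: &str) -> bool {
    path.file_stem().and_then(OsStr::to_str).map_or(false, |stem| {
        let mut chars = stem.chars();
        (&mut chars).take(3).all(|c| c.is_ascii_digit()) // three leading digits
            && matches!(chars.next(), Some('-')) // dash
            && module.chars().zip(&mut chars).all(|(m, s)| m == s) // module name
            && matches!(chars.next(), Some('-')) // dash
            && stem.ends_with(".type") // ".type" suffix of the stem
    })
}

fn main() {
    // Stem "012-core-Ptr_Feature2D.type" matches for module "core".
    assert!(matches_type_file(Path::new("012-core-Ptr_Feature2D.type.cpp"), "core"));
    // Wrong module name: rejected.
    assert!(!matches_type_file(Path::new("012-imgproc-Mat.type.cpp"), "core"));
    // Missing numeric prefix: rejected.
    assert!(!matches_type_file(Path::new("core-Mat.type.cpp"), "core"));
    println!("ok");
}
```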
fn is_type_externs_file(path: &Path, module: &str) -> bool {
path.file_stem().and_then(OsStr::to_str).map_or(false, |stem| {
let mut stem_chars = stem.chars();
(&mut stem_chars).take(3).all(|c| c.is_ascii_digit()) && // first 3 chars are digits
matches!(stem_chars.next(), Some('-')) && // dash
module.chars().zip(&mut stem_chars).all(|(m, s)| m == s) && // module name
matches!(stem_chars.next(), Some('-')) && // dash
		stem.ends_with(".type.externs") // ends with ".type.externs"
})
}
fn copy_indent(mut read: impl BufRead, mut write: impl Write, indent: &str) -> Result<()> {
let mut line = Vec::with_capacity(100);
	while read.read_until(b'\n', &mut line)? != 0 {
write.write_all(indent.as_bytes())?;
write.write_all(&line)?;
line.clear();
}
Ok(())
}
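`copy_indent` streams one file into another while prefixing every line, which is how module bodies get re-indented when wrapped in `pub mod { }` blocks. A minimal standalone copy of the helper, exercised on an in-memory buffer instead of real files:

```rust
use std::io::{BufRead, Write};

// Same logic as the `copy_indent` helper above: copy `read` into `write`,
// prefixing each line with `indent`.
fn copy_indent(mut read: impl BufRead, mut write: impl Write, indent: &str) -> std::io::Result<()> {
    let mut line = Vec::with_capacity(100);
    while read.read_until(b'\n', &mut line)? != 0 {
        write.write_all(indent.as_bytes())?;
        write.write_all(&line)?;
        line.clear();
    }
    Ok(())
}

fn main() {
    // A byte slice implements BufRead, so no temp files are needed.
    let src = b"fn f() {}\nfn g() {}\n";
    let mut out = Vec::new();
    copy_indent(&src[..], &mut out, "\t").unwrap();
    assert_eq!(out, b"\tfn f() {}\n\tfn g() {}\n".to_vec());
    println!("ok");
}
```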
fn collect_generated_bindings(modules: &[String], target_module_dir: &Path, manual_dir: &Path) -> Result<()> {
	if !target_module_dir.exists() {
fs::create_dir(target_module_dir)?;
}
for path in files_with_extension(target_module_dir, "rs")? {
let _ = fs::remove_file(path);
}
	fn write_has_module(mut write: impl Write, module: &str) -> Result<()> {
Ok(writeln!(write, "#[cfg(ocvrs_has_module_{module})]")?)
}
fn write_module_include(write: &mut BufWriter<File>, module: &str) -> Result<()> {
// Use include instead of #[path] attribute because rust-analyzer doesn't handle #[path] inside other include! too well:
// https://github.com/twistedfall/opencv-rust/issues/418
// https://github.com/rust-lang/rust-analyzer/issues/11682
Ok(writeln!(
write,
r#"include!(concat!(env!("OUT_DIR"), "/opencv/{module}.rs"));"#
)?)
}
let add_manual = |file: &mut BufWriter<File>, module: &str| -> Result<bool> {
if manual_dir.join(format!("{module}.rs")).exists() {
writeln!(file, "pub use crate::manual::{module}::*;")?;
Ok(true)
} else {
Ok(false)
}
};
let start = Instant::now();
let mut hub_rs = BufWriter::new(File::create(target_module_dir.join("hub.rs"))?);
let mut types_rs = BufWriter::new(File::create(target_module_dir.join("types.rs"))?);
writeln!(types_rs)?;
let mut sys_rs = BufWriter::new(File::create(target_module_dir.join("sys.rs"))?);
writeln!(sys_rs, "use crate::{{mod_prelude_sys::*, core}};")?;
writeln!(sys_rs)?;
for module in modules {
// merge multiple *-type.cpp files into a single module_types.hpp
let module_cpp = OUT_DIR.join(format!("{module}.cpp"));
if module_cpp.is_file() {
let module_types_cpp = OUT_DIR.join(format!("{module}_types.hpp"));
let mut module_types_file = BufWriter::new(
OpenOptions::new()
.create(true)
.truncate(true)
.write(true)
.open(module_types_cpp)?,
);
let mut type_files = files_with_extension(&OUT_DIR, "cpp")?
.filter(|f| is_type_file(f, module))
.collect::<Vec<_>>();
type_files.sort_unstable();
for entry in type_files {
io::copy(&mut BufReader::new(File::open(&entry)?), &mut module_types_file)?;
let _ = fs::remove_file(entry);
}
}
// add module entry to hub.rs and move the module file into opencv/
write_has_module(&mut hub_rs, module)?;
write_module_include(&mut hub_rs, module)?;
let module_filename = format!("{module}.rs");
let module_src_file = OUT_DIR.join(&module_filename);
let mut module_rs = BufWriter::new(File::create(&target_module_dir.join(&module_filename))?);
// Need to wrap modules inside `mod { }` because they have top-level comments (//!) and those don't play well when
// module file is include!d (as opposed to connecting the module with `mod` from the parent module).
// The same doesn't apply to `sys` and `types` below because they don't contain top-level comments.
writeln!(module_rs, "pub mod {module} {{")?;
copy_indent(BufReader::new(File::open(&module_src_file)?), &mut module_rs, "\t")?;
add_manual(&mut module_rs, module)?;
writeln!(module_rs, "}}")?;
let _ = fs::remove_file(module_src_file);
// merge multiple *-.type.rs files into a single types.rs
let mut header_written = false;
let mut type_files = files_with_extension(&OUT_DIR, "rs")?
.filter(|f| is_type_file(f, module))
.collect::<Vec<_>>();
type_files.sort_unstable();
for entry in type_files {
if entry.metadata().map(|meta| meta.len()).unwrap_or(0) > 0 {
				if !header_written {
write_has_module(&mut types_rs, module)?;
writeln!(types_rs, "mod {module}_types {{")?;
writeln!(types_rs, "\tuse crate::{{mod_prelude::*, core, types, sys}};")?;
writeln!(types_rs)?;
header_written = true;
}
copy_indent(BufReader::new(File::open(&entry)?), &mut types_rs, "\t")?;
}
let _ = fs::remove_file(entry);
}
if header_written {
writeln!(types_rs, "}}")?;
write_has_module(&mut types_rs, module)?;
writeln!(types_rs, "pub use {module}_types::*;")?;
writeln!(types_rs)?;
}
// merge module-specific *.externs.rs and generated type-specific *.type.externs.rs into a single sys.rs
let externs_rs = OUT_DIR.join(format!("{module}.externs.rs"));
write_has_module(&mut sys_rs, module)?;
writeln!(sys_rs, "mod {module}_sys {{")?;
writeln!(sys_rs, "\tuse super::*;")?;
writeln!(sys_rs)?;
writeln!(sys_rs, "\textern \"C\" {{")?;
copy_indent(BufReader::new(File::open(&externs_rs)?), &mut sys_rs, "\t\t")?;
let _ = fs::remove_file(externs_rs);
let mut type_extern_files = files_with_extension(&OUT_DIR, "rs")?
.filter(|f| is_type_externs_file(f, module))
.collect::<Vec<_>>();
type_extern_files.sort_unstable();
for entry in type_extern_files {
if entry.metadata().map(|meta| meta.len()).unwrap_or(0) > 0 {
copy_indent(BufReader::new(File::open(&entry)?), &mut sys_rs, "\t\t")?;
}
let _ = fs::remove_file(entry);
}
writeln!(sys_rs, "\t}}")?;
writeln!(sys_rs, "}}")?;
write_has_module(&mut sys_rs, module)?;
writeln!(sys_rs, "pub use {module}_sys::*;")?;
writeln!(sys_rs)?;
}
writeln!(hub_rs, "pub mod types {{")?;
write_module_include(&mut hub_rs, "types")?;
writeln!(hub_rs, "}}")?;
writeln!(hub_rs, "#[doc(hidden)]")?;
writeln!(hub_rs, "pub mod sys {{")?;
write_module_include(&mut hub_rs, "sys")?;
writeln!(hub_rs, "}}")?;
add_manual(&mut types_rs, "types")?;
add_manual(&mut sys_rs, "sys")?;
// write hub_prelude that imports all module-specific preludes
writeln!(hub_rs, "pub mod hub_prelude {{")?;
for module in modules {
write!(hub_rs, "\t")?;
write_has_module(&mut hub_rs, module)?;
writeln!(hub_rs, "\tpub use super::{module}::prelude::*;")?;
}
writeln!(hub_rs, "}}")?;
eprintln!("=== Total binding collection time: {:?}", start.elapsed());
Ok(())
}
fn build_job_server() -> Result<Jobserver> {
unsafe { jobserver::Client::from_env() }
.and_then(|client| {
let own_token_released = client.release_raw().is_ok();
let available_jobs = client.available().unwrap_or(0);
if available_jobs > 0 {
				eprintln!("=== Using environment job server with the amount of available jobs: {available_jobs}");
Some(Jobserver {
client,
reacquire_token_on_drop: own_token_released,
})
} else {
client.acquire_raw().expect("Can't reacquire build script thread token");
				eprintln!(
					"=== Environment jobserver reports {available_jobs} available jobs (or the value couldn't be read), not using it"
				);
None
}
})
.or_else(|| {
let num_jobs = env::var("NUM_JOBS")
.ok()
.and_then(|jobs| jobs.parse().ok())
.unwrap_or(2)
.max(1);
eprintln!("=== Creating a new job server with num_jobs: {num_jobs}");
jobserver::Client::new(num_jobs).ok().map(|client| Jobserver {
client,
reacquire_token_on_drop: false,
})
})
.ok_or_else(|| "Can't create job server".into())
}
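The `NUM_JOBS` fallback above can be isolated into a small helper; this sketch reproduces the parse-default-clamp chain (the environment values set in `main` are illustrative):

```rust
use std::env;

// The NUM_JOBS fallback from `build_job_server`, extracted: parse the
// variable if present, default to 2 when absent or unparsable, never go
// below 1.
fn num_jobs_from_env() -> usize {
    env::var("NUM_JOBS")
        .ok()
        .and_then(|jobs| jobs.parse().ok())
        .unwrap_or(2)
        .max(1)
}

fn main() {
    env::set_var("NUM_JOBS", "8");
    assert_eq!(num_jobs_from_env(), 8);
    env::set_var("NUM_JOBS", "not-a-number");
    assert_eq!(num_jobs_from_env(), 2); // unparsable falls back to the default
    env::set_var("NUM_JOBS", "0");
    assert_eq!(num_jobs_from_env(), 1); // clamped to at least one job
    println!("ok");
}
```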
pub struct Jobserver {
client: jobserver::Client,
reacquire_token_on_drop: bool,
}
impl Drop for Jobserver {
fn drop(&mut self) {
if self.reacquire_token_on_drop {
self.client.acquire_raw().expect("Can't reacquire build script thread token");
}
}
}
impl Deref for Jobserver {
type Target = jobserver::Client;
fn deref(&self) -> &Self::Target {
&self.client
}
}
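The token discipline used with `jobserver::Client` above (acquire one token per spawned generator thread, return it when the token is dropped) can be mimicked with a std-only pool; this is an illustrative stand-in, not the `jobserver` crate API:

```rust
use std::sync::mpsc::{sync_channel, Receiver, SyncSender};

// Minimal token pool mimicking the acquire/drop pattern used with
// `jobserver::Client`; purely illustrative.
struct TokenPool {
    tx: SyncSender<()>,
    rx: Receiver<()>,
}

struct Token<'a>(&'a SyncSender<()>);

impl Drop for Token<'_> {
    fn drop(&mut self) {
        // Returning the token unblocks the next `acquire`.
        let _ = self.0.send(());
    }
}

impl TokenPool {
    fn new(n: usize) -> Self {
        let (tx, rx) = sync_channel(n);
        for _ in 0..n {
            tx.send(()).unwrap(); // pre-fill the pool with n tokens
        }
        TokenPool { tx, rx }
    }

    fn acquire(&self) -> Token<'_> {
        self.rx.recv().unwrap(); // blocks until a token is available
        Token(&self.tx)
    }
}

fn main() {
    let pool = TokenPool::new(2);
    let t1 = pool.acquire();
    let _t2 = pool.acquire();
    drop(t1); // frees a slot, just like dropping a jobserver token
    let _t3 = pool.acquire();
    println!("ok");
}
```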
use std::ffi::{OsStr, OsString};
use std::fs::{File, OpenOptions};
use std::io::{BufRead, BufReader, BufWriter, Write};
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::process::Command;
use std::sync::Arc;
use std::time::Instant;
use std::{env, fs, io, thread};
use opencv_binding_generator::{Generator, IteratorExt};
use crate::docs::transfer_bindings_to_docs;
use super::{files_with_extension, files_with_predicate, Library, Result, MODULES, OUT_DIR, SRC_CPP_DIR, SRC_DIR};
pub struct BindingGenerator {
build_script_path: OsString,
}
impl BindingGenerator {
pub fn new(build_script_path: OsString) -> Self {
Self { build_script_path }
}
pub fn generate_wrapper(&self, opencv_header_dir: &Path, opencv: &Library) -> Result<()> {
let target_docs_dir = env::var_os("OCVRS_DOCS_GENERATE_DIR").map(PathBuf::from);
let target_module_dir = OUT_DIR.join("opencv");
let manual_dir = SRC_DIR.join("manual");
eprintln!("=== Generating code in: {}", OUT_DIR.display());
eprintln!("=== Placing generated bindings into: {}", target_module_dir.display());
if let Some(target_docs_dir) = target_docs_dir.as_ref() {
eprintln!(
"=== Placing static generated docs bindings into: {}",
target_docs_dir.display()
);
}
eprintln!("=== Using OpenCV headers from: {}", opencv_header_dir.display());
let non_dll_files = files_with_predicate(&OUT_DIR, |p| {
			p.extension().map_or(true, |ext| !ext.eq_ignore_ascii_case("dll"))
})?;
for path in non_dll_files {
let _ = fs::remove_file(path);
}
let modules = MODULES.get().expect("MODULES not initialized");
self.run(modules, opencv_header_dir, opencv)?;
collect_generated_bindings(modules, &target_module_dir, &manual_dir)?;
if let Some(target_docs_dir) = target_docs_dir {
			if !target_docs_dir.exists() {
fs::create_dir(&target_docs_dir)?;
}
transfer_bindings_to_docs(&OUT_DIR, &target_docs_dir);
}
Ok(())
}
fn run(&self, modules: &'static [String], opencv_header_dir: &Path, opencv: &Library) -> Result<()> {
let additional_include_dirs = opencv
.include_paths
.iter()
.map(|path| path.as_path())
			.filter(|&include_path| include_path != opencv_header_dir)
.collect::<Vec<_>>();
let gen = Generator::new(opencv_header_dir, &additional_include_dirs, &SRC_CPP_DIR);
eprintln!("=== Clang: {}", gen.clang_version());
eprintln!("=== Clang command line args: {:#?}", gen.build_clang_command_line_args());
let additional_include_dirs = Arc::new(
additional_include_dirs
.into_iter()
.map(|p| p.to_str().expect("Can't convert additional include dir to UTF-8 string"))
.join(","),
);
let opencv_header_dir = Arc::new(opencv_header_dir.to_owned());
let job_server = build_job_server()?;
let mut join_handles = Vec::with_capacity(modules.len());
let start = Instant::now();
// todo use thread::scope when MSRV is 1.63
eprintln!("=== Generating {} modules", modules.len());
modules.iter().for_each(|module| {
let token = job_server.acquire().expect("Can't acquire token from job server");
let join_handle = thread::spawn({
let additional_include_dirs = Arc::clone(&additional_include_dirs);
let opencv_header_dir = Arc::clone(&opencv_header_dir);
let build_script_path = self.build_script_path.clone();
move || {
let module_start = Instant::now();
let mut bin_generator = Command::new(build_script_path);
bin_generator
.arg(&*opencv_header_dir)
.arg(&*SRC_CPP_DIR)
.arg(&*OUT_DIR)
.arg(module)
.arg(&*additional_include_dirs);
eprintln!("=== Running: {bin_generator:?}");
let res = bin_generator
.status()
.unwrap_or_else(|e| panic!("Can't run bindings generator for module: {module}, error: {e}"));
					if !res.success() {
panic!("Failed to run the bindings generator for module: {module}");
}
eprintln!("=== Generated: {module} in {:?}", module_start.elapsed());
drop(token); // needed to move the token to the thread
}
});
join_handles.push(join_handle);
});
for join_handle in join_handles {
join_handle.join().expect("Generator process panicked");
}
eprintln!("=== Total binding generation time: {:?}", start.elapsed());
Ok(())
}
}
fn is_type_file(path: &Path, module: &str) -> bool {
path.file_stem().and_then(OsStr::to_str).map_or(false, |stem| {
let mut stem_chars = stem.chars();
(&mut stem_chars).take(3).all(|c| c.is_ascii_digit()) && // first 3 chars are digits
matches!(stem_chars.next(), Some('-')) && // dash
module.chars().zip(&mut stem_chars).all(|(m, s)| m == s) && // module name
matches!(stem_chars.next(), Some('-')) && // dash
stem.ends_with(".type") // ends with ".type"
})
}
fn is_type_externs_file(path: &Path, module: &str) -> bool {
path.file_stem().and_then(OsStr::to_str).map_or(false, |stem| {
let mut stem_chars = stem.chars();
(&mut stem_chars).take(3).all(|c| c.is_ascii_digit()) && // first 3 chars are digits
matches!(stem_chars.next(), Some('-')) && // dash
module.chars().zip(&mut stem_chars).all(|(m, s)| m == s) && // module name
matches!(stem_chars.next(), Some('-')) && // dash
		stem.ends_with(".type.externs") // ends with ".type.externs"
})
}
fn copy_indent(mut read: impl BufRead, mut write: impl Write, indent: &str) -> Result<()> {
let mut line = Vec::with_capacity(100);
	while read.read_until(b'\n', &mut line)? != 0 {
write.write_all(indent.as_bytes())?;
write.write_all(&line)?;
line.clear();
}
Ok(())
}
fn collect_generated_bindings(modules: &[String], target_module_dir: &Path, manual_dir: &Path) -> Result<()> {
	if !target_module_dir.exists() {
fs::create_dir(target_module_dir)?;
}
for path in files_with_extension(target_module_dir, "rs")? {
let _ = fs::remove_file(path);
}
fn write_has_module(mut write: impl Write, module: &str) -> Result<()> {
Ok(writeln!(write, "#[cfg(ocvrs_has_module_{module})]")?)
}
fn write_module_include(write: &mut BufWriter<File>, module: &str) -> Result<()> {
// Use include instead of #[path] attribute because rust-analyzer doesn't handle #[path] inside other include! too well:
// https://github.com/twistedfall/opencv-rust/issues/418
// https://github.com/rust-lang/rust-analyzer/issues/11682
Ok(writeln!(
write,
r#"include!(concat!(env!("OUT_DIR"), "/opencv/{module}.rs"));"#
)?)
}
let add_manual = |file: &mut BufWriter<File>, module: &str| -> Result<bool> {
if manual_dir.join(format!("{module}.rs")).exists() {
writeln!(file, "pub use crate::manual::{module}::*;")?;
Ok(true)
} else {
Ok(false)
}
};
let start = Instant::now();
let mut hub_rs = BufWriter::new(File::create(target_module_dir.join("hub.rs"))?);
let mut types_rs = BufWriter::new(File::create(target_module_dir.join("types.rs"))?);
writeln!(types_rs)?;
let mut sys_rs = BufWriter::new(File::create(target_module_dir.join("sys.rs"))?);
writeln!(sys_rs, "use crate::{{mod_prelude_sys::*, core}};")?;
writeln!(sys_rs)?;
for module in modules {
// merge multiple *-type.cpp files into a single module_types.hpp
let module_cpp = OUT_DIR.join(format!("{module}.cpp"));
		if module_cpp.is_file() {
			let module_types_cpp = OUT_DIR.join(format!("{module}_types.hpp"));
			let mut module_types_file = BufWriter::new(
				OpenOptions::new()
					.create(true)
					.truncate(true)
					.write(true)
					.open(module_types_cpp)?,
			);
			let mut type_files = files_with_extension(&OUT_DIR, "cpp")?
				.filter(|f| is_type_file(f, module))
				.collect::<Vec<_>>();
			type_files.sort_unstable();
			for entry in type_files {
				io::copy(&mut BufReader::new(File::open(&entry)?), &mut module_types_file)?;
				let _ = fs::remove_file(entry);
			}
		}
// add module entry to hub.rs and move the module file into opencv/
write_has_module(&mut hub_rs, module)?;
write_module_include(&mut hub_rs, module)?;
let module_filename = format!("{module}.rs");
let module_src_file = OUT_DIR.join(&module_filename);
let mut module_rs = BufWriter::new(File::create(&target_module_dir.join(&module_filename))?);
// Need to wrap modules inside `mod { }` because they have top-level comments (//!) and those don't play well when
// module file is include!d (as opposed to connecting the module with `mod` from the parent module).
// The same doesn't apply to `sys` and `types` below because they don't contain top-level comments.
writeln!(module_rs, "pub mod {module} {{")?;
copy_indent(BufReader::new(File::open(&module_src_file)?), &mut module_rs, "\t")?;
add_manual(&mut module_rs, module)?;
writeln!(module_rs, "}}")?;
let _ = fs::remove_file(module_src_file);
// merge multiple *-.type.rs files into a single types.rs
let mut header_written = false;
let mut type_files = files_with_extension(&OUT_DIR, "rs")?
.filter(|f| is_type_file(f, module))
.collect::<Vec<_>>();
type_files.sort_unstable();
for entry in type_files {
if entry.metadata().map(|meta| meta.len()).unwrap_or(0) > 0 {
				if !header_written {
write_has_module(&mut types_rs, module)?;
writeln!(types_rs, "mod {module}_types {{")?;
writeln!(types_rs, "\tuse crate::{{mod_prelude::*, core, types, sys}};")?;
writeln!(types_rs)?;
header_written = true;
}
copy_indent(BufReader::new(File::open(&entry)?), &mut types_rs, "\t")?;
}
let _ = fs::remove_file(entry);
}
if header_written {
writeln!(types_rs, "}}")?;
write_has_module(&mut types_rs, module)?;
writeln!(types_rs, "pub use {module}_types::*;")?;
writeln!(types_rs)?;
}
// merge module-specific *.externs.rs and generated type-specific *.type.externs.rs into a single sys.rs
let externs_rs = OUT_DIR.join(format!("{module}.externs.rs"));
write_has_module(&mut sys_rs, module)?;
writeln!(sys_rs, "mod {module}_sys {{")?;
writeln!(sys_rs, "\tuse super::*;")?;
writeln!(sys_rs)?;
writeln!(sys_rs, "\textern \"C\" {{")?;
copy_indent(BufReader::new(File::open(&externs_rs)?), &mut sys_rs, "\t\t")?;
let _ = fs::remove_file(externs_rs);
let mut type_extern_files = files_with_extension(&OUT_DIR, "rs")?
.filter(|f| is_type_externs_file(f, module))
.collect::<Vec<_>>();
type_extern_files.sort_unstable();
for entry in type_extern_files {
if entry.metadata().map(|meta| meta.len()).unwrap_or(0) > 0 {
copy_indent(BufReader::new(File::open(&entry)?), &mut sys_rs, "\t\t")?;
}
let _ = fs::remove_file(entry);
}
writeln!(sys_rs, "\t}}")?;
writeln!(sys_rs, "}}")?;
write_has_module(&mut sys_rs, module)?;
writeln!(sys_rs, "pub use {module}_sys::*;")?;
writeln!(sys_rs)?;
}
writeln!(hub_rs, "pub mod types {{")?;
write_module_include(&mut hub_rs, "types")?;
writeln!(hub_rs, "}}")?;
writeln!(hub_rs, "#[doc(hidden)]")?;
writeln!(hub_rs, "pub mod sys {{")?;
write_module_include(&mut hub_rs, "sys")?;
writeln!(hub_rs, "}}")?;
add_manual(&mut types_rs, "types")?;
add_manual(&mut sys_rs, "sys")?;
// write hub_prelude that imports all module-specific preludes
writeln!(hub_rs, "pub mod hub_prelude {{")?;
for module in modules {
write!(hub_rs, "\t")?;
write_has_module(&mut hub_rs, module)?;
writeln!(hub_rs, "\tpub use super::{module}::prelude::*;")?;
}
writeln!(hub_rs, "}}")?;
eprintln!("=== Total binding collection time: {:?}", start.elapsed());
Ok(())
}
fn build_job_server() -> Result<Jobserver> {
unsafe { jobserver::Client::from_env() }
.and_then(|client| {
let own_token_released = client.release_raw().is_ok();
let available_jobs = client.available().unwrap_or(0);
if available_jobs > 0 {
				eprintln!("=== Using environment job server with the amount of available jobs: {available_jobs}");
Some(Jobserver {
client,
reacquire_token_on_drop: own_token_released,
})
} else {
client.acquire_raw().expect("Can't reacquire build script thread token");
				eprintln!(
					"=== Environment jobserver reports {available_jobs} available jobs (or the value couldn't be read), not using it"
				);
None
}
})
.or_else(|| {
let num_jobs = env::var("NUM_JOBS")
.ok()
.and_then(|jobs| jobs.parse().ok())
.unwrap_or(2)
.max(1);
eprintln!("=== Creating a new job server with num_jobs: {num_jobs}");
jobserver::Client::new(num_jobs).ok().map(|client| Jobserver {
client,
reacquire_token_on_drop: false,
})
})
.ok_or_else(|| "Can't create job server".into())
}
pub struct Jobserver {
client: jobserver::Client,
reacquire_token_on_drop: bool,
}
impl Drop for Jobserver {
fn drop(&mut self) {
if self.reacquire_token_on_drop {
self.client.acquire_raw().expect("Can't reacquire build script thread token");
}
}
}
impl Deref for Jobserver {
type Target = jobserver::Client;
fn deref(&self) -> &Self::Target {
&self.client
}
}
use std::ffi::{OsStr, OsString};
use std::fs::{File, OpenOptions};
use std::io::{BufRead, BufReader, BufWriter, Write};
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::process::Command;
use std::sync::Arc;
use std::time::Instant;
use std::{env, fs, io, thread};
use opencv_binding_generator::{Generator, IteratorExt};
use crate::docs::transfer_bindings_to_docs;
use super::{files_with_extension, files_with_predicate, Library, Result, MODULES, OUT_DIR, SRC_CPP_DIR, SRC_DIR};
pub struct BindingGenerator {
build_script_path: OsString,
}
impl BindingGenerator {
pub fn new(build_script_path: OsString) -> Self {
Self { build_script_path }
}
pub fn generate_wrapper(&self, opencv_header_dir: &Path, opencv: &Library) -> Result<()> {
let target_docs_dir = env::var_os("OCVRS_DOCS_GENERATE_DIR").map(PathBuf::from);
let target_module_dir = OUT_DIR.join("opencv");
let manual_dir = SRC_DIR.join("manual");
eprintln!("=== Generating code in: {}", OUT_DIR.display());
eprintln!("=== Placing generated bindings into: {}", target_module_dir.display());
if let Some(target_docs_dir) = target_docs_dir.as_ref() {
eprintln!(
"=== Placing static generated docs bindings into: {}",
target_docs_dir.display()
);
}
eprintln!("=== Using OpenCV headers from: {}", opencv_header_dir.display());
let non_dll_files = files_with_predicate(&OUT_DIR, |p| {
			p.extension().map_or(true, |ext| !ext.eq_ignore_ascii_case("dll"))
})?;
for path in non_dll_files {
let _ = fs::remove_file(path);
}
let modules = MODULES.get().expect("MODULES not initialized");
self.run(modules, opencv_header_dir, opencv)?;
collect_generated_bindings(modules, &target_module_dir, &manual_dir)?;
if let Some(target_docs_dir) = target_docs_dir {
			if !target_docs_dir.exists() {
fs::create_dir(&target_docs_dir)?;
}
transfer_bindings_to_docs(&OUT_DIR, &target_docs_dir);
}
Ok(())
}
fn run(&self, modules: &'static [String], opencv_header_dir: &Path, opencv: &Library) -> Result<()> {
let additional_include_dirs = opencv
.include_paths
.iter()
.map(|path| path.as_path())
			.filter(|&include_path| include_path != opencv_header_dir)
.collect::<Vec<_>>();
let gen = Generator::new(opencv_header_dir, &additional_include_dirs, &SRC_CPP_DIR);
eprintln!("=== Clang: {}", gen.clang_version());
eprintln!("=== Clang command line args: {:#?}", gen.build_clang_command_line_args());
let additional_include_dirs = Arc::new(
additional_include_dirs
.into_iter()
.map(|p| p.to_str().expect("Can't convert additional include dir to UTF-8 string"))
.join(","),
);
let opencv_header_dir = Arc::new(opencv_header_dir.to_owned());
let job_server = build_job_server()?;
let mut join_handles = Vec::with_capacity(modules.len());
let start = Instant::now();
// todo use thread::scope when MSRV is 1.63
eprintln!("=== Generating {} modules", modules.len());
modules.iter().for_each(|module| {
let token = job_server.acquire().expect("Can't acquire token from job server");
let join_handle = thread::spawn({
let additional_include_dirs = Arc::clone(&additional_include_dirs);
let opencv_header_dir = Arc::clone(&opencv_header_dir);
let build_script_path = self.build_script_path.clone();
move || {
let module_start = Instant::now();
let mut bin_generator = Command::new(build_script_path);
bin_generator
.arg(&*opencv_header_dir)
.arg(&*SRC_CPP_DIR)
.arg(&*OUT_DIR)
.arg(module)
.arg(&*additional_include_dirs);
eprintln!("=== Running: {bin_generator:?}");
let res = bin_generator
.status()
.unwrap_or_else(|e| panic!("Can't run bindings generator for module: {module}, error: {e}"));
					if !res.success() {
panic!("Failed to run the bindings generator for module: {module}");
}
eprintln!("=== Generated: {module} in {:?}", module_start.elapsed());
drop(token); // needed to move the token to the thread
}
});
join_handles.push(join_handle);
});
for join_handle in join_handles {
join_handle.join().expect("Generator process panicked");
}
eprintln!("=== Total binding generation time: {:?}", start.elapsed());
Ok(())
}
}
fn is_type_file(path: &Path, module: &str) -> bool {
path.file_stem().and_then(OsStr::to_str).map_or(false, |stem| {
let mut stem_chars = stem.chars();
(&mut stem_chars).take(3).all(|c| c.is_ascii_digit()) && // first 3 chars are digits
matches!(stem_chars.next(), Some('-')) && // dash
module.chars().zip(&mut stem_chars).all(|(m, s)| m == s) && // module name
matches!(stem_chars.next(), Some('-')) && // dash
stem.ends_with(".type") // ends with ".type"
})
}
fn is_type_externs_file(path: &Path, module: &str) -> bool {
path.file_stem().and_then(OsStr::to_str).map_or(false, |stem| {
let mut stem_chars = stem.chars();
(&mut stem_chars).take(3).all(|c| c.is_ascii_digit()) && // first 3 chars are digits
matches!(stem_chars.next(), Some('-')) && // dash
module.chars().zip(&mut stem_chars).all(|(m, s)| m == s) && // module name
matches!(stem_chars.next(), Some('-')) && // dash
		stem.ends_with(".type.externs") // ends with ".type.externs"
})
}
fn copy_indent(mut read: impl BufRead, mut write: impl Write, indent: &str) -> Result<()> {
let mut line = Vec::with_capacity(100);
	while read.read_until(b'\n', &mut line)? != 0 {
write.write_all(indent.as_bytes())?;
write.write_all(&line)?;
line.clear();
}
Ok(())
}
fn collect_generated_bindings(modules: &[String], target_module_dir: &Path, manual_dir: &Path) -> Result<()> {
	if !target_module_dir.exists() {
fs::create_dir(target_module_dir)?;
}
for path in files_with_extension(target_module_dir, "rs")? {
let _ = fs::remove_file(path);
}
fn write_has_module(mut write: impl Write, module: &str) -> Result<()> {
Ok(writeln!(write, "#[cfg(ocvrs_has_module_{module})]")?)
}
fn write_module_include(write: &mut BufWriter<File>, module: &str) -> Result<()> {
// Use include instead of #[path] attribute because rust-analyzer doesn't handle #[path] inside other include! too well:
// https://github.com/twistedfall/opencv-rust/issues/418
// https://github.com/rust-lang/rust-analyzer/issues/11682
Ok(writeln!(
write,
r#"include!(concat!(env!("OUT_DIR"), "/opencv/{module}.rs"));"#
)?)
}
let add_manual = |file: &mut BufWriter<File>, module: &str| -> Result<bool> {
if manual_dir.join(format!("{module}.rs")).exists() {
writeln!(file, "pub use crate::manual::{module}::*;")?;
Ok(true)
} else {
Ok(false)
}
};
let start = Instant::now();
let mut hub_rs = BufWriter::new(File::create(target_module_dir.join("hub.rs"))?);
let mut types_rs = BufWriter::new(File::create(target_module_dir.join("types.rs"))?);
writeln!(types_rs)?;
let mut sys_rs = BufWriter::new(File::create(target_module_dir.join("sys.rs"))?);
writeln!(sys_rs, "use crate::{{mod_prelude_sys::*, core}};")?;
writeln!(sys_rs)?;
for module in modules {
// merge multiple *-type.cpp files into a single module_types.hpp
let module_cpp = OUT_DIR.join(format!("{module}.cpp"));
if module_cpp.is_file() {
let module_types_cpp = OUT_DIR.join(format!("{module}_types.hpp"));
let mut module_types_file = BufWriter::new(
OpenOptions::new()
.create(true)
.truncate(true)
.write(true)
.open(module_types_cpp)?,
);
let mut type_files = files_with_extension(&OUT_DIR, "cpp")?
.filter(|f| is_type_file(f, module))
.collect::<Vec<_>>();
type_files.sort_unstable();
			for entry in type_files {
				io::copy(&mut BufReader::new(File::open(&entry)?), &mut module_types_file)?;
let _ = fs::remove_file(entry);
}
}
// add module entry to hub.rs and move the module file into opencv/
write_has_module(&mut hub_rs, module)?;
write_module_include(&mut hub_rs, module)?;
let module_filename = format!("{module}.rs");
let module_src_file = OUT_DIR.join(&module_filename);
let mut module_rs = BufWriter::new(File::create(&target_module_dir.join(&module_filename))?);
// Need to wrap modules inside `mod { }` because they have top-level comments (//!) and those don't play well when
// module file is include!d (as opposed to connecting the module with `mod` from the parent module).
// The same doesn't apply to `sys` and `types` below because they don't contain top-level comments.
writeln!(module_rs, "pub mod {module} {{")?;
copy_indent(BufReader::new(File::open(&module_src_file)?), &mut module_rs, "\t")?;
add_manual(&mut module_rs, module)?;
writeln!(module_rs, "}}")?;
let _ = fs::remove_file(module_src_file);
// merge multiple *.type.rs files into a single types.rs
let mut header_written = false;
let mut type_files = files_with_extension(&OUT_DIR, "rs")?
.filter(|f| is_type_file(f, module))
.collect::<Vec<_>>();
type_files.sort_unstable();
for entry in type_files {
if entry.metadata().map(|meta| meta.len()).unwrap_or(0) > 0 {
if !header_written {
write_has_module(&mut types_rs, module)?;
writeln!(types_rs, "mod {module}_types {{")?;
writeln!(types_rs, "\tuse crate::{{mod_prelude::*, core, types, sys}};")?;
writeln!(types_rs)?;
header_written = true;
}
copy_indent(BufReader::new(File::open(&entry)?), &mut types_rs, "\t")?;
}
let _ = fs::remove_file(entry);
}
if header_written {
writeln!(types_rs, "}}")?;
write_has_module(&mut types_rs, module)?;
writeln!(types_rs, "pub use {module}_types::*;")?;
writeln!(types_rs)?;
}
// merge module-specific *.externs.rs and generated type-specific *.type.externs.rs into a single sys.rs
let externs_rs = OUT_DIR.join(format!("{module}.externs.rs"));
write_has_module(&mut sys_rs, module)?;
writeln!(sys_rs, "mod {module}_sys {{")?;
writeln!(sys_rs, "\tuse super::*;")?;
writeln!(sys_rs)?;
writeln!(sys_rs, "\textern \"C\" {{")?;
copy_indent(BufReader::new(File::open(&externs_rs)?), &mut sys_rs, "\t\t")?;
let _ = fs::remove_file(externs_rs);
let mut type_extern_files = files_with_extension(&OUT_DIR, "rs")?
.filter(|f| is_type_externs_file(f, module))
.collect::<Vec<_>>();
type_extern_files.sort_unstable();
for entry in type_extern_files {
if entry.metadata().map(|meta| meta.len()).unwrap_or(0) > 0 {
copy_indent(BufReader::new(File::open(&entry)?), &mut sys_rs, "\t\t")?;
}
let _ = fs::remove_file(entry);
}
writeln!(sys_rs, "\t}}")?;
writeln!(sys_rs, "}}")?;
write_has_module(&mut sys_rs, module)?;
writeln!(sys_rs, "pub use {module}_sys::*;")?;
writeln!(sys_rs)?;
}
writeln!(hub_rs, "pub mod types {{")?;
write_module_include(&mut hub_rs, "types")?;
writeln!(hub_rs, "}}")?;
writeln!(hub_rs, "#[doc(hidden)]")?;
writeln!(hub_rs, "pub mod sys {{")?;
write_module_include(&mut hub_rs, "sys")?;
writeln!(hub_rs, "}}")?;
add_manual(&mut types_rs, "types")?;
add_manual(&mut sys_rs, "sys")?;
// write hub_prelude that imports all module-specific preludes
writeln!(hub_rs, "pub mod hub_prelude {{")?;
for module in modules {
write!(hub_rs, "\t")?;
write_has_module(&mut hub_rs, module)?;
writeln!(hub_rs, "\tpub use super::{module}::prelude::*;")?;
}
writeln!(hub_rs, "}}")?;
eprintln!("=== Total binding collection time: {:?}", start.elapsed());
Ok(())
}
fn build_job_server() -> Result<Jobserver> {
unsafe { jobserver::Client::from_env() }
.and_then(|client| {
let own_token_released = client.release_raw().is_ok();
let available_jobs = client.available().unwrap_or(0);
if available_jobs > 0 {
eprintln!("=== Using environment job server with the amount of available jobs: {available_jobs}");
Some(Jobserver {
client,
reacquire_token_on_drop: own_token_released,
})
} else {
client.acquire_raw().expect("Can't reacquire build script thread token");
eprintln!(
"=== Environment jobserver reports {available_jobs} available jobs, or there was an error reading that value"
);
None
}
})
.or_else(|| {
let num_jobs = env::var("NUM_JOBS")
.ok()
.and_then(|jobs| jobs.parse().ok())
.unwrap_or(2)
.max(1);
eprintln!("=== Creating a new job server with num_jobs: {num_jobs}");
jobserver::Client::new(num_jobs).ok().map(|client| Jobserver {
client,
reacquire_token_on_drop: false,
})
})
.ok_or_else(|| "Can't create job server".into())
}
pub struct Jobserver {
client: jobserver::Client,
reacquire_token_on_drop: bool,
}
impl Drop for Jobserver {
fn drop(&mut self) {
if self.reacquire_token_on_drop {
self.client.acquire_raw().expect("Can't reacquire build script thread token");
}
}
}
impl Deref for Jobserver {
type Target = jobserver::Client;
fn deref(&self) -> &Self::Target {
&self.client
}
}
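The fallback branch of `build_job_server` above caps parallelism with a fixed number of job tokens. As a rough, self-contained illustration of that acquire/release token protocol (this is a simplified stand-in using a channel as the token pool, not the `jobserver` crate's actual GNU-make protocol; all names below are invented for the sketch):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// A toy token pool: `capacity` tokens circulate on a channel. A worker must
// receive a token before doing work and sends it back when finished, so at
// most `capacity` jobs run at once (mirroring jobserver acquire/release).
fn run_with_tokens(capacity: usize, jobs: usize) -> usize {
    let (tx, rx) = mpsc::channel();
    for _ in 0..capacity {
        tx.send(()).expect("seeding the token pool cannot fail");
    }
    let rx = Arc::new(Mutex::new(rx));
    let done = Arc::new(AtomicUsize::new(0));
    let mut handles = Vec::new();
    for _ in 0..jobs {
        let (tx, rx, done) = (tx.clone(), Arc::clone(&rx), Arc::clone(&done));
        handles.push(thread::spawn(move || {
            // Acquire a token (blocks until one is available).
            rx.lock().unwrap().recv().unwrap();
            done.fetch_add(1, Ordering::SeqCst); // the "job" itself
            // Release the token for the next waiting worker.
            tx.send(()).unwrap();
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }
    done.load(Ordering::SeqCst)
}
```

The `Jobserver` wrapper above adds one refinement this sketch omits: on drop it reacquires the build script's own implicit token when that token was released to the pool.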
|
flood_order.rs

/*
This tool is part of the WhiteboxTools geospatial analysis library.
Authors: Dr. John Lindsay
Created: 12/07/2017
Last Modified: 12/10/2018
License: MIT
*/
use whitebox_raster::*;
use whitebox_common::structures::Array2D;
use crate::tools::*;
use std::cmp::Ordering;
use std::collections::BinaryHeap;
use std::collections::VecDeque;
use std::env;
use std::f64;
use std::i32;
use std::io::{Error, ErrorKind};
use std::path;
/// This tool takes an input digital elevation model (DEM) and creates an output raster where every grid cell
/// contains the flood order of that cell within the DEM. The flood order is the sequence of grid cells that
/// are encountered during a search, starting from the raster grid edges and the lowest grid cell, moving inward
/// at increasing elevations. This is in fact similar to how the highly efficient Wang and Liu (2006) depression
/// filling algorithm and the Breach Depressions (Fast) tool operate. The output flood order raster contains the
/// sequential order, from lowest edge cell to the highest pixel in the DEM.
///
/// Like the `FillDepressions` tool, `FloodOrder` will read the entire DEM into memory. This may make the
/// algorithm ill suited to processing massive DEMs except where the user's computer has substantial memory
/// (RAM) resources.
///
/// # Reference
/// Wang, L., and Liu, H. (2006). An efficient method for identifying and filling surface depressions in digital
/// elevation models for hydrologic analysis and modelling. International Journal of Geographical Information Science,
/// 20(2), 193-213.
///
/// # See Also
/// `FillDepressions`
pub struct FloodOrder {
name: String,
description: String,
toolbox: String,
parameters: Vec<ToolParameter>,
example_usage: String,
}
impl FloodOrder {
pub fn new() -> FloodOrder {
// public constructor
let name = "FloodOrder".to_string();
let toolbox = "Hydrological Analysis".to_string();
let description = "Assigns each DEM grid cell its order in the sequence of inundations that are encountered during a search starting from the edges, moving inward at increasing elevations.".to_string();
let mut parameters = vec![];
parameters.push(ToolParameter {
name: "Input File".to_owned(),
flags: vec!["-i".to_owned(), "--dem".to_owned()],
description: "Input raster DEM file.".to_owned(),
parameter_type: ParameterType::ExistingFile(ParameterFileType::Raster),
default_value: None,
optional: false,
});
parameters.push(ToolParameter {
name: "Output File".to_owned(),
flags: vec!["-o".to_owned(), "--output".to_owned()],
description: "Output raster file.".to_owned(),
parameter_type: ParameterType::NewFile(ParameterFileType::Raster),
default_value: None,
optional: false,
});
let sep: String = path::MAIN_SEPARATOR.to_string();
let e = format!("{}", env::current_exe().unwrap().display());
let mut parent = env::current_exe().unwrap();
parent.pop();
let p = format!("{}", parent.display());
let mut short_exe = e
.replace(&p, "")
.replace(".exe", "")
.replace(".", "")
.replace(&sep, "");
if e.contains(".exe") {
short_exe += ".exe";
}
let usage = format!(
">>.*{0} -r={1} -v --wd=\"*path*to*data*\" --dem=DEM.tif -o=output.tif",
short_exe, name
)
.replace("*", &sep);
FloodOrder {
name: name,
description: description,
toolbox: toolbox,
parameters: parameters,
example_usage: usage,
}
}
}
impl WhiteboxTool for FloodOrder {
fn get_source_file(&self) -> String {
String::from(file!())
}
fn get_tool_name(&self) -> String {
self.name.clone()
}
fn get_tool_description(&self) -> String {
self.description.clone()
}
fn get_tool_parameters(&self) -> String {
match serde_json::to_string(&self.parameters) {
Ok(json_str) => return format!("{{\"parameters\":{}}}", json_str),
Err(err) => return format!("{:?}", err),
}
}
fn get_example_usage(&self) -> String {
self.example_usage.clone()
}
fn get_toolbox(&self) -> String {
self.toolbox.clone()
}
fn run<'a>(
&self,
args: Vec<String>,
working_directory: &'a str,
verbose: bool,
) -> Result<(), Error> {
let mut input_file = String::new();
let mut output_file = String::new();
if args.len() == 0 {
return Err(Error::new(
ErrorKind::InvalidInput,
"Tool run with no parameters.",
));
}
for i in 0..args.len() {
let mut arg = args[i].replace("\"", "");
arg = arg.replace("\'", "");
let cmd = arg.split("="); // in case an equals sign was used
let vec = cmd.collect::<Vec<&str>>();
let mut keyval = false;
if vec.len() > 1 {
keyval = true;
}
if vec[0].to_lowercase() == "-i"
|| vec[0].to_lowercase() == "--input"
|| vec[0].to_lowercase() == "--dem"
{
if keyval {
input_file = vec[1].to_string();
} else {
input_file = args[i + 1].to_string();
}
} else if vec[0].to_lowercase() == "-o" || vec[0].to_lowercase() == "--output" {
if keyval {
output_file = vec[1].to_string();
} else {
output_file = args[i + 1].to_string();
}
}
}
if verbose {
let tool_name = self.get_tool_name();
let welcome_len = format!("* Welcome to {} *", tool_name).len().max(28);
// 28 = length of the 'Powered by' statement.
println!("{}", "*".repeat(welcome_len));
println!("* Welcome to {} {}*", tool_name, " ".repeat(welcome_len - 15 - tool_name.len()));
println!("* Powered by WhiteboxTools {}*", " ".repeat(welcome_len - 28));
println!("* www.whiteboxgeo.com {}*", " ".repeat(welcome_len - 23));
println!("{}", "*".repeat(welcome_len));
}
let sep: String = path::MAIN_SEPARATOR.to_string();
let mut progress: usize;
let mut old_progress: usize = 1;
if !input_file.contains(&sep) && !input_file.contains("/") {
input_file = format!("{}{}", working_directory, input_file);
}
if !output_file.contains(&sep) && !output_file.contains("/") {
output_file = format!("{}{}", working_directory, output_file);
}
if verbose {
println!("Reading data...")
};
let input = Raster::new(&input_file, "r")?;
let start = Instant::now();
let rows = input.configs.rows as isize;
let columns = input.configs.columns as isize;
let num_cells = rows * columns;
let nodata = input.configs.nodata;
// let min_val = input.configs.minimum;
// let elev_digits = ((input.configs.maximum - min_val) as i64).to_string().len();
// let elev_multiplier = 10.0_f64.powi((7 - elev_digits) as i32);
// let mut small_num = 0f64; //1.0 / elev_multiplier as f64;
let background_val = (i32::min_value() + 1) as f64;
let mut filled_dem: Array2D<f64> = Array2D::new(rows, columns, background_val, nodata)?;
let mut output = Raster::initialize_using_file(&output_file, &input);
/*
Find the data edges. This is complicated by the fact that DEMs frequently
have nodata edges, whereby the DEM does not occupy the full extent of
the raster. One approach to doing this would be simply to scan the
raster, looking for cells that neighbour nodata values. However, this
assumes that there are no interior nodata holes in the dataset. Instead,
the approach used here is to perform a region-growing operation, looking
for nodata values along the raster's edges.
*/
let mut queue: VecDeque<(isize, isize)> =
VecDeque::with_capacity((rows * columns) as usize);
for row in 0..rows {
/*
Note that this is only possible because Whitebox rasters
allow you to address cells beyond the raster extent but
return the nodata value for these regions.
*/
queue.push_back((row, -1));
queue.push_back((row, columns));
}
for col in 0..columns {
queue.push_back((-1, col));
queue.push_back((rows, col));
}
/*
minheap is the priority queue. Note that I've tested using integer-based
priority values, by multiplying the elevations, but this didn't result
in a significant performance gain over the use of f64s.
*/
let mut minheap = BinaryHeap::with_capacity((rows * columns) as usize);
let mut num_solved_cells = 0;
let mut zin_n: f64; // value of neighbour of row, col in input raster
let mut zout: f64; // value of row, col in output raster
let mut zout_n: f64; // value of neighbour of row, col in output raster
let dx = [1, 1, 1, 0, -1, -1, -1, 0];
let dy = [-1, 0, 1, 1, 1, 0, -1, -1];
let (mut row, mut col): (isize, isize);
let (mut row_n, mut col_n): (isize, isize);
while !queue.is_empty() {
let cell = queue.pop_front().unwrap();
row = cell.0;
col = cell.1;
for n in 0..8 {
row_n = row + dy[n];
col_n = col + dx[n];
zin_n = input[(row_n, col_n)];
zout_n = filled_dem[(row_n, col_n)];
if zout_n == background_val {
if zin_n == nodata {
filled_dem[(row_n, col_n)] = nodata;
output[(row_n, col_n)] = nodata;
queue.push_back((row_n, col_n));
} else {
filled_dem[(row_n, col_n)] = zin_n;
// Push it onto the priority queue for the priority flood operation
minheap.push(GridCell {
row: row_n,
column: col_n,
priority: zin_n,
});
}
num_solved_cells += 1;
}
}
if verbose {
progress = (100.0_f64 * num_solved_cells as f64 / (num_cells - 1) as f64) as usize;
if progress != old_progress {
println!("Progress: {}%", progress);
old_progress = progress;
}
}
}
// Perform the priority flood operation.
let mut order_val = 1f64;
while !minheap.is_empty() {
let cell = minheap.pop().expect("Error during pop operation.");
row = cell.row;
col = cell.column;
zout = filled_dem[(row, col)];
output[(row, col)] = order_val;
order_val += 1f64;
for n in 0..8 {
row_n = row + dy[n];
col_n = col + dx[n];
zout_n = filled_dem[(row_n, col_n)];
if zout_n == background_val {
zin_n = input[(row_n, col_n)];
if zin_n != nodata {
if zin_n < zout {
zin_n = zout;
} // We're in a depression. Raise the elevation.
filled_dem[(row_n, col_n)] = zin_n;
minheap.push(GridCell {
row: row_n,
column: col_n,
priority: zin_n,
});
} else {
// Interior nodata cells are still treated as nodata and are not filled.
output[(row_n, col_n)] = nodata;
num_solved_cells += 1;
}
}
}
if verbose {
num_solved_cells += 1;
progress = (100.0_f64 * num_solved_cells as f64 / (num_cells - 1) as f64) as usize;
if progress != old_progress {
println!("Progress: {}%", progress);
old_progress = progress;
}
}
}
let elapsed_time = get_formatted_elapsed_time(start);
output.add_metadata_entry(format!(
"Created by whitebox_tools\' {} tool",
self.get_tool_name()
));
output.add_metadata_entry(format!("Input file: {}", input_file));
output.add_metadata_entry(format!("Elapsed Time (excluding I/O): {}", elapsed_time));
if verbose {
println!("Saving data...")
};
let _ = match output.write() {
Ok(_) => {
if verbose {
println!("Output file written")
}
}
Err(e) => return Err(e),
};
if verbose {
println!(
"{}",
&format!("Elapsed Time (excluding I/O): {}", elapsed_time)
);
}
Ok(())
}
}
#[derive(PartialEq, Debug)]
struct GridCell {
row: isize,
column: isize,
// priority: usize,
priority: f64,
}
impl Eq for GridCell {}
impl PartialOrd for GridCell {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
// Some(other.priority.cmp(&self.priority))
other.priority.partial_cmp(&self.priority)
}
}
impl Ord for GridCell {
fn cmp(&self, other: &GridCell) -> Ordering {
// Must agree with PartialOrd above, which already reverses the comparison
// so that BinaryHeap behaves as a min-heap on `priority`. Reversing it a
// second time here would violate the Ord/PartialOrd consistency contract
// and make the heap pop the highest elevation first.
self.partial_cmp(other).unwrap()
}
}
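The `GridCell` ordering above reverses comparisons so that Rust's max-oriented `BinaryHeap` pops the lowest `priority` (elevation) first, which is what drives the priority-flood traversal. A minimal self-contained sketch of the same trick follows; `MinF64` and `pop_order` are illustrative names, not part of the tool:

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

// Wrapper whose ordering is reversed, turning BinaryHeap into a min-heap.
#[derive(PartialEq)]
struct MinF64(f64);

impl Eq for MinF64 {}

impl PartialOrd for MinF64 {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        // Compare the other way round: the smallest value becomes the "max".
        other.0.partial_cmp(&self.0)
    }
}

impl Ord for MinF64 {
    fn cmp(&self, other: &Self) -> Ordering {
        // Must agree with PartialOrd; NaN priorities are not supported.
        self.partial_cmp(other).expect("NaN is not a valid priority")
    }
}

// Pops values in ascending order, like the flood loop pops grid cells
// from lowest to highest elevation.
fn pop_order(values: &[f64]) -> Vec<f64> {
    let mut heap: BinaryHeap<MinF64> = values.iter().copied().map(MinF64).collect();
    let mut out = Vec::new();
    while let Some(MinF64(v)) = heap.pop() {
        out.push(v);
    }
    out
}
```

An alternative design, avoiding the wrapper entirely, is `BinaryHeap<std::cmp::Reverse<T>>`, but `Reverse` requires `T: Ord`, which bare `f64` does not satisfy; that is why tools like this one define a small struct with a hand-written ordering.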
| partial_cmp | identifier_name |
flood_order.rs | /*
This tool is part of the WhiteboxTools geospatial analysis library.
Authors: Dr. John Lindsay
Created: 12/07/2017
Last Modified: 12/10/2018
License: MIT
*/
use whitebox_raster::*;
use whitebox_common::structures::Array2D;
use crate::tools::*;
use std::cmp::Ordering;
use std::collections::BinaryHeap;
use std::collections::VecDeque;
use std::env;
use std::f64;
use std::i32;
use std::io::{Error, ErrorKind};
use std::path;
/// This tool takes an input digital elevation model (DEM) and creates an output raster where every grid cell
/// contains the flood order of that cell within the DEM. The flood order is the sequence of grid cells that
/// are encountered during a search, starting from the raster grid edges and the lowest grid cell, moving inward
/// at increasing elevations. This is in fact similar to how the highly efficient Wang and Liu (2006) depression
/// filling algorithm and the Breach Depressions (Fast) operates. The output flood order raster contains the
/// sequential order, from lowest edge cell to the highest pixel in the DEM.
///
/// Like the `FillDepressions` tool, `FloodOrder` will read the entire DEM into memory. This may make the
/// algorithm ill suited to processing massive DEMs except where the user's computer has substantial memory
/// (RAM) resources.
///
/// # Reference
/// Wang, L., and Liu, H. (2006). An efficient method for identifying and filling surface depressions in digital
/// elevation models for hydrologic analysis and modelling. International Journal of Geographical Information Science,
/// 20(2), 193-213.
///
/// # See Also
/// `FillDepressions`
pub struct FloodOrder {
name: String,
description: String,
toolbox: String,
parameters: Vec<ToolParameter>,
example_usage: String,
}
impl FloodOrder {
pub fn new() -> FloodOrder {
// public constructor
let name = "FloodOrder".to_string();
let toolbox = "Hydrological Analysis".to_string();
let description = "Assigns each DEM grid cell its order in the sequence of inundations that are encountered during a search starting from the edges, moving inward at increasing elevations.".to_string();
let mut parameters = vec![];
parameters.push(ToolParameter {
name: "Input File".to_owned(),
flags: vec!["-i".to_owned(), "--dem".to_owned()],
description: "Input raster DEM file.".to_owned(),
parameter_type: ParameterType::ExistingFile(ParameterFileType::Raster),
default_value: None,
optional: false,
});
parameters.push(ToolParameter {
name: "Output File".to_owned(),
flags: vec!["-o".to_owned(), "--output".to_owned()],
description: "Output raster file.".to_owned(),
parameter_type: ParameterType::NewFile(ParameterFileType::Raster),
default_value: None,
optional: false,
});
let sep: String = path::MAIN_SEPARATOR.to_string();
let e = format!("{}", env::current_exe().unwrap().display());
let mut parent = env::current_exe().unwrap();
parent.pop();
let p = format!("{}", parent.display());
let mut short_exe = e
.replace(&p, "")
.replace(".exe", "")
.replace(".", "")
.replace(&sep, "");
if e.contains(".exe") {
short_exe += ".exe";
}
let usage = format!(
">>.*{0} -r={1} -v --wd=\"*path*to*data*\" --dem=DEM.tif -o=output.tif",
short_exe, name
)
.replace("*", &sep);
FloodOrder {
name: name,
description: description,
toolbox: toolbox,
parameters: parameters,
example_usage: usage,
}
}
}
impl WhiteboxTool for FloodOrder {
fn get_source_file(&self) -> String {
String::from(file!())
}
fn get_tool_name(&self) -> String {
self.name.clone()
}
fn get_tool_description(&self) -> String {
self.description.clone()
}
fn get_tool_parameters(&self) -> String {
match serde_json::to_string(&self.parameters) {
Ok(json_str) => return format!("{{\"parameters\":{}}}", json_str),
Err(err) => return format!("{:?}", err),
}
}
fn get_example_usage(&self) -> String {
self.example_usage.clone()
}
fn get_toolbox(&self) -> String {
self.toolbox.clone()
}
fn run<'a>(
&self,
args: Vec<String>,
working_directory: &'a str,
verbose: bool,
) -> Result<(), Error> {
let mut input_file = String::new();
let mut output_file = String::new();
if args.len() == 0 {
return Err(Error::new(
ErrorKind::InvalidInput,
"Tool run with no parameters.",
));
}
for i in 0..args.len() {
let mut arg = args[i].replace("\"", "");
arg = arg.replace("\'", "");
let cmd = arg.split("="); // in case an equals sign was used
let vec = cmd.collect::<Vec<&str>>();
let mut keyval = false;
if vec.len() > 1 {
keyval = true;
}
if vec[0].to_lowercase() == "-i"
|| vec[0].to_lowercase() == "--input"
|| vec[0].to_lowercase() == "--dem"
{
if keyval {
input_file = vec[1].to_string();
} else {
input_file = args[i + 1].to_string();
}
} else if vec[0].to_lowercase() == "-o" || vec[0].to_lowercase() == "--output" {
if keyval {
output_file = vec[1].to_string();
} else {
output_file = args[i + 1].to_string();
}
}
}
if verbose {
let tool_name = self.get_tool_name();
let welcome_len = format!("* Welcome to {} *", tool_name).len().max(28);
// 28 = length of the 'Powered by' by statement.
println!("{}", "*".repeat(welcome_len));
println!("* Welcome to {} {}*", tool_name, " ".repeat(welcome_len - 15 - tool_name.len()));
println!("* Powered by WhiteboxTools {}*", " ".repeat(welcome_len - 28));
println!("* www.whiteboxgeo.com {}*", " ".repeat(welcome_len - 23));
println!("{}", "*".repeat(welcome_len));
}
let sep: String = path::MAIN_SEPARATOR.to_string();
let mut progress: usize;
let mut old_progress: usize = 1;
if !input_file.contains(&sep) && !input_file.contains("/") {
input_file = format!("{}{}", working_directory, input_file);
}
if !output_file.contains(&sep) && !output_file.contains("/") {
output_file = format!("{}{}", working_directory, output_file);
}
if verbose {
println!("Reading data...")
};
let input = Raster::new(&input_file, "r")?;
let start = Instant::now();
let rows = input.configs.rows as isize;
let columns = input.configs.columns as isize;
let num_cells = rows * columns;
let nodata = input.configs.nodata;
// let min_val = input.configs.minimum;
// let elev_digits = ((input.configs.maximum - min_val) as i64).to_string().len();
// let elev_multiplier = 10.0_f64.powi((7 - elev_digits) as i32);
// let mut small_num = 0f64; //1.0 / elev_multiplier as f64;
let background_val = (i32::min_value() + 1) as f64;
let mut filled_dem: Array2D<f64> = Array2D::new(rows, columns, background_val, nodata)?;
let mut output = Raster::initialize_using_file(&output_file, &input);
/*
Find the data edges. This is complicated by the fact that DEMs frequently
have nodata edges, whereby the DEM does not occupy the full extent of
the raster. One approach to doing this would be simply to scan the
raster, looking for cells that neighbour nodata values. However, this
assumes that there are no interior nodata holes in the dataset. Instead,
the approach used here is to perform a region-growing operation, looking
for nodata values along the raster's edges.
*/
let mut queue: VecDeque<(isize, isize)> =
VecDeque::with_capacity((rows * columns) as usize);
for row in 0..rows {
/*
Note that this is only possible because Whitebox rasters
allow you to address cells beyond the raster extent but
return the nodata value for these regions.
*/
queue.push_back((row, -1));
queue.push_back((row, columns));
}
for col in 0..columns {
queue.push_back((-1, col));
queue.push_back((rows, col));
}
/*
minheap is the priority queue. Note that I've tested using integer-based
priority values, by multiplying the elevations, but this didn't result
in a significant performance gain over the use of f64s.
*/
let mut minheap = BinaryHeap::with_capacity((rows * columns) as usize);
let mut num_solved_cells = 0;
let mut zin_n: f64; // value of neighbour of row, col in input raster
let mut zout: f64; // value of row, col in output raster
let mut zout_n: f64; // value of neighbour of row, col in output raster
let dx = [1, 1, 1, 0, -1, -1, -1, 0];
let dy = [-1, 0, 1, 1, 1, 0, -1, -1];
let (mut row, mut col): (isize, isize);
let (mut row_n, mut col_n): (isize, isize);
while !queue.is_empty() {
let cell = queue.pop_front().unwrap();
row = cell.0;
col = cell.1;
for n in 0..8 {
row_n = row + dy[n];
col_n = col + dx[n];
zin_n = input[(row_n, col_n)];
zout_n = filled_dem[(row_n, col_n)];
if zout_n == background_val {
if zin_n == nodata {
filled_dem[(row_n, col_n)] = nodata;
output[(row_n, col_n)] = nodata;
queue.push_back((row_n, col_n));
} else {
filled_dem[(row_n, col_n)] = zin_n;
// Push it onto the priority queue for the priority flood operation
minheap.push(GridCell {
row: row_n,
column: col_n,
priority: zin_n,
});
}
num_solved_cells += 1;
}
}
if verbose {
progress = (100.0_f64 * num_solved_cells as f64 / (num_cells - 1) as f64) as usize;
if progress != old_progress {
println!("progress: {}%", progress);
old_progress = progress;
}
}
}
// Perform the priority flood operation.
let mut order_val = 1f64;
while !minheap.is_empty() {
let cell = minheap.pop().expect("Error during pop operation.");
row = cell.row;
col = cell.column;
zout = filled_dem[(row, col)];
output[(row, col)] = order_val;
order_val += 1f64;
for n in 0..8 {
row_n = row + dy[n];
col_n = col + dx[n];
zout_n = filled_dem[(row_n, col_n)];
if zout_n == background_val {
zin_n = input[(row_n, col_n)];
if zin_n != nodata {
if zin_n < zout {
zin_n = zout;
} // We're in a depression. Raise the elevation.
filled_dem[(row_n, col_n)] = zin_n;
minheap.push(GridCell {
row: row_n,
column: col_n,
priority: zin_n,
});
} else {
// Interior nodata cells are still treated as nodata and are not filled.
output[(row_n, col_n)] = nodata;
num_solved_cells += 1;
}
}
}
if verbose {
num_solved_cells += 1;
progress = (100.0_f64 * num_solved_cells as f64 / (num_cells - 1) as f64) as usize;
if progress != old_progress {
println!("Progress: {}%", progress);
old_progress = progress;
}
}
}
let elapsed_time = get_formatted_elapsed_time(start);
output.add_metadata_entry(format!(
"Created by whitebox_tools' {} tool",
self.get_tool_name()
));
output.add_metadata_entry(format!("Input file: {}", input_file));
output.add_metadata_entry(format!("Elapsed Time (excluding I/O): {}", elapsed_time));
if verbose {
println!("Saving data...")
};
match output.write() {
Ok(_) => {
if verbose {
println!("Output file written")
}
}
Err(e) => return Err(e),
};
if verbose {
println!(
"{}",
&format!("Elapsed Time (excluding I/O): {}", elapsed_time)
);
}
Ok(())
}
}
#[derive(PartialEq, Debug)]
struct GridCell {
row: isize,
column: isize,
// priority: usize,
priority: f64,
}
impl Eq for GridCell {}
impl PartialOrd for GridCell {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> |
}
impl Ord for GridCell {
fn cmp(&self, other: &GridCell) -> Ordering {
// other.priority.cmp(&self.priority)
let ord = self.partial_cmp(other).unwrap();
match ord {
Ordering::Greater => Ordering::Less,
Ordering::Less => Ordering::Greater,
Ordering::Equal => ord,
}
}
}
| {
// Some(other.priority.cmp(&self.priority))
other.priority.partial_cmp(&self.priority)
} | identifier_body |
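The `GridCell` impls above invert the comparison (comparing `other` against `self`) so that Rust's max-heap `BinaryHeap` behaves as the min-heap the priority-flood algorithm needs, and they bridge `f64` (which is only `PartialOrd`) into `Ord`. A minimal, self-contained sketch of the same pattern — the `Cell` and `pop_order` names are illustrative, not from the tool, and the `unwrap` is safe only because NaN priorities are never pushed:

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

#[derive(PartialEq, Debug)]
struct Cell {
    priority: f64,
}

impl Eq for Cell {}

impl PartialOrd for Cell {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        // Compare other to self (not self to other) to invert the ordering,
        // turning the max-heap into a min-heap.
        other.priority.partial_cmp(&self.priority)
    }
}

impl Ord for Cell {
    fn cmp(&self, other: &Self) -> Ordering {
        // Safe only while NaN priorities are never inserted.
        self.partial_cmp(other).unwrap()
    }
}

fn pop_order(priorities: &[f64]) -> Vec<f64> {
    let mut heap = BinaryHeap::new();
    for &p in priorities {
        heap.push(Cell { priority: p });
    }
    let mut out = Vec::new();
    while let Some(cell) = heap.pop() {
        out.push(cell.priority);
    }
    out
}

fn main() {
    // Lowest elevation pops first, as the priority-flood operation requires.
    assert_eq!(pop_order(&[3.5, 1.2, 2.8]), vec![1.2, 2.8, 3.5]);
    println!("ok");
}
```

An alternative design is wrapping priorities in `std::cmp::Reverse` together with a total-order float wrapper; the inverted-`cmp` form above keeps the struct fields plain `f64`.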
window.rs | use crate::prelude::*;
use crossterm::style::{Attribute, Print, Styler};
use crossterm::{cursor, terminal, ExecutableCommand, QueueableCommand};
use std::io::{stdout, Write};
const CARGO_PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
pub struct Window {
started_at: StartedAt,
lines: u16,
cols: u16,
options: Arc<Mutex<Options>>,
alltime_stats: Arc<Mutex<Stats>>,
ring_buffer: Arc<Mutex<RingBuffer>>,
}
impl Window {
pub fn new(
options: Arc<Mutex<Options>>,
alltime_stats: Arc<Mutex<Stats>>,
ring_buffer: Arc<Mutex<RingBuffer>>,
) -> Self {
let (cols, lines) = crossterm::terminal::size().unwrap();
let now = std::time::Instant::now();
Window {
started_at: StartedAt(now),
lines,
cols,
options,
alltime_stats,
ring_buffer,
}
}
pub fn run(&mut self) -> Result<(), Error> {
// temporary lock on options to get interval
let options = self.options.lock().unwrap();
let interval = options.interval;
drop(options);
// support f64 seconds by multiplying then using from_millis
let interval = std::time::Duration::from_millis((interval * 1000.0) as u64);
crossterm::terminal::enable_raw_mode()?;
// stdout().execute(crossterm::event::EnableMouseCapture)?
stdout().execute(cursor::Hide)?;
stdout().execute(terminal::EnterAlternateScreen)?;
stdout().execute(terminal::SetTitle("apachetop"))?;
loop {
self.redraw()?;
if crossterm::event::poll(interval)? && !self.handle_event()? {
break;
}
}
crossterm::terminal::disable_raw_mode()?;
stdout().execute(terminal::LeaveAlternateScreen)?;
stdout().execute(cursor::Show)?;
// stdout().execute(crossterm::event::DisableMouseCapture)?;
Ok(())
}
fn redraw(&mut self) -> Result<(), Error> {
let mut stdout = stdout();
stdout
.queue(terminal::Clear(terminal::ClearType::All))?
.queue(cursor::MoveTo(0, 0))?
.queue(Print(format!("apachetop {}", CARGO_PKG_VERSION)))?
.queue(cursor::MoveTo(self.cols / 2, 0))?
.queue(Print(self.started_at.to_string()))?
.queue(cursor::MoveTo(self.cols - 8, 0))?
.queue(Print(chrono::Local::now().format("%H:%M:%S").to_string()))?;
{
let alltime_stats = self.alltime_stats.lock().unwrap();
let elapsed = self.started_at.elapsed().as_secs() as f64;
stdout
.queue(cursor::MoveTo(0, 1))?
.queue(Print(self.primary_stats_line(
&alltime_stats,
elapsed,
true,
)))?
.queue(cursor::MoveTo(0, 2))?
.queue(Print(self.per_code_line(&alltime_stats)))?;
} // mutex on alltime_stats
{
let mut ring_buffer = self.ring_buffer.lock().unwrap();
// TODO: better in another thread, not at display time?
ring_buffer.cleanup()?;
let elapsed = match ring_buffer.first() {
Some(f) => {
let first = chrono::DateTime::<chrono::Utc>::from(f.time);
(chrono::Utc::now() - first).num_seconds() as f64
}
None => 1.0, // avoid divide by zero later
};
stdout
.queue(cursor::MoveTo(0, 3))?
.queue(Print(self.primary_stats_line(
&ring_buffer.stats,
elapsed,
false,
)))?
.queue(cursor::MoveTo(0, 4))?
.queue(Print(self.per_code_line(&ring_buffer.stats)))?;
{
let options = self.options.lock().unwrap();
stdout.queue(cursor::MoveTo(0, 6))?.queue(Print(
format!(
"{:width$}",
&format!(
" REQS REQS/S SIZE SZ/S {}",
options.group.to_string()
),
width = self.cols as usize
)
.negative(),
))?;
} // read lock on options
if let Some(grouped) = &ring_buffer.grouped {
use lazysort::SortedBy;
// convert HashMap<GroupKey, RingBuffer> to Vec<(GroupKey, RingBuffer)>,
// sort it by the RingBuffers, then lazy-sort the first n lines for display.
for (key, ring_buffer) in grouped
.iter()
.filter(|(_, v)| !v.buffer.is_empty()) // filter out empty buffers to save work
.collect::<Vec<(&GroupKey, &RingBuffer)>>()
.iter()
.sorted_by(|a, b| b.1.cmp(&a.1)) // see impl Ord for RingBuffer
.take((self.lines - 7/* lines used for header */) as usize)
{
stdout
.queue(cursor::MoveToNextLine(1))?
.queue(Print(self.table_line(key, ring_buffer, elapsed)))?;
}
}
} // mutex on ring_buffer
stdout.flush()?;
Ok(())
}
fn | (&mut self) -> Result<bool, Error> {
use crossterm::event::Event::{Key, Mouse, Resize};
use crossterm::event::KeyCode::Char;
use crossterm::event::{KeyEvent, KeyModifiers};
match crossterm::event::read()? {
Key(KeyEvent {
code: Char('q'),..
})
| Key(KeyEvent {
modifiers: KeyModifiers::CONTROL,
code: Char('c'),
}) => return Ok(false),
Key(KeyEvent {
code: Char('o'),..
}) => {
self.toggle_sort();
}
Key(KeyEvent {
code: Char('g'),..
}) => {
self.toggle_group();
}
Key(event) => info!("{:?}", event),
Mouse(event) => info!("{:?}", event),
Resize(cols, lines) => {
self.lines = lines;
self.cols = cols;
}
}
Ok(true)
}
fn toggle_sort(&self) {
self.options.lock().unwrap().toggle_sort();
}
fn toggle_group(&self) {
let mut o = self.options.lock().unwrap();
let group_by = o.toggle_group();
drop(o);
self.ring_buffer.lock().unwrap().regroup(group_by);
}
fn table_line(&self, key: &GroupKey, rr: &RingBuffer, elapsed: f64) -> String {
let reqs = rr.stats.global.requests as f64;
format!(
"{reqs:6} {reqs_per_sec:6.2} {hb:>6} {hb_per_sec:>6} {key:width$}",
width = (self.cols - 30) as usize,
reqs = reqs,
reqs_per_sec = reqs / elapsed,
hb = Self::humansize(rr.stats.global.bytes as f64),
hb_per_sec = Self::humansize(rr.stats.global.bytes as f64 / elapsed),
key = key
)
}
// All: 638924 reqs ( 182.65/sec) 3433539K ( 981.6K/sec) ( 5.4K/req)
fn primary_stats_line(&self, stats: &Stats, elapsed: f64, alltime: bool) -> String {
let reqs_non_zero = std::cmp::max(stats.global.requests, 1) as f64;
let reqs = stats.global.requests as f64;
let header = if alltime { "All:" } else { "R:" };
format!(
"{header:5} {bold}{reqs:>space$}{reset} ({reqs_per_sec:6.2}/sec) {bold}{hb:>space$}{reset} ({hb_per_sec}/sec) {hb_per_req}/req",
bold = Attribute::Bold,
reset = Attribute::Reset,
space = ((self.cols - 50) / 2) as usize,
header = header,
reqs = reqs,
reqs_per_sec = reqs / elapsed,
hb = Self::humansize(stats.global.bytes as f64),
hb_per_sec = Self::humansize(stats.global.bytes as f64 / elapsed),
hb_per_req = Self::humansize((stats.global.bytes as f64) / reqs_non_zero)
)
}
// 2xx: 455415 (71.3%) 3xx: 175745 (27.5%) 4xx: 7746 ( 1.2%) 5xx: 10 ( 0.0%)
fn per_code_line(&self, stats: &Stats) -> String {
let stats_2 = &stats.by_status_code[2];
let stats_3 = &stats.by_status_code[3];
let stats_4 = &stats.by_status_code[4];
let stats_5 = &stats.by_status_code[5];
// closure to reduce some duplication for some munging below
let c = |rb_stats: &crate::stats::Counters| -> (f64, usize) {
// avoid divide by zero if there's no requests yet
let pct = if stats.global.requests > 0 {
100.0 * (rb_stats.requests as f64 / stats.global.requests as f64)
} else {
0.0
};
// intelligent dp detection: eg 2.34%, 10.5%, 100%
let dp = if (pct - 100.0).abs() < f64::EPSILON {
0
} else if pct < 10.0 {
2
} else {
1
};
(pct, dp)
};
let (code_2_pct, code_2_dp) = c(stats_2);
let (code_3_pct, code_3_dp) = c(stats_3);
let (code_4_pct, code_4_dp) = c(stats_4);
let (code_5_pct, code_5_dp) = c(stats_5);
format!(
"2xx: {code_2:space$} ({code_2_pct:4.code_2_dp$}%) 3xx: {code_3:space$} ({code_3_pct:4.code_3_dp$}%) 4xx: {code_4:space$} ({code_4_pct:4.code_4_dp$}%) 5xx: {code_5:space$} ({code_5_pct:4.code_5_dp$}%)",
space = ((self.cols - 55) / 4) as usize,
code_2 = stats_2.requests,
code_2_dp = code_2_dp,
code_2_pct = code_2_pct,
code_3 = stats_3.requests,
code_3_dp = code_3_dp,
code_3_pct = code_3_pct,
code_4 = stats_4.requests,
code_4_dp = code_4_dp,
code_4_pct = code_4_pct,
code_5 = stats_5.requests,
code_5_dp = code_5_dp,
code_5_pct = code_5_pct,
)
}
fn humansize(bytes: f64) -> String {
if bytes > 1073741824.0 {
format!("{:6.2}G", (bytes / 1073741824.0))
} else if bytes > 1048576.0 {
format!("{:6.2}M", (bytes / 1048576.00))
} else if bytes > 1024.0 {
format!("{:6.2}K", (bytes / 1024.0))
} else {
format!("{:6.0}B", bytes)
}
}
}
struct StartedAt(std::time::Instant);
impl StartedAt {
fn elapsed(&self) -> std::time::Duration {
self.0.elapsed()
}
}
impl std::fmt::Display for StartedAt {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let duration = self.0.elapsed().as_secs();
let hours = duration / 3600;
let minutes = duration % 3600 / 60;
let seconds = duration % 60;
write!(f, "runtime: ")?;
if hours > 0 {
write!(f, "{}h ", hours)?;
}
if hours > 0 || minutes > 0 {
write!(f, "{}m ", minutes)?;
}
write!(f, "{}s", seconds)
}
}
| handle_event | identifier_name |
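The `humansize()` helper above picks a unit by simple byte thresholds and pads with `{:6.2}`/`{:6.0}` so the columns line up. A free-standing sketch of the same function (extracted from the `Window` impl; thresholds and format widths are taken directly from the code):

```rust
// Format a byte count as a fixed-width human-readable string,
// matching the thresholds used by Window::humansize above.
fn humansize(bytes: f64) -> String {
    if bytes > 1_073_741_824.0 {
        format!("{:6.2}G", bytes / 1_073_741_824.0)
    } else if bytes > 1_048_576.0 {
        format!("{:6.2}M", bytes / 1_048_576.0)
    } else if bytes > 1024.0 {
        format!("{:6.2}K", bytes / 1024.0)
    } else {
        format!("{:6.0}B", bytes)
    }
}

fn main() {
    assert_eq!(humansize(512.0), "   512B");
    assert_eq!(humansize(2048.0), "  2.00K");
    assert_eq!(humansize(3.0 * 1_048_576.0), "  3.00M");
    println!("ok");
}
```

Note the width specifier pads the numeric part to six characters before the unit letter, which is what keeps the stats columns aligned on screen.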
window.rs | use crate::prelude::*;
use crossterm::style::{Attribute, Print, Styler};
use crossterm::{cursor, terminal, ExecutableCommand, QueueableCommand};
use std::io::{stdout, Write};
const CARGO_PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
pub struct Window {
started_at: StartedAt,
lines: u16,
cols: u16,
options: Arc<Mutex<Options>>,
alltime_stats: Arc<Mutex<Stats>>,
ring_buffer: Arc<Mutex<RingBuffer>>,
}
impl Window {
pub fn new(
options: Arc<Mutex<Options>>,
alltime_stats: Arc<Mutex<Stats>>,
ring_buffer: Arc<Mutex<RingBuffer>>,
) -> Self {
let (cols, lines) = crossterm::terminal::size().unwrap();
let now = std::time::Instant::now();
Window {
started_at: StartedAt(now),
lines,
cols,
options,
alltime_stats,
ring_buffer,
}
}
pub fn run(&mut self) -> Result<(), Error> {
// temporary lock on options to get interval
let options = self.options.lock().unwrap();
let interval = options.interval;
drop(options);
// support f64 seconds by multiplying then using from_millis
let interval = std::time::Duration::from_millis((interval * 1000.0) as u64);
crossterm::terminal::enable_raw_mode()?;
// stdout().execute(crossterm::event::EnableMouseCapture)?
stdout().execute(cursor::Hide)?;
stdout().execute(terminal::EnterAlternateScreen)?;
stdout().execute(terminal::SetTitle("apachetop"))?;
loop {
self.redraw()?;
if crossterm::event::poll(interval)? && !self.handle_event()? {
break;
}
}
crossterm::terminal::disable_raw_mode()?;
stdout().execute(terminal::LeaveAlternateScreen)?;
stdout().execute(cursor::Show)?;
// stdout().execute(crossterm::event::DisableMouseCapture)?;
Ok(())
}
fn redraw(&mut self) -> Result<(), Error> | elapsed,
true,
)))?
.queue(cursor::MoveTo(0, 2))?
.queue(Print(self.per_code_line(&alltime_stats)))?;
} // mutex on alltime_stats
{
let mut ring_buffer = self.ring_buffer.lock().unwrap();
// TODO: better in another thread, not at display time?
ring_buffer.cleanup()?;
let elapsed = match ring_buffer.first() {
Some(f) => {
let first = chrono::DateTime::<chrono::Utc>::from(f.time);
(chrono::Utc::now() - first).num_seconds() as f64
}
None => 1.0, // avoid divide by zero later
};
stdout
.queue(cursor::MoveTo(0, 3))?
.queue(Print(self.primary_stats_line(
&ring_buffer.stats,
elapsed,
false,
)))?
.queue(cursor::MoveTo(0, 4))?
.queue(Print(self.per_code_line(&ring_buffer.stats)))?;
{
let options = self.options.lock().unwrap();
stdout.queue(cursor::MoveTo(0, 6))?.queue(Print(
format!(
"{:width$}",
&format!(
" REQS REQS/S SIZE SZ/S {}",
options.group.to_string()
),
width = self.cols as usize
)
.negative(),
))?;
} // read lock on options
if let Some(grouped) = &ring_buffer.grouped {
use lazysort::SortedBy;
// convert HashMap<GroupKey, RingBuffer> to Vec<(GroupKey, RingBuffer)>,
// sort it by the RingBuffers, then lazy-sort the first n lines for display.
for (key, ring_buffer) in grouped
.iter()
.filter(|(_, v)| !v.buffer.is_empty()) // filter out empty buffers to save work
.collect::<Vec<(&GroupKey, &RingBuffer)>>()
.iter()
.sorted_by(|a, b| b.1.cmp(&a.1)) // see impl Ord for RingBuffer
.take((self.lines - 7/* lines used for header */) as usize)
{
stdout
.queue(cursor::MoveToNextLine(1))?
.queue(Print(self.table_line(key, ring_buffer, elapsed)))?;
}
}
} // mutex on ring_buffer
stdout.flush()?;
Ok(())
}
fn handle_event(&mut self) -> Result<bool, Error> {
use crossterm::event::Event::{Key, Mouse, Resize};
use crossterm::event::KeyCode::Char;
use crossterm::event::{KeyEvent, KeyModifiers};
match crossterm::event::read()? {
Key(KeyEvent {
code: Char('q'),..
})
| Key(KeyEvent {
modifiers: KeyModifiers::CONTROL,
code: Char('c'),
}) => return Ok(false),
Key(KeyEvent {
code: Char('o'),..
}) => {
self.toggle_sort();
}
Key(KeyEvent {
code: Char('g'),..
}) => {
self.toggle_group();
}
Key(event) => info!("{:?}", event),
Mouse(event) => info!("{:?}", event),
Resize(cols, lines) => {
self.lines = lines;
self.cols = cols;
}
}
Ok(true)
}
fn toggle_sort(&self) {
self.options.lock().unwrap().toggle_sort();
}
fn toggle_group(&self) {
let mut o = self.options.lock().unwrap();
let group_by = o.toggle_group();
drop(o);
self.ring_buffer.lock().unwrap().regroup(group_by);
}
fn table_line(&self, key: &GroupKey, rr: &RingBuffer, elapsed: f64) -> String {
let reqs = rr.stats.global.requests as f64;
format!(
"{reqs:6} {reqs_per_sec:6.2} {hb:>6} {hb_per_sec:>6} {key:width$}",
width = (self.cols - 30) as usize,
reqs = reqs,
reqs_per_sec = reqs / elapsed,
hb = Self::humansize(rr.stats.global.bytes as f64),
hb_per_sec = Self::humansize(rr.stats.global.bytes as f64 / elapsed),
key = key
)
}
// All: 638924 reqs ( 182.65/sec) 3433539K ( 981.6K/sec) ( 5.4K/req)
fn primary_stats_line(&self, stats: &Stats, elapsed: f64, alltime: bool) -> String {
let reqs_non_zero = std::cmp::max(stats.global.requests, 1) as f64;
let reqs = stats.global.requests as f64;
let header = if alltime { "All:" } else { "R:" };
format!(
"{header:5} {bold}{reqs:>space$}{reset} ({reqs_per_sec:6.2}/sec) {bold}{hb:>space$}{reset} ({hb_per_sec}/sec) {hb_per_req}/req",
bold = Attribute::Bold,
reset = Attribute::Reset,
space = ((self.cols - 50) / 2) as usize,
header = header,
reqs = reqs,
reqs_per_sec = reqs / elapsed,
hb = Self::humansize(stats.global.bytes as f64),
hb_per_sec = Self::humansize(stats.global.bytes as f64 / elapsed),
hb_per_req = Self::humansize((stats.global.bytes as f64) / reqs_non_zero)
)
}
// 2xx: 455415 (71.3%) 3xx: 175745 (27.5%) 4xx: 7746 ( 1.2%) 5xx: 10 ( 0.0%)
fn per_code_line(&self, stats: &Stats) -> String {
let stats_2 = &stats.by_status_code[2];
let stats_3 = &stats.by_status_code[3];
let stats_4 = &stats.by_status_code[4];
let stats_5 = &stats.by_status_code[5];
// closure to reduce some duplication for some munging below
let c = |rb_stats: &crate::stats::Counters| -> (f64, usize) {
// avoid divide by zero if there's no requests yet
let pct = if stats.global.requests > 0 {
100.0 * (rb_stats.requests as f64 / stats.global.requests as f64)
} else {
0.0
};
// intelligent dp detection: eg 2.34%, 10.5%, 100%
let dp = if (pct - 100.0).abs() < f64::EPSILON {
0
} else if pct < 10.0 {
2
} else {
1
};
(pct, dp)
};
let (code_2_pct, code_2_dp) = c(stats_2);
let (code_3_pct, code_3_dp) = c(stats_3);
let (code_4_pct, code_4_dp) = c(stats_4);
let (code_5_pct, code_5_dp) = c(stats_5);
format!(
"2xx: {code_2:space$} ({code_2_pct:4.code_2_dp$}%) 3xx: {code_3:space$} ({code_3_pct:4.code_3_dp$}%) 4xx: {code_4:space$} ({code_4_pct:4.code_4_dp$}%) 5xx: {code_5:space$} ({code_5_pct:4.code_5_dp$}%)",
space = ((self.cols - 55) / 4) as usize,
code_2 = stats_2.requests,
code_2_dp = code_2_dp,
code_2_pct = code_2_pct,
code_3 = stats_3.requests,
code_3_dp = code_3_dp,
code_3_pct = code_3_pct,
code_4 = stats_4.requests,
code_4_dp = code_4_dp,
code_4_pct = code_4_pct,
code_5 = stats_5.requests,
code_5_dp = code_5_dp,
code_5_pct = code_5_pct,
)
}
fn humansize(bytes: f64) -> String {
if bytes > 1073741824.0 {
format!("{:6.2}G", (bytes / 1073741824.0))
} else if bytes > 1048576.0 {
format!("{:6.2}M", (bytes / 1048576.00))
} else if bytes > 1024.0 {
format!("{:6.2}K", (bytes / 1024.0))
} else {
format!("{:6.0}B", bytes)
}
}
}
struct StartedAt(std::time::Instant);
impl StartedAt {
fn elapsed(&self) -> std::time::Duration {
self.0.elapsed()
}
}
impl std::fmt::Display for StartedAt {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let duration = self.0.elapsed().as_secs();
let hours = duration / 3600;
let minutes = duration % 3600 / 60;
let seconds = duration % 60;
write!(f, "runtime: ")?;
if hours > 0 {
write!(f, "{}h ", hours)?;
}
if hours > 0 || minutes > 0 {
write!(f, "{}m ", minutes)?;
}
write!(f, "{}s", seconds)
}
}
| {
let mut stdout = stdout();
stdout
.queue(terminal::Clear(terminal::ClearType::All))?
.queue(cursor::MoveTo(0, 0))?
.queue(Print(format!("apachetop {}", CARGO_PKG_VERSION)))?
.queue(cursor::MoveTo(self.cols / 2, 0))?
.queue(Print(self.started_at.to_string()))?
.queue(cursor::MoveTo(self.cols - 8, 0))?
.queue(Print(chrono::Local::now().format("%H:%M:%S").to_string()))?;
{
let alltime_stats = self.alltime_stats.lock().unwrap();
let elapsed = self.started_at.elapsed().as_secs() as f64;
stdout
.queue(cursor::MoveTo(0, 1))?
.queue(Print(self.primary_stats_line(
&alltime_stats, | identifier_body |
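The percentage-formatting closure in `per_code_line` picks a number of decimal places so that 100% prints with none, small percentages get two, and everything else one. A sketch of that "intelligent dp detection" logic as a free function (the name `decimal_places` is illustrative; the thresholds mirror the closure above):

```rust
// Choose decimal places for a percentage: e.g. 2.34%, 10.5%, 100%.
fn decimal_places(pct: f64) -> usize {
    if (pct - 100.0).abs() < f64::EPSILON {
        0
    } else if pct < 10.0 {
        2
    } else {
        1
    }
}

fn main() {
    assert_eq!(decimal_places(100.0), 0);
    assert_eq!(decimal_places(2.34), 2);
    assert_eq!(decimal_places(71.3), 1);
    println!("ok");
}
```

The epsilon comparison avoids a direct `pct == 100.0` float equality test, which clippy would flag.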
window.rs | use crate::prelude::*;
use crossterm::style::{Attribute, Print, Styler};
use crossterm::{cursor, terminal, ExecutableCommand, QueueableCommand};
use std::io::{stdout, Write};
const CARGO_PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
pub struct Window {
started_at: StartedAt,
lines: u16,
cols: u16,
options: Arc<Mutex<Options>>,
alltime_stats: Arc<Mutex<Stats>>,
ring_buffer: Arc<Mutex<RingBuffer>>,
}
impl Window {
pub fn new(
options: Arc<Mutex<Options>>,
alltime_stats: Arc<Mutex<Stats>>,
ring_buffer: Arc<Mutex<RingBuffer>>,
) -> Self {
let (cols, lines) = crossterm::terminal::size().unwrap();
let now = std::time::Instant::now();
Window {
started_at: StartedAt(now),
lines,
cols,
options,
alltime_stats,
ring_buffer,
}
}
pub fn run(&mut self) -> Result<(), Error> {
// temporary lock on options to get interval
let options = self.options.lock().unwrap();
let interval = options.interval;
drop(options);
// support f64 seconds by multiplying then using from_millis
let interval = std::time::Duration::from_millis((interval * 1000.0) as u64);
crossterm::terminal::enable_raw_mode()?;
// stdout().execute(crossterm::event::EnableMouseCapture)?
stdout().execute(cursor::Hide)?;
stdout().execute(terminal::EnterAlternateScreen)?;
stdout().execute(terminal::SetTitle("apachetop"))?;
loop {
self.redraw()?;
if crossterm::event::poll(interval)? && !self.handle_event()? {
break;
}
}
crossterm::terminal::disable_raw_mode()?;
stdout().execute(terminal::LeaveAlternateScreen)?;
stdout().execute(cursor::Show)?;
// stdout().execute(crossterm::event::DisableMouseCapture)?;
Ok(())
}
fn redraw(&mut self) -> Result<(), Error> {
let mut stdout = stdout();
stdout
.queue(terminal::Clear(terminal::ClearType::All))?
.queue(cursor::MoveTo(0, 0))?
.queue(Print(format!("apachetop {}", CARGO_PKG_VERSION)))?
.queue(cursor::MoveTo(self.cols / 2, 0))?
.queue(Print(self.started_at.to_string()))?
.queue(cursor::MoveTo(self.cols - 8, 0))?
.queue(Print(chrono::Local::now().format("%H:%M:%S").to_string()))?;
{
let alltime_stats = self.alltime_stats.lock().unwrap();
let elapsed = self.started_at.elapsed().as_secs() as f64;
stdout
.queue(cursor::MoveTo(0, 1))?
.queue(Print(self.primary_stats_line(
&alltime_stats,
elapsed,
true,
)))?
.queue(cursor::MoveTo(0, 2))?
.queue(Print(self.per_code_line(&alltime_stats)))?;
} // mutex on alltime_stats
{
let mut ring_buffer = self.ring_buffer.lock().unwrap();
// TODO: better in another thread, not at display time?
ring_buffer.cleanup()?;
let elapsed = match ring_buffer.first() {
Some(f) => {
let first = chrono::DateTime::<chrono::Utc>::from(f.time);
(chrono::Utc::now() - first).num_seconds() as f64
}
None => 1.0, // avoid divide by zero later
};
stdout
.queue(cursor::MoveTo(0, 3))?
.queue(Print(self.primary_stats_line(
&ring_buffer.stats,
elapsed,
false,
)))?
.queue(cursor::MoveTo(0, 4))?
.queue(Print(self.per_code_line(&ring_buffer.stats)))?;
{
let options = self.options.lock().unwrap();
stdout.queue(cursor::MoveTo(0, 6))?.queue(Print(
format!(
"{:width$}",
&format!(
" REQS REQS/S SIZE SZ/S {}",
options.group.to_string()
),
width = self.cols as usize
)
.negative(),
))?;
} // read lock on options
if let Some(grouped) = &ring_buffer.grouped {
use lazysort::SortedBy;
// convert HashMap<GroupKey, RingBuffer> to Vec<(GroupKey, RingBuffer)>,
// sort it by the RingBuffers, then lazy-sort the first n lines for display.
for (key, ring_buffer) in grouped
.iter()
.filter(|(_, v)| !v.buffer.is_empty()) // filter out empty buffers to save work
.collect::<Vec<(&GroupKey, &RingBuffer)>>()
.iter()
.sorted_by(|a, b| b.1.cmp(&a.1)) // see impl Ord for RingBuffer
.take((self.lines - 7/* lines used for header */) as usize)
{
stdout
.queue(cursor::MoveToNextLine(1))?
.queue(Print(self.table_line(key, ring_buffer, elapsed)))?;
}
}
} // mutex on ring_buffer
stdout.flush()?;
Ok(())
}
fn handle_event(&mut self) -> Result<bool, Error> {
use crossterm::event::Event::{Key, Mouse, Resize};
use crossterm::event::KeyCode::Char;
use crossterm::event::{KeyEvent, KeyModifiers};
match crossterm::event::read()? {
Key(KeyEvent {
code: Char('q'),..
})
| Key(KeyEvent {
modifiers: KeyModifiers::CONTROL,
code: Char('c'),
}) => return Ok(false),
Key(KeyEvent {
code: Char('o'),..
}) => {
self.toggle_sort();
}
Key(KeyEvent {
code: Char('g'),..
}) => {
self.toggle_group();
}
Key(event) => info!("{:?}", event),
Mouse(event) => info!("{:?}", event),
Resize(cols, lines) => {
self.lines = lines;
self.cols = cols;
}
}
Ok(true)
}
fn toggle_sort(&self) {
self.options.lock().unwrap().toggle_sort();
}
fn toggle_group(&self) {
let mut o = self.options.lock().unwrap();
let group_by = o.toggle_group();
drop(o);
self.ring_buffer.lock().unwrap().regroup(group_by);
}
fn table_line(&self, key: &GroupKey, rr: &RingBuffer, elapsed: f64) -> String {
let reqs = rr.stats.global.requests as f64;
format!(
"{reqs:6} {reqs_per_sec:6.2} {hb:>6} {hb_per_sec:>6} {key:width$}",
width = (self.cols - 30) as usize,
reqs = reqs,
reqs_per_sec = reqs / elapsed,
hb = Self::humansize(rr.stats.global.bytes as f64),
hb_per_sec = Self::humansize(rr.stats.global.bytes as f64 / elapsed),
key = key
)
}
// All: 638924 reqs ( 182.65/sec) 3433539K ( 981.6K/sec) ( 5.4K/req)
fn primary_stats_line(&self, stats: &Stats, elapsed: f64, alltime: bool) -> String {
let reqs_non_zero = std::cmp::max(stats.global.requests, 1) as f64;
let reqs = stats.global.requests as f64;
let header = if alltime { "All:" } else { "R:" };
format!(
"{header:5} {bold}{reqs:>space$}{reset} ({reqs_per_sec:6.2}/sec) {bold}{hb:>space$}{reset} ({hb_per_sec}/sec) {hb_per_req}/req",
bold = Attribute::Bold,
reset = Attribute::Reset,
space = ((self.cols - 50) / 2) as usize,
header = header,
reqs = reqs,
reqs_per_sec = reqs / elapsed,
hb = Self::humansize(stats.global.bytes as f64),
hb_per_sec = Self::humansize(stats.global.bytes as f64 / elapsed),
hb_per_req = Self::humansize((stats.global.bytes as f64) / reqs_non_zero)
)
}
// 2xx: 455415 (71.3%) 3xx: 175745 (27.5%) 4xx: 7746 ( 1.2%) 5xx: 10 ( 0.0%)
fn per_code_line(&self, stats: &Stats) -> String {
let stats_2 = &stats.by_status_code[2];
let stats_3 = &stats.by_status_code[3];
let stats_4 = &stats.by_status_code[4];
let stats_5 = &stats.by_status_code[5];
// closure to reduce some duplication for some munging below
let c = |rb_stats: &crate::stats::Counters| -> (f64, usize) {
// avoid divide by zero if there's no requests yet
let pct = if stats.global.requests > 0 | else {
0.0
};
// intelligent dp detection: eg 2.34%, 10.5%, 100%
let dp = if (pct - 100.0).abs() < f64::EPSILON {
0
} else if pct < 10.0 {
2
} else {
1
};
(pct, dp)
};
let (code_2_pct, code_2_dp) = c(stats_2);
let (code_3_pct, code_3_dp) = c(stats_3);
let (code_4_pct, code_4_dp) = c(stats_4);
let (code_5_pct, code_5_dp) = c(stats_5);
format!(
"2xx: {code_2:space$} ({code_2_pct:4.code_2_dp$}%) 3xx: {code_3:space$} ({code_3_pct:4.code_3_dp$}%) 4xx: {code_4:space$} ({code_4_pct:4.code_4_dp$}%) 5xx: {code_5:space$} ({code_5_pct:4.code_5_dp$}%)",
space = ((self.cols - 55) / 4) as usize,
code_2 = stats_2.requests,
code_2_dp = code_2_dp,
code_2_pct = code_2_pct,
code_3 = stats_3.requests,
code_3_dp = code_3_dp,
code_3_pct = code_3_pct,
code_4 = stats_4.requests,
code_4_dp = code_4_dp,
code_4_pct = code_4_pct,
code_5 = stats_5.requests,
code_5_dp = code_5_dp,
code_5_pct = code_5_pct,
)
}
fn humansize(bytes: f64) -> String {
if bytes > 1073741824.0 {
format!("{:6.2}G", (bytes / 1073741824.0))
} else if bytes > 1048576.0 {
format!("{:6.2}M", (bytes / 1048576.00))
} else if bytes > 1024.0 {
format!("{:6.2}K", (bytes / 1024.0))
} else {
format!("{:6.0}B", bytes)
}
}
}
struct StartedAt(std::time::Instant);
impl StartedAt {
fn elapsed(&self) -> std::time::Duration {
self.0.elapsed()
}
}
impl std::fmt::Display for StartedAt {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let duration = self.0.elapsed().as_secs();
let hours = duration / 3600;
let minutes = duration % 3600 / 60;
let seconds = duration % 60;
write!(f, "runtime: ")?;
if hours > 0 {
write!(f, "{}h ", hours)?;
}
if hours > 0 || minutes > 0 {
write!(f, "{}m ", minutes)?;
}
write!(f, "{}s", seconds)
}
}
| {
100.0 * (rb_stats.requests as f64 / stats.global.requests as f64)
} | conditional_block |
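The `Display` impl for `StartedAt` above prints the runtime with hours and minutes only when they are non-zero (`45s`, `2m 5s`, `1h 2m 5s`). A testable sketch of that formatting, operating on a raw second count instead of an `Instant` — `runtime_string` is an assumed helper name, not part of the program:

```rust
// Format an elapsed duration in seconds the way StartedAt's Display does:
// hours and minutes are emitted only when relevant.
fn runtime_string(duration: u64) -> String {
    let hours = duration / 3600;
    let minutes = duration % 3600 / 60;
    let seconds = duration % 60;
    let mut s = String::from("runtime: ");
    if hours > 0 {
        s.push_str(&format!("{}h ", hours));
    }
    if hours > 0 || minutes > 0 {
        s.push_str(&format!("{}m ", minutes));
    }
    s.push_str(&format!("{}s", seconds));
    s
}

fn main() {
    assert_eq!(runtime_string(45), "runtime: 45s");
    assert_eq!(runtime_string(3725), "runtime: 1h 2m 5s");
    println!("ok");
}
```

The `hours > 0 || minutes > 0` guard means `1h 0m 5s` still shows the zero minutes, so the fields never skip a unit in the middle.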
window.rs | use crate::prelude::*;
use crossterm::style::{Attribute, Print, Styler};
use crossterm::{cursor, terminal, ExecutableCommand, QueueableCommand};
use std::io::{stdout, Write};
const CARGO_PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
pub struct Window {
started_at: StartedAt,
lines: u16,
cols: u16,
options: Arc<Mutex<Options>>,
alltime_stats: Arc<Mutex<Stats>>,
ring_buffer: Arc<Mutex<RingBuffer>>,
}
impl Window {
pub fn new(
options: Arc<Mutex<Options>>,
alltime_stats: Arc<Mutex<Stats>>,
ring_buffer: Arc<Mutex<RingBuffer>>,
) -> Self {
let (cols, lines) = crossterm::terminal::size().unwrap();
let now = std::time::Instant::now();
Window {
started_at: StartedAt(now),
lines,
cols,
options,
alltime_stats,
ring_buffer,
}
}
pub fn run(&mut self) -> Result<(), Error> {
// temporary lock on options to get interval
let options = self.options.lock().unwrap();
let interval = options.interval;
drop(options);
// support f64 seconds by multiplying then using from_millis
let interval = std::time::Duration::from_millis((interval * 1000.0) as u64);
crossterm::terminal::enable_raw_mode()?;
// stdout().execute(crossterm::event::EnableMouseCapture)?
stdout().execute(cursor::Hide)?;
stdout().execute(terminal::EnterAlternateScreen)?;
stdout().execute(terminal::SetTitle("apachetop"))?;
loop {
self.redraw()?;
if crossterm::event::poll(interval)? && !self.handle_event()? {
break;
}
}
crossterm::terminal::disable_raw_mode()?;
stdout().execute(terminal::LeaveAlternateScreen)?;
stdout().execute(cursor::Show)?;
// stdout().execute(crossterm::event::DisableMouseCapture)?;
Ok(())
}
fn redraw(&mut self) -> Result<(), Error> {
let mut stdout = stdout();
stdout
.queue(terminal::Clear(terminal::ClearType::All))? | .queue(Print(format!("apachetop {}", CARGO_PKG_VERSION)))?
.queue(cursor::MoveTo(self.cols / 2, 0))?
.queue(Print(self.started_at.to_string()))?
.queue(cursor::MoveTo(self.cols - 8, 0))?
.queue(Print(chrono::Local::now().format("%H:%M:%S").to_string()))?;
{
let alltime_stats = self.alltime_stats.lock().unwrap();
let elapsed = self.started_at.elapsed().as_secs() as f64;
stdout
.queue(cursor::MoveTo(0, 1))?
.queue(Print(self.primary_stats_line(
&alltime_stats,
elapsed,
true,
)))?
.queue(cursor::MoveTo(0, 2))?
.queue(Print(self.per_code_line(&alltime_stats)))?;
} // mutex on alltime_stats
{
let mut ring_buffer = self.ring_buffer.lock().unwrap();
// TODO: better in another thread, not at display time?
ring_buffer.cleanup()?;
let elapsed = match ring_buffer.first() {
Some(f) => {
let first = chrono::DateTime::<chrono::Utc>::from(f.time);
(chrono::Utc::now() - first).num_seconds() as f64
}
None => 1.0, // avoid divide by zero later
};
stdout
.queue(cursor::MoveTo(0, 3))?
.queue(Print(self.primary_stats_line(
&ring_buffer.stats,
elapsed,
false,
)))?
.queue(cursor::MoveTo(0, 4))?
.queue(Print(self.per_code_line(&ring_buffer.stats)))?;
{
let options = self.options.lock().unwrap();
stdout.queue(cursor::MoveTo(0, 6))?.queue(Print(
format!(
"{:width$}",
&format!(
" REQS REQS/S SIZE SZ/S {}",
options.group.to_string()
),
width = self.cols as usize
)
.negative(),
))?;
} // read lock on options
if let Some(grouped) = &ring_buffer.grouped {
use lazysort::SortedBy;
// convert HashMap<GroupKey, RingBuffer> to Vec<(GroupKey, RingBuffer)>,
// sort it by the RingBuffers, then lazy-sort the first n lines for display.
for (key, ring_buffer) in grouped
.iter()
.filter(|(_, v)| !v.buffer.is_empty()) // filter out empty buffers to save work
.collect::<Vec<(&GroupKey, &RingBuffer)>>()
.iter()
.sorted_by(|a, b| b.1.cmp(&a.1)) // see impl Ord for RingBuffer
.take((self.lines - 7/* lines used for header */) as usize)
{
stdout
.queue(cursor::MoveToNextLine(1))?
.queue(Print(self.table_line(key, ring_buffer, elapsed)))?;
}
}
} // mutex on ring_buffer
stdout.flush()?;
Ok(())
}
fn handle_event(&mut self) -> Result<bool, Error> {
use crossterm::event::Event::{Key, Mouse, Resize};
use crossterm::event::KeyCode::Char;
use crossterm::event::{KeyEvent, KeyModifiers};
match crossterm::event::read()? {
Key(KeyEvent {
code: Char('q'),..
})
| Key(KeyEvent {
modifiers: KeyModifiers::CONTROL,
code: Char('c'),
}) => return Ok(false),
Key(KeyEvent {
code: Char('o'),..
}) => {
self.toggle_sort();
}
Key(KeyEvent {
code: Char('g'),..
}) => {
self.toggle_group();
}
Key(event) => info!("{:?}", event),
Mouse(event) => info!("{:?}", event),
Resize(cols, lines) => {
self.lines = lines;
self.cols = cols;
}
}
Ok(true)
}
fn toggle_sort(&self) {
self.options.lock().unwrap().toggle_sort();
}
fn toggle_group(&self) {
let mut o = self.options.lock().unwrap();
let group_by = o.toggle_group();
drop(o);
self.ring_buffer.lock().unwrap().regroup(group_by);
}
fn table_line(&self, key: &GroupKey, rr: &RingBuffer, elapsed: f64) -> String {
let reqs = rr.stats.global.requests as f64;
format!(
"{reqs:6} {reqs_per_sec:6.2} {hb:>6} {hb_per_sec:>6} {key:width$}",
width = (self.cols - 30) as usize,
reqs = reqs,
reqs_per_sec = reqs / elapsed,
hb = Self::humansize(rr.stats.global.bytes as f64),
hb_per_sec = Self::humansize(rr.stats.global.bytes as f64 / elapsed),
key = key
)
}
// All: 638924 reqs ( 182.65/sec) 3433539K ( 981.6K/sec) ( 5.4K/req)
fn primary_stats_line(&self, stats: &Stats, elapsed: f64, alltime: bool) -> String {
let reqs_non_zero = std::cmp::max(stats.global.requests, 1) as f64;
let reqs = stats.global.requests as f64;
let header = if alltime { "All:" } else { "R:" };
format!(
"{header:5} {bold}{reqs:>space$}{reset} ({reqs_per_sec:6.2}/sec) {bold}{hb:>space$}{reset} ({hb_per_sec}/sec) {hb_per_req}/req",
bold = Attribute::Bold,
reset = Attribute::Reset,
space = ((self.cols - 50) / 2) as usize,
header = header,
reqs = reqs,
reqs_per_sec = reqs / elapsed,
hb = Self::humansize(stats.global.bytes as f64),
hb_per_sec = Self::humansize(stats.global.bytes as f64 / elapsed),
hb_per_req = Self::humansize((stats.global.bytes as f64) / reqs_non_zero)
)
}
// 2xx: 455415 (71.3%) 3xx: 175745 (27.5%) 4xx: 7746 ( 1.2%) 5xx: 10 ( 0.0%)
fn per_code_line(&self, stats: &Stats) -> String {
let stats_2 = &stats.by_status_code[2];
let stats_3 = &stats.by_status_code[3];
let stats_4 = &stats.by_status_code[4];
let stats_5 = &stats.by_status_code[5];
// closure to reduce some duplication for some munging below
let c = |rb_stats: &crate::stats::Counters| -> (f64, usize) {
// avoid divide by zero if there's no requests yet
let pct = if stats.global.requests > 0 {
100.0 * (rb_stats.requests as f64 / stats.global.requests as f64)
} else {
0.0
};
// intelligent dp detection: eg 2.34%, 10.5%, 100%
let dp = if (pct - 100.0).abs() < f64::EPSILON {
0
} else if pct < 10.0 {
2
} else {
1
};
(pct, dp)
};
let (code_2_pct, code_2_dp) = c(stats_2);
let (code_3_pct, code_3_dp) = c(stats_3);
let (code_4_pct, code_4_dp) = c(stats_4);
let (code_5_pct, code_5_dp) = c(stats_5);
format!(
"2xx: {code_2:space$} ({code_2_pct:4.code_2_dp$}%) 3xx: {code_3:space$} ({code_3_pct:4.code_3_dp$}%) 4xx: {code_4:space$} ({code_4_pct:4.code_4_dp$}%) 5xx: {code_5:space$} ({code_5_pct:4.code_5_dp$}%)",
space = ((self.cols - 55) / 4) as usize,
code_2 = stats_2.requests,
code_2_dp = code_2_dp,
code_2_pct = code_2_pct,
code_3 = stats_3.requests,
code_3_dp = code_3_dp,
code_3_pct = code_3_pct,
code_4 = stats_4.requests,
code_4_dp = code_4_dp,
code_4_pct = code_4_pct,
code_5 = stats_5.requests,
code_5_dp = code_5_dp,
code_5_pct = code_5_pct,
)
}
fn humansize(bytes: f64) -> String {
if bytes > 1073741824.0 {
format!("{:6.2}G", (bytes / 1073741824.0))
} else if bytes > 1048576.0 {
format!("{:6.2}M", (bytes / 1048576.00))
} else if bytes > 1024.0 {
format!("{:6.2}K", (bytes / 1024.0))
} else {
format!("{:6.0}B", bytes)
}
}
}
struct StartedAt(std::time::Instant);
impl StartedAt {
fn elapsed(&self) -> std::time::Duration {
self.0.elapsed()
}
}
impl std::fmt::Display for StartedAt {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let duration = self.0.elapsed().as_secs();
let hours = duration / 3600;
let minutes = duration % 3600 / 60;
let seconds = duration % 60;
write!(f, "runtime: ")?;
if hours > 0 {
write!(f, "{}h ", hours)?;
}
if hours > 0 || minutes > 0 {
write!(f, "{}m ", minutes)?;
}
write!(f, "{}s", seconds)
}
}
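The byte-formatting thresholds in `Window::humansize` above are easy to check in isolation. The following is a self-contained sketch that copies those thresholds out of the struct so the padded output widths can be verified without the terminal UI (the free function here is illustrative, not part of the app):

```rust
// Standalone copy of the thresholds used by Window::humansize, so the
// fixed-width formatting can be verified without a terminal.
fn humansize(bytes: f64) -> String {
    if bytes > 1073741824.0 {
        format!("{:6.2}G", bytes / 1073741824.0)
    } else if bytes > 1048576.0 {
        format!("{:6.2}M", bytes / 1048576.0)
    } else if bytes > 1024.0 {
        format!("{:6.2}K", bytes / 1024.0)
    } else {
        format!("{:6.0}B", bytes)
    }
}

fn main() {
    // width 6 with two decimals pads on the left: "  2.00K"
    println!("[{}]", humansize(2048.0));
    println!("[{}]", humansize(500.0));
}
```

Note the right-aligned padding comes from the `{:6.2}` width specifier, which is what keeps the columns lined up in `table_line`.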
// main.rs
#![feature(unboxed_closures, fn_traits)]
use std::fmt::Debug;
// error[E0635]: unknown feature `fnbox`
// #![feature(unboxed_closures, fn_traits, fnbox)]
// Returning a closure.
// It is put into a `Box<T>` because a closure's size is unknown at compile time.
// fn counter(i: i32) -> Box<Fn(i32) -> i32> {
//     Box::new(move |n: i32| n + i)
// }
// In Rust 2018 this can also be written as `impl Fn(i32) -> i32`
fn counter(i: i32) -> impl Fn(i32) -> i32 {
// `i` is a free variable.
// `i` has copy semantics, so it would otherwise be captured by reference, and
// that reference would prevent the closure from being returned from the
// function (the compiler reports an error). The `move` keyword therefore
// transfers ownership of the free variable `i` into the closure.
// Because `i` has copy semantics, only a bitwise copy happens here.
Box::new(move |n: i32| n + i)
}
fn val() -> i32 { 5 }
// Simulating the compiler's implementation of a closure
struct Closure {
// the free variable captured from the environment
env_var: u32,
}
impl FnOnce<()> for Closure {
type Output = u32;
extern "rust-call" fn call_once(self, args: ()) -> u32 {
println!("call it FnOnce()");
self.env_var + 2
}
}
impl FnMut<()> for Closure {
extern "rust-call" fn call_mut(&mut self, args: ()) -> u32 {
println!("call it FnMut()");
self.env_var + 2
}
}
impl Fn<()> for Closure {
extern "rust-call" fn call(&self, args: ()) -> u32 {
println!("call it Fn()");
self.env_var + 2
}
}
fn call_it<F: Fn() -> u32>(f: &F) -> u32 { f() }
fn call_it_mut<F: FnMut() -> u32>(f: &mut F) -> u32 { f() }
fn call_it_once<F: FnOnce() -> u32>(f: F) -> u32 { f() }
// The case where a closure is translated into an anonymous struct plus trait impls
struct Closure2<'a> {
env_var: &'a u32,
}
impl<'a> FnOnce<()> for Closure2<'a> {
type Output = ();
extern "rust-call" fn call_once(self, args: ()) -> () {
println!("{:?}", self.env_var);
}
}
impl<'a> FnMut<()> for Closure2<'a> {
extern "rust-call" fn call_mut(&mut self, args: ()) -> () {
println!("{:?}", self.env_var);
}
}
impl<'a> Fn<()> for Closure2<'a> {
extern "rust-call" fn call(&self, args: ()) -> () {
println!("{:?}", self.env_var);
}
}
// Take an `FnOnce()` closure as a parameter.
// Executing the closure inside the function body shows whether ownership of
// the closure itself has been moved.
fn call<F: FnOnce()>(f: F) { f() }
fn boxed_closure(c: &mut Vec<Box<Fn()>>) {
let s = "second";
c.push(Box::new(|| println!("first")));
// The environment variable `s` is captured immutably,
// but the closure must be boxed here and used later during iteration,
// so the `move` keyword is required to transfer ownership of `s` into the
// closure. Since `s` has copy semantics, the closure captures a copy of the
// original variable `s`.
c.push(Box::new(move || println!("{}", s)));
c.push(Box::new(|| println!("third")));
}
// `Fn` is not subject to the orphan rule, so this import is optional:
// use std::ops::Fn;
// Implementing an `any` method via a trait bound.
// This custom `Any` is unrelated to the standard library's `Any`.
// The generic parameter `F` is bounded by `Fn(u32) -> bool`,
// unlike an ordinary generic bound of the form `<F: Fn<u32, bool>>`.
trait Any {
fn any<F>(&self, f: F) -> bool
where
Self: Sized,
F: Fn(u32) -> bool;
}
impl Any for Vec<u32> {
fn any<F>(&self, f: F) -> bool
where
// The `Sized` bound prevents this method from being called dynamically;
// this is an optimization.
Self: Sized,
F: Fn(u32) -> bool,
{
// Iterate and invoke the passed-in closure on each element
for &x in self {
if f(x) {
return true;
}
}
false
}
}
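The `any` method above mirrors what the standard library already ships as `Iterator::any` (which takes an `FnMut(Self::Item) -> bool`). For comparison, a minimal sketch of the equivalent built-in call:

```rust
// The same membership check via the standard Iterator::any adapter,
// passing the predicate as a closure.
fn main() {
    let v: Vec<u32> = vec![1, 2, 3];
    let found = v.iter().any(|&x| x == 3);
    assert!(found);
    println!("{}", found); // prints "true"
}
```

The built-in version short-circuits on the first match, exactly like the hand-rolled loop above.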
// A function pointer can also be passed where a closure is expected
fn call_ptr<F>(closure: F) -> i32
where
F: Fn(i32) -> i32,
{
closure(1)
}
fn counter_ptr(i: i32) -> i32 { i + 1 }
// Dispatching a closure dynamically as a trait object
trait AnyDyn {
fn any_dyn(&self, f: &(Fn(u32) -> bool)) -> bool;
}
impl AnyDyn for Vec<u32> {
fn any_dyn(&self, f: &(Fn(u32) -> bool)) -> bool {
for &x in self.iter() {
if f(x) {
return true;
}
}
false
}
}
// Returning a closure from a function.
// An `Fn` closure can be called multiple times.
fn square() -> Box<Fn(i32) -> i32> { Box::new(|i| i * i) }
// The returned closure is declared as `FnOnce`
fn square_once() -> Box<FnOnce(i32) -> i32> { Box::new(|i| i * i) }
// `impl Trait` example.
// Writing the closure trait after the `impl` keyword lets the function return
// an `FnOnce` closure directly.
fn square_impl() -> impl FnOnce(i32) -> i32 { |i| i * i }
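On a current stable toolchain the same closure-returning shapes can be written with `dyn`, without the nightly features this file enables. A sketch under that assumption (the names `adder_box` and `adder_impl` are illustrative, not from the original):

```rust
// Returning closures on stable Rust: Box<dyn Fn> is directly callable,
// and impl Trait avoids the heap allocation entirely.
fn adder_box(x: i32) -> Box<dyn Fn(i32) -> i32> {
    Box::new(move |n| n + x)
}

fn adder_impl(x: i32) -> impl Fn(i32) -> i32 {
    move |n| n + x
}

fn main() {
    let b = adder_box(1);
    let f = adder_impl(2);
    assert_eq!(b(10), 11);
    assert_eq!(f(10), 12);
}
```

`impl Trait` is usually preferred when the function returns a single concrete closure type; `Box<dyn Fn>` is needed when different closures may be returned from different branches.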
// Lifetime parameters when a generic trait is used as a trait object
trait DoSomething<T> {
fn do_sth(&self, value: T);
}
// Implement the trait for `&'a usize`
impl<'a, T: Debug> DoSomething<T> for &'a usize {
fn do_sth(&self, value: T) {
println!("{:?}", value);
}
}
// The usize lifetime comes from outside and has no direct relation to foo
// fn foo<'a>(b: Box<DoSomething<&'a usize>>) {
// let s: usize = 10;
// // s is destroyed when the call ends,
// // so &s would become a dangling pointer
// b.do_sth(&s)
// }
// Using a higher-ranked lifetime: the `for<>` syntax
fn bar<'a>(b: Box<for<'f> DoSomething<&'f usize>>) {
let s: usize = 10;
// s is destroyed when the call ends;
// without the higher-ranked bound, &s would become a dangling pointer
b.do_sth(&s)
}
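The higher-ranked `for<'f>` bound used by `bar` can be shown self-contained on a modern toolchain with `dyn`. In this sketch the trait is implemented for `usize` itself rather than `&'a usize` (an assumption made here so a plain `Box<usize>` can coerce to the trait object):

```rust
use std::fmt::Debug;

trait DoSomething<T> {
    fn do_sth(&self, value: T);
}

// Blanket impl over T so usize satisfies DoSomething<&'f usize> for every 'f.
impl<T: Debug> DoSomething<T> for usize {
    fn do_sth(&self, value: T) {
        println!("{:?}", value);
    }
}

// for<'f> makes the bound hold for every call-site lifetime, so borrowing a
// local inside the function body is fine.
fn bar(b: Box<dyn for<'f> DoSomething<&'f usize>>) {
    let s: usize = 10;
    b.do_sth(&s);
}

fn main() {
    bar(Box::new(2usize)); // prints 10
}
```

Without `for<'f>`, the trait object's lifetime parameter would have to be named by the caller, and the borrow of the local `s` could not satisfy it.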
// The case where a closure's parameter and return value are both references
struct Pick<F> {
data: (u32, u32),
func: F,
}
// The compiler fills in the lifetime parameters automatically:
// impl<F> Pick<F>
// where
// F: Fn(&(u32, u32)) -> &u32,
// {
// fn call(&self) -> &u32 { (self.func)(&self.data) }
// }
// With the actual lifetimes spelled out:
impl<F> Pick<F>
where
F: for<'f> Fn(&'f (u32, u32)) -> &'f u32,
{
fn call(&self) -> &u32 { (self.func)(&self.data) }
}
fn max(data: &(u32, u32)) -> &u32 {
if data.0 > data.1 {
&data.0
} else {
&data.1
}
}
fn main() {
//
let f = counter(3);
assert_eq!(4, f(1));
// Closure parameters can be of any type.
// a: a function pointer; (b, c): a tuple whose elements are inferred as i32
// from the function pointer's type information.
let add = |a: fn() -> i32, (b, c)| (a)() + b + c;
let r = add(val, (2, 3));
assert_eq!(r, 10);
// Two identically defined closures still do not share a type.
// Fixed in Rust 2018.
let c1 = || {};
let c2 = || {};
let v = [c1, c2];
// Inspecting a closure's type:
// let c1: () = || println!("i'm a closure");
// | -- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `()`, found closure
// Simulating the compiler's implementation of the closure
let env_var = 1;
let mut c = Closure { env_var: env_var };
// Calling the instance directly;
// actually implemented via the "rust-call" ABI
c();
// A unit value must be passed explicitly as the argument
c.call(());
// A unit value must be passed explicitly as the argument
c.call_mut(());
// A unit value must be passed explicitly as the argument.
// After `call_once`, ownership of the previous instance has been moved and it
// can no longer be used.
c.call_once(());
let mut c = Closure { env_var: env_var };
{
assert_eq!(3, call_it(&c));
}
{
assert_eq!(3, call_it_mut(&mut c));
}
{
assert_eq!(3, call_it_once(c));
}
// An equivalent example using an actual closure
let env_var = 1;
let c = || env_var + 2;
assert_eq!(3, c());
// Explicitly annotating the closure's type.
let env_var = 1;
// The type is a trait object, and a trait object must be used here
let c: Box<Fn() -> i32> = Box::new(|| env_var + 2);
assert_eq!(3, c());
// Copy-semantics types automatically implement `Fn`.
// Bound to a string literal, which has copy semantics.
let s = "hello";
// The closure captures `s` by immutable reference.
// It automatically implements the `Fn` trait, capturing the free variable from
// the environment by immutable borrow.
let c = || println!("{:?}", s);
c();
// c can be called repeatedly: the struct instance the compiler generated for
// the closure expression has not lost ownership.
c();
// Take one more immutable borrow of s
println!("{:?}", s);
// The closure translated into an anonymous struct and trait impls:
// here it becomes the struct `Closure2<'a>`, because the environment variable
// is captured by immutable reference.
let env_var = 42;
let mut c = Closure2 { env_var: &env_var };
c();
c.call_mut(());
c.call_once(());
// A closure implementing `Fn` can also call `call_mut` and `call_once` explicitly
let s = "hello";
let mut c = || println!("{:?}", s);
c();
c();
// Requires the `#![feature(fn_traits)]` feature (ordinary closure calls do not).
// A closure implementing `Fn` can also invoke the `call_mut` and `call_once`
// methods explicitly.
c.call_mut(());
c.call_once(());
c;
println!("{:?}", s);
// Move-semantics types automatically implement `FnOnce`
let s = "hello".to_string();
// In the compiler-generated closure struct, the member recording the captured
// variable is not a reference, and the closure implements only `FnOnce`.
// error[E0525]: expected a closure that implements the `FnMut` trait, but this closure only implements `FnOnce`
let c = || s;
c();
// error[E0382]: use of moved value: `c`
// c();
// c.call(());
// c.call_mut(());
// Using `move` when the environment variable has copy semantics.
let s = "hello";
// Even though `move` is in force, the `s` the closure captures is a new
// variable obtained by copying; the original `s` does not lose ownership.
// So the call must go through either `&self` or `&mut self`,
// and since the closure c is immutable, only `&self` applies;
// a mutable borrow would require marking c itself as `mut`.
let c = move || println!("{:?}", s);
c();
c();
println!("{:?}", s);
// The case where the environment variable has move semantics:
// `String` is a move-semantics type.
let s = "hello".to_string();
// After `move`, s can no longer be used outside the closure
let c = move || println!("{:?}", s);
c();
c();
// error[E0382]: borrow of moved value: `s`
// println!("{:?}", s);
// Does the `move` keyword affect the closure itself?
let mut x = 0;
let incr_x = || x += 1;
call(incr_x);
// error[E0382]: use of moved value: `incr_x`
// call(incr_x);
// With move:
let mut x = 0;
let incr_x = move || x += 1;
call(incr_x);
call(incr_x);
println!("x: {}", x);
// Using `move` with a move-semantics type
let mut x = vec![];
let expand_x = move || x.push(42);
call(expand_x);
// error[E0382]: use of moved value: `expand_x`
// call(expand_x);
// A closure that mutates an environment variable automatically implements `FnMut`.
// The `mut` keyword changes the binding's mutability into a mutable binding.
let mut s = "rust".to_string();
{
// The closure mutates through itself, so it must be declared mut.
// Mutating an environment variable requires implementing `FnMut`;
// calling an `FnMut` method on the compiler-generated struct instance needs
// `&mut self`.
let mut c = || s += " rust";
c();
// Source of change: https://github.com/ZhangHanDong/tao-of-rust-codes/issues/103
// This line would once have been an error, but NLL support makes it compile.
c();
println!("{:?}", s);
} // ownership is handed back here
println!("{:?}", s);
// The case of a closure that implements `FnMut`
let mut s = "rust".to_string();
{
// error[E0525]: expected a closure that implements the `Fn` trait, but this closure only implements `FnMut`
let mut c = || s += " rust";
c();
// The closure implements only `FnMut`, not `Fn`
// c.call(());
c.call_once(());
println!("{:?}", s);
}
println!("{:?}", s);
// A closure that captures no environment variables.
// It captures nothing and uses no `mut` keyword, yet it can be called many
// times, which proves the compiler-generated struct instance never loses
// ownership; the receiver can only be `&self`.
// Therefore the closure must implement `Fn`.
let c = || println!("hhh");
c();
c();
// Treating a closure as a trait object.
// `Box<Fn()>` is a trait object; putting a closure into a `Box<T>` builds a
// closure trait object.
// Trait objects are dynamically dispatched.
let mut c: Vec<Box<Fn()>> = vec![];
boxed_closure(&mut c);
for f in c {
f();
}
// Using the trait-bound version of the any method
let v = vec![1, 2, 3];
let b = v.any(|x| x == 3);
println!("{:?}", b);
// A function pointer can be passed where a closure is expected;
// function pointers also implement `Fn`.
let result = call_ptr(counter_ptr);
assert_eq!(2, result);
// Dynamically dispatching a closure as a trait object
let v = vec![1, 2, 3];
let b = v.any_dyn(&|x| x == 3);
println!("{:?}", b);
// Testing the `'static` bound
let s = "hello";
// let c: Box<Fn() +'static> = Box::new(move || { s; });
// error[E0597]: `s` does not live long enough
// let c: Box<Fn() +'static> = Box::new(|| { s; });
// Returning closures from functions
let square_rt = square();
assert_eq!(4, square_rt(2));
assert_eq!(9, square_rt(3));
// The returned closure is declared `FnOnce`.
// Changed; see: https://github.com/ZhangHanDong/tao-of-rust-codes/issues/249
let square_once_rt = square_once();
assert_eq!(4, square_once_rt(2));
// `impl Trait` example
let square_impl_rt = square_impl();
assert_eq!(4, square_impl_rt(2));
// Lifetime parameters for a generic trait used as a trait object
let x = Box::new(&2usize);
// foo(x);
bar(x);
// Closure parameter and return value both being reference types
let elm = Pick {
data: (3, 1),
func: max,
};
println!("{}", elm.call());
}
}
| 器自动补齐了生命周期参数
// impl<F> | identifier_body |
main.rs | #![feature(unboxed_closures, fn_traits)]
use std::fmt::Debug;
// error[E0635]: unknown feature `fnbox`
// #![feature(unboxed_closures, fn_traits, fnbox)]
// Returning a closure
// It is put into a `Box<T>` because the size of a closure is unknown at compile time.
// fn counter(i: i32) -> Box<Fn(i32) -> i32> {
//     Box::new(move |n: i32| n + i)
// }
// In Rust 2018 this can also be written as `impl Fn(i32) -> i32`
fn counter(i: i32) -> impl Fn(i32) -> i32 {
    // `i` is a free variable
    // `i` has copy semantics, so it would otherwise be captured by reference. That reference
    // would prevent the closure from being returned from the function, and the compiler
    // would report an error,
    // so the `move` keyword is used to move ownership of the free variable `i` into the closure.
    // Because the variable `i` has copy semantics, only a bitwise copy happens here.
    move |n: i32| n + i
}
fn val() -> i32 { 5 }
// Simulating the compiler's implementation of a closure
struct Closure {
    // Represents the free variable captured from the environment
    env_var: u32,
}
impl FnOnce<()> for Closure {
type Output = u32;
extern "rust-call" fn call_once(self, args: ()) -> u32 {
println!("call it FnOnce()");
self.env_var + 2
}
}
impl FnMut<()> for Closure {
extern "rust-call" fn call_mut(&mut self, args: ()) -> u32 {
println!("call it FnMut()");
self.env_var + 2
}
}
impl Fn<()> for Closure {
extern "rust-call" fn call(&self, args: ()) -> u32 {
println!("call it Fn()");
self.env_var + 2
}
}
fn call_it<F: Fn() -> u32>(f: &F) -> u32 { f() }
fn call_it_mut<F: FnMut() -> u32>(f: &mut F) -> u32 { f() }
fn call_it_once<F: FnOnce() -> u32>(f: F) -> u32 { f() }
// The case where a closure is translated into an anonymous struct and traits
struct Closure2<'a> {
env_var: &'a u32,
}
impl<'a> FnOnce<()> for Closure2<'a> {
type Output = ();
extern "rust-call" fn call_once(self, args: ()) -> () {
println!("{:?}", self.env_var);
}
}
impl<'a> FnMut<()> for Closure2<'a> {
extern "rust-call" fn call_mut(&mut self, args: ()) -> () {
println!("{:?}", self.env_var);
}
}
impl<'a> Fn<()> for Closure2<'a> {
extern "rust-call" fn call(&self, args: ()) -> () {
println!("{:?}", self.env_var);
}
}
// Using an `FnOnce()` closure as a parameter
// The closure is executed inside the function body, to check whether ownership of the closure is moved
fn call<F: FnOnce()>(f: F) { f() }
fn boxed_closure(c: &mut Vec<Box<Fn()>>) {
    let s = "second";
    c.push(Box::new(|| println!("first")));
    // The environment variable `s` is captured immutably,
    // but the closure must be boxed here for later use in the iterator,
    // so the `move` keyword is required to move ownership of `s` into the closure.
    // Because the variable `s` has copy semantics, the closure captures a copy of
    // the original variable `s`
    c.push(Box::new(move || println!("{}", s)));
    c.push(Box::new(|| println!("third")));
}
// `Fn` is not subject to the orphan rule; this import is optional
// use std::ops::Fn;
// Implementing the any method with a trait bound
// This custom Any is different from the standard library's Any
// The generic `F` of this function is bounded by `Fn(u32) -> bool`,
// which differs from an ordinary generic bound such as `<F: Fn<u32, bool>>`
trait Any {
    fn any<F>(&self, f: F) -> bool
    where
        Self: Sized,
        F: Fn(u32) -> bool;
}
impl Any for Vec<u32> {
    fn any<F>(&self, f: F) -> bool
    where
        // The Sized bound prevents this method from being called dynamically; this is an optimization
        Self: Sized,
        F: Fn(u32) -> bool,
    {
        // Iterate and invoke the passed closure on each element
        for &x in self {
            if f(x) {
                return true;
            }
        }
        false
    }
}
// Function pointers can also be used as closure parameters
fn call_ptr<F>(closure: F) -> i32
where
F: Fn(i32) -> i32,
{
closure(1)
}
fn counter_ptr(i: i32) -> i32 { i + 1 }
// Dispatching a closure dynamically as a trait object
trait AnyDyn {
fn any_dyn(&self, f: &(Fn(u32) -> bool)) -> bool;
}
impl AnyDyn for Vec<u32> {
fn any_dyn(&self, f: &(Fn(u32) -> bool)) -> bool {
for &x in self.iter() {
if f(x) {
return true;
}
}
false
}
}
// Returning a closure from a function
// `Fn` can be called multiple times
fn square() -> Box<Fn(i32) -> i32> { Box::new(|i| i * i) }
// Specifying the returned closure as `FnOnce`
fn square_once() -> Box<FnOnce(i32) -> i32> { Box::new(|i| i * i) }
// impl Trait example
// The closure trait follows the impl keyword, so an `FnOnce` closure can be returned directly
fn square_impl() -> impl FnOnce(i32) -> i32 { |i| i * i }
// Lifetime parameters when a generic trait is used as a trait object
trait DoSomething<T> {
    fn do_sth(&self, value: T);
}
// Implement this trait for the usize type
impl<'a, T: Debug> DoSomething<T> for &'a usize {
fn do_sth(&self, value: T) {
println!("{:?}", value);
}
}
// usize is introduced from outside and has no direct relationship with the foo function
// fn foo<'a>(b: Box<DoSomething<&'a usize>>) {
//     let s: usize = 10;
//     // s is destroyed when the call ends
//     // &s would become a dangling pointer
//     b.do_sth(&s)
// }
// Using a higher-ranked lifetime: the `for<>` syntax
fn bar<'a>(b: Box<for<'f> DoSomething<&'f usize>>) {
    let s: usize = 10;
    // With the `for<'f>` bound, the trait object works with any lifetime,
    // so borrowing the local `s` here is accepted
    b.do_sth(&s)
}
// The case where both the closure's parameter and return value are reference types
struct Pick<F> {
data: (u32, u32),
func: F,
}
// The compiler automatically fills in the lifetime parameters
// impl<F> Pick<F>
// where
//     F: Fn(&(u32, u32)) -> &u32,
// {
//     fn call(&self) -> &u32 { (self.func)(&self.data) }
// }
// The actual lifetimes
impl<F> Pick<F>
where
F: for<'f> Fn(&'f (u32, u32)) -> &'f u32,
{
fn call(&self) -> &u32 { (self.func)(&self.data) }
}
fn max(data: &(u32, u32)) -> &u32 {
if data.0 > data.1 {
&data.0
} else {
&data.1
}
}
fn main() {
//
let f = counter(3);
assert_eq!(4, f(1));
    // Closure parameters can be of any type
    // a: a function pointer; (b, c): a tuple whose element types are inferred as i32
    // from the function pointer's type information
    let add = |a: fn() -> i32, (b, c)| (a)() + b + c;
    let r = add(val, (2, 3));
    assert_eq!(r, 10);
    // Two closures with identical definitions still do not share the same type
    // Fixed in Rust 2018
    let c1 = || {};
    let c2 = || {};
    let v = [c1, c2];
    // Inspecting a closure's type
    // let c1: () = || println!("i'm a closure");
    // |     --   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `()`, found closure
    // Simulating the compiler's implementation of a closure
    let env_var = 1;
    let mut c = Closure { env_var: env_var };
    // Calling the instance
    // Actually implemented via the "rust-call" ABI
    c();
    // A unit value must be passed explicitly as the argument
    c.call(());
    // A unit value must be passed explicitly as the argument
    c.call_mut(());
    // A unit value must be passed explicitly as the argument
    // After `call_once` is invoked, ownership of the instance is moved and it can no longer be used.
    c.call_once(());
let mut c = Closure { env_var: env_var };
{
assert_eq!(3, call_it(&c));
}
{
assert_eq!(3, call_it_mut(&mut c));
}
{
assert_eq!(3, call_it_once(c));
    }
    // An equivalent closure example to the above
    let env_var = 1;
    let c = || env_var + 2;
    assert_eq!(3, c());
    // Explicitly specifying the closure's type
    let env_var = 1;
    // The type is a trait object; a trait object must be used here
    let c: Box<Fn() -> i32> = Box::new(|| env_var + 2);
    assert_eq!(3, c());
    // Copy-semantics types automatically implement `Fn`
    // Bound to a string literal, which is a copy-semantics type
    let s = "hello";
    // The closure captures `s` as an immutable reference
    // This closure automatically implements the `Fn` trait, capturing the free
    // variables in the environment by immutable borrow
    let c = || println!("{:?}", s);
    c();
    // Closure c can be called multiple times, which shows that the struct instance the
    // compiler generated for the closure expression has not lost ownership.
    c();
    // Take one more immutable borrow of s
    println!("{:?}", s);
    // The case where a closure is translated into an anonymous struct and traits
    // The closure is translated into the struct `Closure2<'a>`, because the environment
    // variable is captured by immutable reference
    let env_var = 42;
    let mut c = Closure2 { env_var: &env_var };
    c();
    c.call_mut(());
    c.call_once(());
    // A closure that implements `Fn` can also explicitly call `call_mut` and `call_once`
    let s = "hello";
    let mut c = || println!("{:?}", s);
    c();
    c();
    // Requires the `#![feature(fn_traits)]` feature (not needed for normal closure calls)
    // A closure that implements `Fn` can also explicitly call the `call_mut` and `call_once` methods
    c.call_mut(());
    c.call_once(());
    c;
    println!("{:?}", s);
    // Move-semantics types automatically implement `FnOnce`
    let s = "hello".to_string();
    // In the compiler-translated closure struct, the member recording the captured
    // variable is not a reference type, and only `FnOnce` is implemented
    // error[E0525]: expected a closure that implements the `FnMut` trait, but this closure only implements `FnOnce`
    let c = || s;
    c();
    // error[E0382]: use of moved value: `c`
    // c();
    // c.call(());
    // c.call_mut(());
    // Using the `move` keyword when the environment variable has copy semantics
    let s = "hello";
    // Although `move` is enforced, what the closure captures is a new variable obtained
    // by copy semantics.
    // The original `s` does not lose ownership,
    // so the receiver must be one of `&self` and `&mut self`;
    // and because closure c is immutable, only `&self` applies.
    // A mutable borrow would require the `mut` keyword to make c itself mutable
    let c = move || println!("{:?}", s);
    c();
    c();
    println!("{:?}", s);
    // The case where the environment variable has move semantics
    // `String` is a move-semantics type
    let s = "hello".to_string();
    // After `move`, s cannot be used again
    let c = move || println!("{:?}", s);
    c();
    c();
    // error[E0382]: borrow of moved value: `s`
    // println!("{:?}", s);
    // Does the move keyword affect the closure itself?
    let mut x = 0;
    let incr_x = || x += 1;
    call(incr_x);
    // error[E0382]: use of moved value: `incr_x`
    // call(incr_x);
    // Using move
    let mut x = 0;
    let incr_x = move || x += 1;
    call(incr_x);
    call(incr_x);
    println!("x: {}", x);
    // Using `move` with a move-semantics type
    let mut x = vec![];
    let expand_x = move || x.push(42);
    call(expand_x);
    // error[E0382]: use of moved value: `expand_x`
    // call(expand_x);
    // A closure that mutates an environment variable automatically implements `FnMut`
    // The mut keyword changes its mutability, making it a mutable binding
    let mut s = "rust".to_string();
    {
        // The closure mutates through itself, so it must be declared mut
        // To mutate the environment variable, `FnMut` must be implemented
        // The compiler-generated closure struct instance requires `&mut self` when
        // calling the `FnMut` method
        let mut c = || s += " rust";
        c();
        // Source of this change: https://github.com/ZhangHanDong/tao-of-rust-codes/issues/103
        // This line would have been an error, but it compiles thanks to NLL.
        c();
        println!("{:?}", s);
    } // Ownership is returned here
    println!("{:?}", s);
    // The case of a closure that implements `FnMut`
    let mut s = "rust".to_string();
    {
        // error[E0525]: expected a closure that implements the `Fn` trait, but this closure only implements `FnMut`
        let mut c = || s += " rust";
        c();
        // The closure implements only `FnMut`, not `Fn`
        // c.call(());
        c.call_once(());
        println!("{:?}", s);
    }
    println!("{:?}", s);
    // A closure that captures no environment variables
    // It captures nothing and uses no `mut` keyword, yet can be called multiple times.
    // This shows that the struct instance the compiler generated has not lost
    // ownership, so the receiver can only be `&self`.
    // Therefore, the closure must implement `Fn`
    let c = || println!("hhh");
    c();
    c();
    // Treating closures as trait objects
    // `Box<Fn()>` is a trait object
    // Putting a closure into `Box<T>` builds a closure trait object
    // Trait objects are dynamically dispatched
    let mut c: Vec<Box<Fn()>> = vec![];
    boxed_closure(&mut c);
    for f in c {
        f();
    }
    // Implementing the any method with a trait bound
    let v = vec![1, 2, 3];
    let b = v.any(|x| x == 3);
    println!("{:?}", b);
    // Function pointers can also be used as closure parameters
    // Function pointers also implement `Fn`
    let result = call_ptr(counter_ptr);
    assert_eq!(2, result);
    // Dispatching a closure dynamically as a trait object
    let v = vec![1, 2, 3];
    let b = v.any_dyn(&|x| x == 3);
    println!("{:?}", b);
    // Testing the `'static` bound
    let s = "hello";
    // let c: Box<Fn() + 'static> = Box::new(move || { s; });
    // error[E0597]: `s` does not live long enough
    // let c: Box<Fn() + 'static> = Box::new(|| { s; });
    // Returning a closure from a function
    let square_rt = square();
    assert_eq!(4, square_rt(2));
    assert_eq!(9, square_rt(3));
    // Specifying the returned closure as `FnOnce`
    // Content changed: https://github.com/ZhangHanDong/tao-of-rust-codes/issues/249
    let square_once_rt = square_once();
    assert_eq!(4, square_once_rt(2));
    // impl Trait example
    let square_impl_rt = square_impl();
    assert_eq!(4, square_impl_rt(2));
    // Lifetime parameters when a generic trait is used as a trait object
    let x = Box::new(&2usize);
    // foo(x);
    bar(x);
    // The case where both the closure's parameter and return value are reference types
    let elm = Pick {
        data: (3, 1),
        func: max,
    };
    println!("{}", elm.call());
}
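The compiler errors annotated throughout `main` show which of `Fn`, `FnMut`, and `FnOnce` a given closure implements. As a compact standalone sketch (separate from the file above; all names here are illustrative, not from the original), the same subtrait hierarchy can be exercised through generic bounds:

```rust
// apply_fn requires Fn: the closure may be called any number of times.
fn apply_fn<F: Fn() -> i32>(f: F) -> i32 { f() + f() }
// apply_fn_mut requires FnMut: the closure may mutate its captures.
fn apply_fn_mut<F: FnMut() -> i32>(mut f: F) -> i32 { f() + f() }
// apply_fn_once requires FnOnce: the closure is consumed by its single call.
fn apply_fn_once<F: FnOnce() -> String>(f: F) -> String { f() }

fn main() {
    let x = 20;
    // Captures x by immutable borrow: implements Fn (and FnMut, FnOnce).
    assert_eq!(apply_fn(|| x + 1), 42);
    let mut n = 0;
    // Captures n by mutable borrow: implements FnMut (and FnOnce), but not Fn.
    assert_eq!(apply_fn_mut(|| { n += 1; n }), 3);
    let s = String::from("once");
    // Moves s out of the capture: implements only FnOnce.
    assert_eq!(apply_fn_once(move || s), "once");
    println!("ok");
}
```

Because every `Fn` closure is also `FnMut`, and every `FnMut` closure is also `FnOnce`, the first closure above could be passed to all three functions.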
deferred_call.rs | // Licensed under the Apache License, Version 2.0 or the MIT License.
// SPDX-License-Identifier: Apache-2.0 OR MIT
// Copyright Tock Contributors 2022.
//! Hardware-independent kernel interface for deferred calls
//!
//! This allows any struct in the kernel which implements
//! [DeferredCallClient](crate::deferred_call::DeferredCallClient)
//! to set and receive deferred calls, Tock's version of software
//! interrupts.
//!
//! These can be used to implement long-running in-kernel algorithms
//! or software devices that are supposed to work like hardware devices.
//! Essentially, this allows the chip to handle more important interrupts,
//! and lets a kernel component return the function call stack up to the scheduler,
//! automatically being called again.
//!
//! Usage
//! -----
//!
//! The `DEFCALLS` array size determines how many
//! [DeferredCall](crate::deferred_call::DeferredCall)s
//! may be registered. By default this is set to 32.
//! To support more deferred calls, this file would need to be modified
//! to use a larger variable for BITMASK (e.g. BITMASK could be a u64
//! and the array size increased to 64).
//! If more than 32 deferred calls are created, the kernel will panic
//! at the beginning of the kernel loop.
//!
//! ```rust
//! use kernel::deferred_call::{DeferredCall, DeferredCallClient};
//! use kernel::static_init;
//!
//! struct SomeCapsule {
//! deferred_call: DeferredCall
//! }
//! impl SomeCapsule {
//! pub fn new() -> Self {
//! Self {
//! deferred_call: DeferredCall::new(),
//! }
//! }
//! }
//! impl DeferredCallClient for SomeCapsule {
//! fn handle_deferred_call(&self) {
//! // Your action here
//! }
//!
//! fn register(&'static self) {
//! self.deferred_call.register(self);
//! }
//! }
//!
//! // main.rs or your component must register the capsule with
//! // its deferred call.
//! // This should look like:
//! let some_capsule = unsafe { static_init!(SomeCapsule, SomeCapsule::new()) };
//! some_capsule.register();
//! ```
use crate::utilities::cells::OptionalCell;
use core::cell::Cell;
use core::marker::Copy;
use core::marker::PhantomData;
// This trait is not intended to be used as a trait object;
// e.g. you should not create a `&dyn DeferredCallClient`.
// The `Sized` supertrait prevents this.
/// This trait should be implemented by clients which need to
/// receive DeferredCalls
pub trait DeferredCallClient: Sized {
fn handle_deferred_call(&self);
fn register(&'static self); // This function should be implemented as
// `self.deferred_call.register(&self);`
}
/// This struct serves as a lightweight alternative to the use of trait objects
/// (e.g. `&dyn DeferredCall`). A trait object would include a 20-byte vtable
/// per instance, while this alternative stores only the data and function pointers,
/// 8 bytes per instance.
#[derive(Copy, Clone)]
struct DynDefCallRef<'a> {
data: *const (),
callback: fn(*const ()),
_lifetime: PhantomData<&'a ()>,
}
impl<'a> DynDefCallRef<'a> {
// SAFETY: We define the callback function as being a closure which casts
// the passed pointer to be the appropriate type (a pointer to `T`)
// and then calls `T::handle_deferred_call()`. In practice, the closure
// is optimized away by LLVM when the ABI of the closure and the underlying function
// are identical, making this zero-cost, but saving us from having to trust
// that `fn(*const ())` and `fn handle_deferred_call(&self)` will always have the same calling
// convention for any type.
fn new<T: DeferredCallClient>(x: &'a T) -> Self {
Self {
data: x as *const _ as *const (),
callback: |p| unsafe { T::handle_deferred_call(&*p.cast()) },
_lifetime: PhantomData,
}
}
}
impl DynDefCallRef<'_> {
// more efficient pass by `self` if we don't have to implement `DeferredCallClient` directly
fn handle_deferred_call(self) {
(self.callback)(self.data)
}
}
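The trick `DynDefCallRef` uses — a raw data pointer paired with a monomorphized trampoline function — can be sketched in isolation. The following is a hypothetical, simplified analogue (not part of Tock; all names are illustrative) that erases a concrete handler type behind a plain `fn` pointer:

```rust
trait Handler {
    fn handle(&self) -> u32;
}

// Type-erased reference: a data pointer plus a fn-pointer trampoline.
// (The real DynDefCallRef also carries a PhantomData lifetime; it is omitted
// here for brevity, so keep the Erased value inside the handler's scope.)
struct Erased {
    data: *const (),
    call: fn(*const ()) -> u32,
}

impl Erased {
    fn new<T: Handler>(x: &T) -> Self {
        Erased {
            data: x as *const T as *const (),
            // Non-capturing closure: it coerces to a plain fn pointer, and the
            // cast back to *const T is monomorphized per T.
            call: |p| unsafe { (*(p as *const T)).handle() },
        }
    }

    fn invoke(&self) -> u32 {
        (self.call)(self.data)
    }
}

struct Counter(u32);
impl Handler for Counter {
    fn handle(&self) -> u32 {
        self.0 + 1
    }
}

fn main() {
    let c = Counter(41);
    let e = Erased::new(&c);
    assert_eq!(e.invoke(), 42);
    println!("ok");
}
```

Compared with `&dyn Handler`, this stores exactly two words per instance and dispatches through a single function pointer rather than a vtable.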
// The below constant lets us get around Rust not allowing short array initialization
// for non-default types
const EMPTY: OptionalCell<DynDefCallRef<'static>> = OptionalCell::empty();
// All 3 of the below global statics are accessed only in this file, and all accesses
// are via immutable references. Tock is single threaded, so each will only ever be
// accessed via an immutable reference from the single kernel thread.
// TODO: Once Tock decides on an approach to replace `static mut` with some sort of
// `SyncCell`, migrate all three of these to that approach
// (https://github.com/tock/tock/issues/1545)
/// Counter for the number of deferred calls that have been created, this is
/// used to track that no more than 32 deferred calls have been created.
static mut CTR: Cell<usize> = Cell::new(0);
/// This bitmask tracks which of the up to 32 existing deferred calls have been scheduled.
/// Any bit that is set in that mask indicates the deferred call with its `idx` field set
/// to the index of that bit has been scheduled and not yet serviced.
static mut BITMASK: Cell<u32> = Cell::new(0);
// This is a 256 byte array, but at least it resides in .bss
/// An array that stores references to up to 32 `DeferredCall`s via the low-cost
/// `DynDefCallRef`.
static mut DEFCALLS: [OptionalCell<DynDefCallRef<'static>>; 32] = [EMPTY; 32];
pub struct DeferredCall {
idx: usize,
}
impl DeferredCall {
/// Creates a new deferred call with a unique ID.
    pub fn new() -> Self {
        // SAFETY: No accesses to CTR are via an &mut, and the Tock kernel is
        // single-threaded so all accesses will occur from this thread.
        let ctr = unsafe { &CTR };
        let idx = ctr.get() + 1;
        ctr.set(idx);
        DeferredCall { idx }
    }
// To reduce monomorphization bloat, the non-generic portion of register is moved into this
// function without generic parameters.
#[inline(never)]
fn register_internal_non_generic(&self, handler: DynDefCallRef<'static>) {
// SAFETY: No accesses to DEFCALLS are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let defcalls = unsafe { &DEFCALLS };
if self.idx >= defcalls.len() {
// This error will be caught by the scheduler at the beginning of the kernel loop,
// which is much better than panicking here, before the debug writer is setup.
// Also allows a single panic for creating too many deferred calls instead
// of NUM_DCS panics (this function is monomorphized).
return;
}
defcalls[self.idx].set(handler);
}
/// This function registers the passed client with this deferred call, such
/// that calls to `DeferredCall::set()` will schedule a callback on the
/// `handle_deferred_call()` method of the passed client.
pub fn register<DC: DeferredCallClient>(&self, client: &'static DC) {
let handler = DynDefCallRef::new(client);
self.register_internal_non_generic(handler);
}
/// Schedule a deferred callback on the client associated with this deferred call
pub fn set(&self) {
// SAFETY: No accesses to BITMASK are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
bitmask.set(bitmask.get() | (1 << self.idx));
}
/// Check if a deferred callback has been set and not yet serviced on this deferred call.
pub fn is_pending(&self) -> bool {
// SAFETY: No accesses to BITMASK are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
        bitmask.get() & (1 << self.idx) != 0
}
/// Services and clears the next pending `DeferredCall`, returns which index
/// was serviced
pub fn service_next_pending() -> Option<usize> {
// SAFETY: No accesses to BITMASK/DEFCALLS are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
let defcalls = unsafe { &DEFCALLS };
let val = bitmask.get();
if val == 0 {
None
} else {
let bit = val.trailing_zeros() as usize;
            let new_val = val & !(1 << bit);
bitmask.set(new_val);
defcalls[bit].map(|dc| {
dc.handle_deferred_call();
bit
})
}
}
/// Returns true if any deferred calls are waiting to be serviced,
/// false otherwise.
pub fn has_tasks() -> bool {
// SAFETY: No accesses to BITMASK are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
        bitmask.get() != 0
}
/// This function should be called at the beginning of the kernel loop
/// to verify that deferred calls have been correctly initialized. This function
/// verifies two things:
/// 1. That <= `DEFCALLS.len()` deferred calls have been created, which is the
/// maximum this interface supports
/// 2. That exactly as many deferred calls were registered as were created, which helps to
/// catch bugs if board maintainers forget to call `register()` on a created `DeferredCall`.
/// Neither of these checks are necessary for soundness, but they are necessary for confirming
/// that DeferredCalls will actually be delivered as expected. This function costs about 300
/// bytes, so you can remove it if you are confident your setup will not exceed 32 deferred
/// calls, and that all of your components register their deferred calls.
pub fn verify_setup() {
// SAFETY: No accesses to CTR/DEFCALLS are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let ctr = unsafe { &CTR };
let defcalls = unsafe { &DEFCALLS };
let num_deferred_calls = ctr.get();
if num_deferred_calls >= defcalls.len()
            || defcalls.iter().filter(|opt| opt.is_some()).count() != num_deferred_calls
{
panic!(
"ERROR: > 32 deferred calls, or a component forgot to register a deferred call."
);
}
}
}
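The scheduling state above boils down to bitmask bookkeeping: `set()` ORs a bit into `BITMASK`, and `service_next_pending()` pops the lowest set bit via `trailing_zeros`. A minimal standalone sketch of that bookkeeping (illustrative names only, not the kernel API):

```rust
// Schedule the deferred call at `idx` by setting its bit.
fn schedule(mask: u32, idx: usize) -> u32 {
    mask | (1 << idx)
}

// Return the next pending index (lowest set bit) and the mask with that
// bit cleared, or None if nothing is pending.
fn next_pending(mask: u32) -> Option<(usize, u32)> {
    if mask == 0 {
        None
    } else {
        let bit = mask.trailing_zeros() as usize;
        Some((bit, mask & !(1 << bit)))
    }
}

fn main() {
    let mut mask = 0u32;
    mask = schedule(mask, 5);
    mask = schedule(mask, 1);
    // The lowest-index pending call is serviced first.
    let (idx, rest) = next_pending(mask).unwrap();
    assert_eq!(idx, 1);
    let (idx, rest) = next_pending(rest).unwrap();
    assert_eq!(idx, 5);
    assert_eq!(next_pending(rest), None);
    println!("ok");
}
```

This also makes the capacity limit concrete: a `u32` mask supports at most 32 distinct indices, which is why growing past 32 deferred calls requires widening `BITMASK` and `DEFCALLS` together.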
deferred_call.rs | // Licensed under the Apache License, Version 2.0 or the MIT License.
// SPDX-License-Identifier: Apache-2.0 OR MIT
// Copyright Tock Contributors 2022.
//! Hardware-independent kernel interface for deferred calls
//!
//! This allows any struct in the kernel which implements
//! [DeferredCallClient](crate::deferred_call::DeferredCallClient)
//! to set and receive deferred calls, Tock's version of software
//! interrupts.
//!
//! These can be used to implement long-running in-kernel algorithms
//! or software devices that are supposed to work like hardware devices.
//! Essentially, this allows the chip to handle more important interrupts,
//! and lets a kernel component return the function call stack up to the scheduler,
//! automatically being called again.
//!
//! Usage
//! -----
//!
//! The `DEFCALLS` array size determines how many
//! [DeferredCall](crate::deferred_call::DeferredCall)s
//! may be registered. By default this is set to 32.
//! To support more deferred calls, this file would need to be modified
//! to use a larger variable for BITMASK (e.g. BITMASK could be a u64
//! and the array size increased to 64).
//! If more than 32 deferred calls are created, the kernel will panic
//! at the beginning of the kernel loop.
//!
//! ```rust
//! use kernel::deferred_call::{DeferredCall, DeferredCallClient};
//! use kernel::static_init;
//!
//! struct SomeCapsule {
//! deferred_call: DeferredCall
//! }
//! impl SomeCapsule {
//! pub fn new() -> Self {
//! Self {
//! deferred_call: DeferredCall::new(),
//! }
//! }
//! }
//! impl DeferredCallClient for SomeCapsule {
//! fn handle_deferred_call(&self) {
//! // Your action here
//! }
//!
//! fn register(&'static self) {
//! self.deferred_call.register(self);
//! }
//! }
//!
//! // main.rs or your component must register the capsule with
//! // its deferred call.
//! // This should look like:
//! let some_capsule = unsafe { static_init!(SomeCapsule, SomeCapsule::new()) };
//! some_capsule.register();
//! ```
use crate::utilities::cells::OptionalCell;
use core::cell::Cell;
use core::marker::Copy;
use core::marker::PhantomData;
// This trait is not intended to be used as a trait object;
// e.g. you should not create a `&dyn DeferredCallClient`.
// The `Sized` supertrait prevents this.
/// This trait should be implemented by clients which need to
/// receive DeferredCalls
pub trait DeferredCallClient: Sized {
fn handle_deferred_call(&self);
fn register(&'static self); // This function should be implemented as
// `self.deferred_call.register(&self);`
}
/// This struct serves as a lightweight alternative to the use of trait objects
/// (e.g. `&dyn DeferredCall`). Using a trait object, will include a 20 byte vtable
/// per instance, but this alternative stores only the data and function pointers,
/// 8 bytes per instance.
#[derive(Copy, Clone)]
struct DynDefCallRef<'a> {
data: *const (),
callback: fn(*const ()),
_lifetime: PhantomData<&'a ()>,
}
impl<'a> DynDefCallRef<'a> {
// SAFETY: We define the callback function as being a closure which casts
// the passed pointer to be the appropriate type (a pointer to `T`)
// and then calls `T::handle_deferred_call()`. In practice, the closure
// is optimized away by LLVM when the ABI of the closure and the underlying function
// are identical, making this zero-cost, but saving us from having to trust
// that `fn(*const ())` and `fn handle_deferred_call(&self)` will always have the same calling
// convention for any type.
fn new<T: DeferredCallClient>(x: &'a T) -> Self {
Self {
data: x as *const _ as *const (),
callback: |p| unsafe { T::handle_deferred_call(&*p.cast()) },
_lifetime: PhantomData,
}
}
}
impl DynDefCallRef<'_> {
// more efficient pass by `self` if we don't have to implement `DeferredCallClient` directly
fn handle_deferred_call(self) {
(self.callback)(self.data)
}
}
// The below constant lets us get around Rust not allowing short array initialization
// for non-default types
const EMPTY: OptionalCell<DynDefCallRef<'static>> = OptionalCell::empty();
// All 3 of the below global statics are accessed only in this file, and all accesses
// are via immutable references. Tock is single threaded, so each will only ever be
// accessed via an immutable reference from the single kernel thread.
// TODO: Once Tock decides on an approach to replace `static mut` with some sort of
// `SyncCell`, migrate all three of these to that approach
// (https://github.com/tock/tock/issues/1545)
/// Counter for the number of deferred calls that have been created, this is
/// used to track that no more than 32 deferred calls have been created.
static mut CTR: Cell<usize> = Cell::new(0);
/// This bitmask tracks which of the up to 32 existing deferred calls have been scheduled.
/// Any bit that is set in that mask indicates the deferred call with its `idx` field set
/// to the index of that bit has been scheduled and not yet serviced.
static mut BITMASK: Cell<u32> = Cell::new(0);
// This is a 256 byte array, but at least resides in.bss
/// An array that stores references to up to 32 `DeferredCall`s via the low-cost
/// `DynDefCallRef`.
static mut DEFCALLS: [OptionalCell<DynDefCallRef<'static>>; 32] = [EMPTY; 32];
pub struct DeferredCall {
idx: usize,
}
impl DeferredCall {
/// Creates a new deferred call with a unique ID.
pub fn new() -> Self {
// SAFETY: No accesses to CTR are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let ctr = unsafe { &CTR };
let idx = ctr.get() + 1;
ctr.set(idx);
DeferredCall { idx }
}
// To reduce monomorphization bloat, the non-generic portion of register is moved into this
// function without generic parameters.
#[inline(never)]
fn register_internal_non_generic(&self, handler: DynDefCallRef<'static>) {
// SAFETY: No accesses to DEFCALLS are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let defcalls = unsafe { &DEFCALLS };
if self.idx >= defcalls.len() {
// This error will be caught by the scheduler at the beginning of the kernel loop,
// which is much better than panicking here, before the debug writer is setup.
// Also allows a single panic for creating too many deferred calls instead
// of NUM_DCS panics (this function is monomorphized).
return;
}
defcalls[self.idx].set(handler);
}
/// This function registers the passed client with this deferred call, such
/// that calls to `DeferredCall::set()` will schedule a callback on the
/// `handle_deferred_call()` method of the passed client.
pub fn register<DC: DeferredCallClient>(&self, client: &'static DC) {
let handler = DynDefCallRef::new(client);
self.register_internal_non_generic(handler);
}
/// Schedule a deferred callback on the client associated with this deferred call
pub fn set(&self) {
// SAFETY: No accesses to BITMASK are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
bitmask.set(bitmask.get() | (1 << self.idx));
}
/// Check if a deferred callback has been set and not yet serviced on this deferred call.
pub fn is_pending(&self) -> bool {
// SAFETY: No accesses to BITMASK are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
bitmask.get() & (1 << self.idx) == 1
}
/// Services and clears the next pending `DeferredCall`, returns which index
/// was serviced
pub fn service_next_pending() -> Option<usize> {
// SAFETY: No accesses to BITMASK/DEFCALLS are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
let defcalls = unsafe { &DEFCALLS };
let val = bitmask.get();
if val == 0 {
None
} else {
let bit = val.trailing_zeros() as usize;
let new_val = val &!(1 << bit);
bitmask.set(new_val);
defcalls[bit].map(|dc| {
dc.handle_deferred_call();
bit
})
}
}
/// Returns true if any deferred calls are waiting to be serviced,
/// false otherwise.
pub fn | () -> bool {
// SAFETY: No accesses to BITMASK are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
bitmask.get()!= 0
}
/// This function should be called at the beginning of the kernel loop
/// to verify that deferred calls have been correctly initialized. This function
/// verifies two things:
/// 1. That <= `DEFCALLS.len()` deferred calls have been created, which is the
/// maximum this interface supports
/// 2. That exactly as many deferred calls were registered as were created, which helps to
/// catch bugs if board maintainers forget to call `register()` on a created `DeferredCall`.
/// Neither of these checks are necessary for soundness, but they are necessary for confirming
/// that DeferredCalls will actually be delivered as expected. This function costs about 300
/// bytes, so you can remove it if you are confident your setup will not exceed 32 deferred
/// calls, and that all of your components register their deferred calls.
pub fn verify_setup() {
// SAFETY: No accesses to CTR/DEFCALLS are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let ctr = unsafe { &CTR };
let defcalls = unsafe { &DEFCALLS };
let num_deferred_calls = ctr.get();
if num_deferred_calls >= defcalls.len()
|| defcalls.iter().filter(|opt| opt.is_some()).count()!= num_deferred_calls
{
panic!(
"ERROR: > 32 deferred calls, or a component forgot to register a deferred call."
);
}
}
}
| has_tasks | identifier_name |
deferred_call.rs | // Licensed under the Apache License, Version 2.0 or the MIT License.
// SPDX-License-Identifier: Apache-2.0 OR MIT
// Copyright Tock Contributors 2022.
//! Hardware-independent kernel interface for deferred calls
//!
//! This allows any struct in the kernel which implements
//! [DeferredCallClient](crate::deferred_call::DeferredCallClient)
//! to set and receive deferred calls, Tock's version of software
//! interrupts.
//!
//! These can be used to implement long-running in-kernel algorithms
//! or software devices that are supposed to work like hardware devices.
//! Essentially, this allows the chip to handle more important interrupts,
//! and lets a kernel component return the function call stack up to the scheduler,
//! automatically being called again.
//!
//! Usage
//! -----
//!
//! The `DEFCALLS` array size determines how many
//! [DeferredCall](crate::deferred_call::DeferredCall)s
//! may be registered. By default this is set to 32.
//! To support more deferred calls, this file would need to be modified
//! to use a larger variable for BITMASK (e.g. BITMASK could be a u64
//! and the array size increased to 64).
//! If more than 32 deferred calls are created, the kernel will panic
//! at the beginning of the kernel loop.
//!
//! ```rust
//! use kernel::deferred_call::{DeferredCall, DeferredCallClient};
//! use kernel::static_init;
//!
//! struct SomeCapsule {
//! deferred_call: DeferredCall
//! }
//! impl SomeCapsule {
//! pub fn new() -> Self {
//! Self {
//! deferred_call: DeferredCall::new(),
//! }
//! }
//! }
//! impl DeferredCallClient for SomeCapsule {
//! fn handle_deferred_call(&self) {
//! // Your action here
//! }
//!
//! fn register(&'static self) {
//! self.deferred_call.register(self);
//! }
//! }
//!
//! // main.rs or your component must register the capsule with
//! // its deferred call.
//! // This should look like:
//! let some_capsule = unsafe { static_init!(SomeCapsule, SomeCapsule::new()) };
//! some_capsule.register();
//! ```
use crate::utilities::cells::OptionalCell;
use core::cell::Cell;
use core::marker::Copy;
use core::marker::PhantomData;
// This trait is not intended to be used as a trait object;
// e.g. you should not create a `&dyn DeferredCallClient`.
// The `Sized` supertrait prevents this.
/// This trait should be implemented by clients which need to
/// receive DeferredCalls
pub trait DeferredCallClient: Sized {
fn handle_deferred_call(&self);
fn register(&'static self); // This function should be implemented as
// `self.deferred_call.register(&self);`
}
/// This struct serves as a lightweight alternative to the use of trait objects
/// (e.g. `&dyn DeferredCall`). Using a trait object will include a 20 byte vtable
/// per instance, but this alternative stores only the data and function pointers,
/// 8 bytes per instance.
#[derive(Copy, Clone)]
struct DynDefCallRef<'a> {
data: *const (),
callback: fn(*const ()),
_lifetime: PhantomData<&'a ()>,
}
impl<'a> DynDefCallRef<'a> {
// SAFETY: We define the callback function as being a closure which casts
// the passed pointer to be the appropriate type (a pointer to `T`)
// and then calls `T::handle_deferred_call()`. In practice, the closure
// is optimized away by LLVM when the ABI of the closure and the underlying function
// are identical, making this zero-cost, but saving us from having to trust
// that `fn(*const ())` and `fn handle_deferred_call(&self)` will always have the same calling
// convention for any type.
fn new<T: DeferredCallClient>(x: &'a T) -> Self {
Self {
data: x as *const _ as *const (),
callback: |p| unsafe { T::handle_deferred_call(&*p.cast()) },
_lifetime: PhantomData,
}
}
}
impl DynDefCallRef<'_> {
// more efficient pass by `self` if we don't have to implement `DeferredCallClient` directly
fn handle_deferred_call(self) {
(self.callback)(self.data)
}
}
// The below constant lets us get around Rust not allowing short array initialization
// for non-default types
const EMPTY: OptionalCell<DynDefCallRef<'static>> = OptionalCell::empty();
// All 3 of the below global statics are accessed only in this file, and all accesses
// are via immutable references. Tock is single threaded, so each will only ever be
// accessed via an immutable reference from the single kernel thread.
// TODO: Once Tock decides on an approach to replace `static mut` with some sort of
// `SyncCell`, migrate all three of these to that approach
// (https://github.com/tock/tock/issues/1545)
/// Counter for the number of deferred calls that have been created; it is
/// used to check that no more than 32 deferred calls have been created.
static mut CTR: Cell<usize> = Cell::new(0);
/// This bitmask tracks which of the up to 32 existing deferred calls have been scheduled.
/// Any bit that is set in that mask indicates the deferred call with its `idx` field set
/// to the index of that bit has been scheduled and not yet serviced.
static mut BITMASK: Cell<u32> = Cell::new(0);
// This is a 256 byte array, but at least it resides in .bss
/// An array that stores references to up to 32 `DeferredCall`s via the low-cost
/// `DynDefCallRef`.
static mut DEFCALLS: [OptionalCell<DynDefCallRef<'static>>; 32] = [EMPTY; 32];
pub struct DeferredCall {
idx: usize,
}
impl DeferredCall {
/// Creates a new deferred call with a unique ID.
pub fn new() -> Self {
// SAFETY: No accesses to CTR are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let ctr = unsafe { &CTR };
let idx = ctr.get() + 1;
ctr.set(idx);
DeferredCall { idx }
}
// To reduce monomorphization bloat, the non-generic portion of register is moved into this
// function without generic parameters.
#[inline(never)]
fn register_internal_non_generic(&self, handler: DynDefCallRef<'static>) {
// SAFETY: No accesses to DEFCALLS are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let defcalls = unsafe { &DEFCALLS };
if self.idx >= defcalls.len() {
// This error will be caught by the scheduler at the beginning of the kernel loop,
// which is much better than panicking here, before the debug writer is setup.
// Also allows a single panic for creating too many deferred calls instead
// of NUM_DCS panics (this function is monomorphized).
return;
}
defcalls[self.idx].set(handler);
}
/// This function registers the passed client with this deferred call, such
/// that calls to `DeferredCall::set()` will schedule a callback on the
/// `handle_deferred_call()` method of the passed client.
pub fn register<DC: DeferredCallClient>(&self, client: &'static DC) {
let handler = DynDefCallRef::new(client);
self.register_internal_non_generic(handler);
}
/// Schedule a deferred callback on the client associated with this deferred call
pub fn set(&self) {
// SAFETY: No accesses to BITMASK are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
bitmask.set(bitmask.get() | (1 << self.idx));
}
/// Check if a deferred callback has been set and not yet serviced on this deferred call.
pub fn is_pending(&self) -> bool {
// SAFETY: No accesses to BITMASK are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
bitmask.get() & (1 << self.idx) != 0
}
/// Services and clears the next pending `DeferredCall`, returns which index
/// was serviced
pub fn service_next_pending() -> Option<usize> {
// SAFETY: No accesses to BITMASK/DEFCALLS are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
let defcalls = unsafe { &DEFCALLS };
let val = bitmask.get();
if val == 0 {
None
} else {
let bit = val.trailing_zeros() as usize;
let new_val = val & !(1 << bit);
bitmask.set(new_val);
defcalls[bit].map(|dc| {
dc.handle_deferred_call();
bit
})
}
}
/// Returns true if any deferred calls are waiting to be serviced,
/// false otherwise.
pub fn has_tasks() -> bool {
// SAFETY: No accesses to BITMASK are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
bitmask.get() != 0
}
/// This function should be called at the beginning of the kernel loop
/// to verify that deferred calls have been correctly initialized. This function
/// verifies two things:
/// 1. That <= `DEFCALLS.len()` deferred calls have been created, which is the
/// maximum this interface supports
/// 2. That exactly as many deferred calls were registered as were created, which helps to
/// catch bugs if board maintainers forget to call `register()` on a created `DeferredCall`.
/// Neither of these checks are necessary for soundness, but they are necessary for confirming
/// that DeferredCalls will actually be delivered as expected. This function costs about 300
/// bytes, so you can remove it if you are confident your setup will not exceed 32 deferred
/// calls, and that all of your components register their deferred calls.
pub fn verify_setup() {
// SAFETY: No accesses to CTR/DEFCALLS are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let ctr = unsafe { &CTR };
let defcalls = unsafe { &DEFCALLS };
let num_deferred_calls = ctr.get();
if num_deferred_calls >= defcalls.len()
|| defcalls.iter().filter(|opt| opt.is_some()).count() != num_deferred_calls
|
}
}
| {
panic!(
"ERROR: > 32 deferred calls, or a component forgot to register a deferred call."
);
} | conditional_block |
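The `DynDefCallRef` comments above describe trading a fat `&dyn` reference for a thin pair of pointers: the data pointer plus a monomorphized trampoline function. A standalone sketch of that type-erasure pattern follows; all names here (`Client`, `ErasedRef`, `Counter`) are illustrative, not Tock APIs.

```rust
// Illustrative re-creation of the DynDefCallRef trick: erase the concrete
// client type behind a (data pointer, fn pointer) pair.
trait Client {
    fn handle(&self);
}

#[derive(Copy, Clone)]
struct ErasedRef {
    data: *const (),
    // Monomorphized trampoline: casts `data` back and dispatches statically.
    callback: fn(*const ()),
}

impl ErasedRef {
    fn new<T: Client>(x: &'static T) -> Self {
        ErasedRef {
            data: x as *const T as *const (),
            // A capture-free closure coerces to `fn(*const ())`.
            callback: |p| unsafe { T::handle(&*(p as *const T)) },
        }
    }
    fn invoke(self) {
        (self.callback)(self.data)
    }
}

use std::cell::Cell;

struct Counter {
    hits: Cell<u32>,
}
impl Client for Counter {
    fn handle(&self) {
        self.hits.set(self.hits.get() + 1);
    }
}

fn main() {
    // `ErasedRef` is exactly two words, versus the data pointer + vtable
    // pointer (plus the per-type vtable itself) that `&dyn Client` costs.
    assert_eq!(
        std::mem::size_of::<ErasedRef>(),
        2 * std::mem::size_of::<usize>()
    );
    // Leak to obtain a `'static` reference, mirroring Tock's static clients.
    let counter: &'static Counter = Box::leak(Box::new(Counter { hits: Cell::new(0) }));
    let r = ErasedRef::new(counter);
    r.invoke();
    r.invoke();
    assert_eq!(counter.hits.get(), 2);
}
```

The closure-to-`fn`-pointer coercion is what makes this zero-cost: a closure that captures nothing compiles to a plain function, so no allocation or indirection beyond the single call through `callback` is introduced.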
deferred_call.rs | // Licensed under the Apache License, Version 2.0 or the MIT License.
// SPDX-License-Identifier: Apache-2.0 OR MIT
// Copyright Tock Contributors 2022.
//! Hardware-independent kernel interface for deferred calls
//!
//! This allows any struct in the kernel which implements
//! [DeferredCallClient](crate::deferred_call::DeferredCallClient)
//! to set and receive deferred calls, Tock's version of software
//! interrupts.
//!
//! These can be used to implement long-running in-kernel algorithms
//! or software devices that are supposed to work like hardware devices.
//! Essentially, this allows the chip to handle more important interrupts,
//! and lets a kernel component return up the function call stack to the
//! scheduler, to be called again automatically later.
//!
//! Usage
//! -----
//!
//! The `DEFCALLS` array size determines how many
//! [DeferredCall](crate::deferred_call::DeferredCall)s
//! may be registered. By default this is set to 32.
//! To support more deferred calls, this file would need to be modified
//! to use a larger variable for BITMASK (e.g. BITMASK could be a u64
//! and the array size increased to 64).
//! If more than 32 deferred calls are created, the kernel will panic
//! at the beginning of the kernel loop.
//!
//! ```rust
//! use kernel::deferred_call::{DeferredCall, DeferredCallClient};
//! use kernel::static_init;
//!
//! struct SomeCapsule {
//! deferred_call: DeferredCall
//! }
//! impl SomeCapsule {
//! pub fn new() -> Self {
//! Self {
//! deferred_call: DeferredCall::new(),
//! }
//! }
//! }
//! impl DeferredCallClient for SomeCapsule {
//! fn handle_deferred_call(&self) {
//! // Your action here
//! }
//!
//! fn register(&'static self) {
//! self.deferred_call.register(self);
//! }
//! }
//!
//! // main.rs or your component must register the capsule with
//! // its deferred call.
//! // This should look like:
//! let some_capsule = unsafe { static_init!(SomeCapsule, SomeCapsule::new()) };
//! some_capsule.register();
//! ```
use crate::utilities::cells::OptionalCell;
use core::cell::Cell;
use core::marker::Copy;
use core::marker::PhantomData;
// This trait is not intended to be used as a trait object;
// e.g. you should not create a `&dyn DeferredCallClient`.
// The `Sized` supertrait prevents this.
/// This trait should be implemented by clients which need to
/// receive DeferredCalls
pub trait DeferredCallClient: Sized {
fn handle_deferred_call(&self);
fn register(&'static self); // This function should be implemented as
// `self.deferred_call.register(&self);`
}
/// This struct serves as a lightweight alternative to the use of trait objects
/// (e.g. `&dyn DeferredCall`). Using a trait object will include a 20 byte vtable
/// per instance, but this alternative stores only the data and function pointers,
/// 8 bytes per instance.
#[derive(Copy, Clone)]
struct DynDefCallRef<'a> {
data: *const (),
callback: fn(*const ()),
_lifetime: PhantomData<&'a ()>,
}
impl<'a> DynDefCallRef<'a> {
// SAFETY: We define the callback function as being a closure which casts
// the passed pointer to be the appropriate type (a pointer to `T`)
// and then calls `T::handle_deferred_call()`. In practice, the closure
// is optimized away by LLVM when the ABI of the closure and the underlying function
// are identical, making this zero-cost, but saving us from having to trust
// that `fn(*const ())` and `fn handle_deferred_call(&self)` will always have the same calling
// convention for any type.
fn new<T: DeferredCallClient>(x: &'a T) -> Self {
Self {
data: x as *const _ as *const (),
callback: |p| unsafe { T::handle_deferred_call(&*p.cast()) },
_lifetime: PhantomData,
}
}
}
impl DynDefCallRef<'_> {
// more efficient pass by `self` if we don't have to implement `DeferredCallClient` directly
fn handle_deferred_call(self) {
(self.callback)(self.data)
}
}
// The below constant lets us get around Rust not allowing short array initialization
// for non-default types
const EMPTY: OptionalCell<DynDefCallRef<'static>> = OptionalCell::empty();
// All 3 of the below global statics are accessed only in this file, and all accesses
// are via immutable references. Tock is single threaded, so each will only ever be
// accessed via an immutable reference from the single kernel thread.
// TODO: Once Tock decides on an approach to replace `static mut` with some sort of
// `SyncCell`, migrate all three of these to that approach
// (https://github.com/tock/tock/issues/1545)
/// Counter for the number of deferred calls that have been created; it is
/// used to check that no more than 32 deferred calls have been created.
static mut CTR: Cell<usize> = Cell::new(0);
/// This bitmask tracks which of the up to 32 existing deferred calls have been scheduled.
/// Any bit that is set in that mask indicates the deferred call with its `idx` field set
/// to the index of that bit has been scheduled and not yet serviced.
static mut BITMASK: Cell<u32> = Cell::new(0);
// This is a 256 byte array, but at least it resides in .bss
/// An array that stores references to up to 32 `DeferredCall`s via the low-cost
/// `DynDefCallRef`.
static mut DEFCALLS: [OptionalCell<DynDefCallRef<'static>>; 32] = [EMPTY; 32];
pub struct DeferredCall {
idx: usize,
}
impl DeferredCall {
/// Creates a new deferred call with a unique ID.
pub fn new() -> Self { | DeferredCall { idx }
}
// To reduce monomorphization bloat, the non-generic portion of register is moved into this
// function without generic parameters.
#[inline(never)]
fn register_internal_non_generic(&self, handler: DynDefCallRef<'static>) {
// SAFETY: No accesses to DEFCALLS are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let defcalls = unsafe { &DEFCALLS };
if self.idx >= defcalls.len() {
// This error will be caught by the scheduler at the beginning of the kernel loop,
// which is much better than panicking here, before the debug writer is setup.
// Also allows a single panic for creating too many deferred calls instead
// of NUM_DCS panics (this function is monomorphized).
return;
}
defcalls[self.idx].set(handler);
}
/// This function registers the passed client with this deferred call, such
/// that calls to `DeferredCall::set()` will schedule a callback on the
/// `handle_deferred_call()` method of the passed client.
pub fn register<DC: DeferredCallClient>(&self, client: &'static DC) {
let handler = DynDefCallRef::new(client);
self.register_internal_non_generic(handler);
}
/// Schedule a deferred callback on the client associated with this deferred call
pub fn set(&self) {
// SAFETY: No accesses to BITMASK are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
bitmask.set(bitmask.get() | (1 << self.idx));
}
/// Check if a deferred callback has been set and not yet serviced on this deferred call.
pub fn is_pending(&self) -> bool {
// SAFETY: No accesses to BITMASK are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
bitmask.get() & (1 << self.idx) != 0
}
/// Services and clears the next pending `DeferredCall`, returns which index
/// was serviced
pub fn service_next_pending() -> Option<usize> {
// SAFETY: No accesses to BITMASK/DEFCALLS are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
let defcalls = unsafe { &DEFCALLS };
let val = bitmask.get();
if val == 0 {
None
} else {
let bit = val.trailing_zeros() as usize;
let new_val = val & !(1 << bit);
bitmask.set(new_val);
defcalls[bit].map(|dc| {
dc.handle_deferred_call();
bit
})
}
}
/// Returns true if any deferred calls are waiting to be serviced,
/// false otherwise.
pub fn has_tasks() -> bool {
// SAFETY: No accesses to BITMASK are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let bitmask = unsafe { &BITMASK };
bitmask.get() != 0
}
/// This function should be called at the beginning of the kernel loop
/// to verify that deferred calls have been correctly initialized. This function
/// verifies two things:
/// 1. That <= `DEFCALLS.len()` deferred calls have been created, which is the
/// maximum this interface supports
/// 2. That exactly as many deferred calls were registered as were created, which helps to
/// catch bugs if board maintainers forget to call `register()` on a created `DeferredCall`.
/// Neither of these checks are necessary for soundness, but they are necessary for confirming
/// that DeferredCalls will actually be delivered as expected. This function costs about 300
/// bytes, so you can remove it if you are confident your setup will not exceed 32 deferred
/// calls, and that all of your components register their deferred calls.
pub fn verify_setup() {
// SAFETY: No accesses to CTR/DEFCALLS are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let ctr = unsafe { &CTR };
let defcalls = unsafe { &DEFCALLS };
let num_deferred_calls = ctr.get();
if num_deferred_calls >= defcalls.len()
|| defcalls.iter().filter(|opt| opt.is_some()).count() != num_deferred_calls
{
panic!(
"ERROR: > 32 deferred calls, or a component forgot to register a deferred call."
);
}
}
} | // SAFETY: No accesses to CTR are via an &mut, and the Tock kernel is
// single-threaded so all accesses will occur from this thread.
let ctr = unsafe { &CTR };
let idx = ctr.get() + 1;
ctr.set(idx); | random_line_split |
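The scheduling half of `DeferredCall` above is plain bitmask bookkeeping: `set()` ORs in the call's bit, `is_pending()` tests it, and `service_next_pending()` pops the lowest set bit. A minimal pure-function sketch of that bookkeeping (function names are illustrative, not the kernel's API):

```rust
// Pure-function sketch of the BITMASK bookkeeping: scheduling a deferred
// call sets bit `idx`; servicing pops the lowest set bit first.
fn schedule(mask: u32, idx: usize) -> u32 {
    mask | (1 << idx)
}

fn is_pending(mask: u32, idx: usize) -> bool {
    mask & (1 << idx) != 0
}

/// Returns the updated mask and the serviced index, if any call was pending.
fn service_next(mask: u32) -> Option<(u32, usize)> {
    if mask == 0 {
        None
    } else {
        let bit = mask.trailing_zeros() as usize;
        Some((mask & !(1 << bit), bit))
    }
}

fn main() {
    let mut mask = 0u32;
    mask = schedule(mask, 5);
    mask = schedule(mask, 2);
    assert!(is_pending(mask, 5) && is_pending(mask, 2));
    // The lowest index is always serviced first.
    let (mask, first) = service_next(mask).unwrap();
    assert_eq!(first, 2);
    let (mask, second) = service_next(mask).unwrap();
    assert_eq!(second, 5);
    assert_eq!(service_next(mask), None);
}
```

Note the servicing order implied by `trailing_zeros()`: lower-numbered deferred calls win ties, which is why creation order matters if components care about relative priority.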
date.rs | // This file is part of the uutils coreutils package.
//
// For the full copyright and license information, please view the LICENSE
// file that was distributed with this source code.
// spell-checker:ignore (chrono) Datelike Timelike ; (format) DATEFILE MMDDhhmm ; (vars) datetime datetimes
use chrono::format::{Item, StrftimeItems};
use chrono::{DateTime, Duration, FixedOffset, Local, Offset, Utc};
#[cfg(windows)]
use chrono::{Datelike, Timelike};
use clap::{crate_version, Arg, ArgAction, Command};
#[cfg(all(unix, not(target_os = "macos"), not(target_os = "redox")))]
use libc::{clock_settime, timespec, CLOCK_REALTIME};
use std::fs::File;
use std::io::{BufRead, BufReader};
use std::path::PathBuf;
use uucore::display::Quotable;
#[cfg(not(any(target_os = "redox")))]
use uucore::error::FromIo;
use uucore::error::{UResult, USimpleError};
use uucore::{format_usage, help_about, help_usage, show};
#[cfg(windows)]
use windows_sys::Win32::{Foundation::SYSTEMTIME, System::SystemInformation::SetSystemTime};
use uucore::shortcut_value_parser::ShortcutValueParser;
// Options
const DATE: &str = "date";
const HOURS: &str = "hours";
const MINUTES: &str = "minutes";
const SECONDS: &str = "seconds";
const NS: &str = "ns";
const ABOUT: &str = help_about!("date.md");
const USAGE: &str = help_usage!("date.md");
const OPT_DATE: &str = "date";
const OPT_FORMAT: &str = "format";
const OPT_FILE: &str = "file";
const OPT_DEBUG: &str = "debug";
const OPT_ISO_8601: &str = "iso-8601";
const OPT_RFC_EMAIL: &str = "rfc-email";
const OPT_RFC_3339: &str = "rfc-3339";
const OPT_SET: &str = "set";
const OPT_REFERENCE: &str = "reference";
const OPT_UNIVERSAL: &str = "universal";
const OPT_UNIVERSAL_2: &str = "utc";
// Help strings
static ISO_8601_HELP_STRING: &str = "output date/time in ISO 8601 format.
FMT='date' for date only (the default),
'hours','minutes','seconds', or 'ns'
for date and time to the indicated precision.
Example: 2006-08-14T02:34:56-06:00";
static RFC_5322_HELP_STRING: &str = "output date and time in RFC 5322 format.
Example: Mon, 14 Aug 2006 02:34:56 -0600";
static RFC_3339_HELP_STRING: &str = "output date/time in RFC 3339 format.
FMT='date','seconds', or 'ns'
for date and time to the indicated precision.
Example: 2006-08-14 02:34:56-06:00";
#[cfg(not(any(target_os = "macos", target_os = "redox")))]
static OPT_SET_HELP_STRING: &str = "set time described by STRING";
#[cfg(target_os = "macos")]
static OPT_SET_HELP_STRING: &str = "set time described by STRING (not available on mac yet)";
#[cfg(target_os = "redox")]
static OPT_SET_HELP_STRING: &str = "set time described by STRING (not available on redox yet)";
/// Settings for this program, parsed from the command line
struct Settings {
utc: bool,
format: Format,
date_source: DateSource,
set_to: Option<DateTime<FixedOffset>>,
}
/// Various ways of displaying the date
enum Format {
Iso8601(Iso8601Format),
Rfc5322,
Rfc3339(Rfc3339Format),
Custom(String),
Default,
}
/// Various places that dates can come from
enum DateSource {
Now,
Custom(String),
File(PathBuf),
Human(Duration),
}
enum Iso8601Format {
Date,
Hours,
Minutes,
Seconds,
Ns,
}
impl<'a> From<&'a str> for Iso8601Format {
fn from(s: &str) -> Self {
match s {
HOURS => Self::Hours,
MINUTES => Self::Minutes,
SECONDS => Self::Seconds,
NS => Self::Ns,
DATE => Self::Date,
// Note: This is caught by clap via `possible_values`
_ => unreachable!(),
}
}
}
enum | {
Date,
Seconds,
Ns,
}
impl<'a> From<&'a str> for Rfc3339Format {
fn from(s: &str) -> Self {
match s {
DATE => Self::Date,
SECONDS => Self::Seconds,
NS => Self::Ns,
// Should be caught by clap
_ => panic!("Invalid format: {s}"),
}
}
}
#[uucore::main]
#[allow(clippy::cognitive_complexity)]
pub fn uumain(args: impl uucore::Args) -> UResult<()> {
let matches = uu_app().try_get_matches_from(args)?;
let format = if let Some(form) = matches.get_one::<String>(OPT_FORMAT) {
if !form.starts_with('+') {
return Err(USimpleError::new(
1,
format!("invalid date {}", form.quote()),
));
}
let form = form[1..].to_string();
Format::Custom(form)
} else if let Some(fmt) = matches
.get_many::<String>(OPT_ISO_8601)
.map(|mut iter| iter.next().unwrap_or(&DATE.to_string()).as_str().into())
{
Format::Iso8601(fmt)
} else if matches.get_flag(OPT_RFC_EMAIL) {
Format::Rfc5322
} else if let Some(fmt) = matches
.get_one::<String>(OPT_RFC_3339)
.map(|s| s.as_str().into())
{
Format::Rfc3339(fmt)
} else {
Format::Default
};
let date_source = if let Some(date) = matches.get_one::<String>(OPT_DATE) {
if let Ok(duration) = parse_datetime::from_str(date.as_str()) {
DateSource::Human(duration)
} else {
DateSource::Custom(date.into())
}
} else if let Some(file) = matches.get_one::<String>(OPT_FILE) {
DateSource::File(file.into())
} else {
DateSource::Now
};
let set_to = match matches.get_one::<String>(OPT_SET).map(parse_date) {
None => None,
Some(Err((input, _err))) => {
return Err(USimpleError::new(
1,
format!("invalid date {}", input.quote()),
));
}
Some(Ok(date)) => Some(date),
};
let settings = Settings {
utc: matches.get_flag(OPT_UNIVERSAL),
format,
date_source,
set_to,
};
if let Some(date) = settings.set_to {
// All set time functions expect UTC datetimes.
let date: DateTime<Utc> = if settings.utc {
date.with_timezone(&Utc)
} else {
date.into()
};
return set_system_datetime(date);
} else {
// Get the current time, either in the local time zone or UTC.
let now: DateTime<FixedOffset> = if settings.utc {
let now = Utc::now();
now.with_timezone(&now.offset().fix())
} else {
let now = Local::now();
now.with_timezone(now.offset())
};
// Iterate over all dates - whether it's a single date or a file.
let dates: Box<dyn Iterator<Item = _>> = match settings.date_source {
DateSource::Custom(ref input) => {
let date = parse_date(input.clone());
let iter = std::iter::once(date);
Box::new(iter)
}
DateSource::Human(relative_time) => {
// Get the current DateTime<FixedOffset> for things like "1 year ago"
let current_time = DateTime::<FixedOffset>::from(Local::now());
// double check the result is overflow or not of the current_time + relative_time
// it may cause a panic of chrono::datetime::DateTime add
match current_time.checked_add_signed(relative_time) {
Some(date) => {
let iter = std::iter::once(Ok(date));
Box::new(iter)
}
None => {
return Err(USimpleError::new(
1,
format!("invalid date {}", relative_time),
));
}
}
}
DateSource::File(ref path) => {
if path.is_dir() {
return Err(USimpleError::new(
2,
format!("expected file, got directory {}", path.quote()),
));
}
let file = File::open(path)
.map_err_context(|| path.as_os_str().to_string_lossy().to_string())?;
let lines = BufReader::new(file).lines();
let iter = lines.map_while(Result::ok).map(parse_date);
Box::new(iter)
}
DateSource::Now => {
let iter = std::iter::once(Ok(now));
Box::new(iter)
}
};
let format_string = make_format_string(&settings);
// Format all the dates
for date in dates {
match date {
Ok(date) => {
// GNU `date` uses `%N` for nano seconds, however crate::chrono uses `%f`
let format_string = &format_string.replace("%N", "%f");
// Refuse to pass this string to chrono as it is crashing in this crate
if format_string.contains("%#z") {
return Err(USimpleError::new(
1,
format!("invalid format {}", format_string.replace("%f", "%N")),
));
}
// Hack to work around panic in chrono,
// TODO - remove when a fix for https://github.com/chronotope/chrono/issues/623 is released
let format_items = StrftimeItems::new(format_string);
if format_items.clone().any(|i| i == Item::Error) {
return Err(USimpleError::new(
1,
format!("invalid format {}", format_string.replace("%f", "%N")),
));
}
let formatted = date
.format_with_items(format_items)
.to_string()
.replace("%f", "%N");
println!("{formatted}");
}
Err((input, _err)) => show!(USimpleError::new(
1,
format!("invalid date {}", input.quote())
)),
}
}
}
Ok(())
}
pub fn uu_app() -> Command {
Command::new(uucore::util_name())
.version(crate_version!())
.about(ABOUT)
.override_usage(format_usage(USAGE))
.infer_long_args(true)
.arg(
Arg::new(OPT_DATE)
.short('d')
.long(OPT_DATE)
.value_name("STRING")
.help("display time described by STRING, not 'now'"),
)
.arg(
Arg::new(OPT_FILE)
.short('f')
.long(OPT_FILE)
.value_name("DATEFILE")
.value_hint(clap::ValueHint::FilePath)
.help("like --date; once for each line of DATEFILE"),
)
.arg(
Arg::new(OPT_ISO_8601)
.short('I')
.long(OPT_ISO_8601)
.value_name("FMT")
.value_parser(ShortcutValueParser::new([
DATE, HOURS, MINUTES, SECONDS, NS,
]))
.num_args(0..=1)
.default_missing_value(OPT_DATE)
.help(ISO_8601_HELP_STRING),
)
.arg(
Arg::new(OPT_RFC_EMAIL)
.short('R')
.long(OPT_RFC_EMAIL)
.help(RFC_5322_HELP_STRING)
.action(ArgAction::SetTrue),
)
.arg(
Arg::new(OPT_RFC_3339)
.long(OPT_RFC_3339)
.value_name("FMT")
.value_parser(ShortcutValueParser::new([DATE, SECONDS, NS]))
.help(RFC_3339_HELP_STRING),
)
.arg(
Arg::new(OPT_DEBUG)
.long(OPT_DEBUG)
.help("annotate the parsed date, and warn about questionable usage to stderr")
.action(ArgAction::SetTrue),
)
.arg(
Arg::new(OPT_REFERENCE)
.short('r')
.long(OPT_REFERENCE)
.value_name("FILE")
.value_hint(clap::ValueHint::AnyPath)
.help("display the last modification time of FILE"),
)
.arg(
Arg::new(OPT_SET)
.short('s')
.long(OPT_SET)
.value_name("STRING")
.help(OPT_SET_HELP_STRING),
)
.arg(
Arg::new(OPT_UNIVERSAL)
.short('u')
.long(OPT_UNIVERSAL)
.alias(OPT_UNIVERSAL_2)
.help("print or set Coordinated Universal Time (UTC)")
.action(ArgAction::SetTrue),
)
.arg(Arg::new(OPT_FORMAT))
}
/// Return the appropriate format string for the given settings.
fn make_format_string(settings: &Settings) -> &str {
match settings.format {
Format::Iso8601(ref fmt) => match *fmt {
Iso8601Format::Date => "%F",
Iso8601Format::Hours => "%FT%H%:z",
Iso8601Format::Minutes => "%FT%H:%M%:z",
Iso8601Format::Seconds => "%FT%T%:z",
Iso8601Format::Ns => "%FT%T,%f%:z",
},
Format::Rfc5322 => "%a, %d %h %Y %T %z",
Format::Rfc3339(ref fmt) => match *fmt {
Rfc3339Format::Date => "%F",
Rfc3339Format::Seconds => "%F %T%:z",
Rfc3339Format::Ns => "%F %T.%f%:z",
},
Format::Custom(ref fmt) => fmt,
Format::Default => "%c",
}
}
/// Parse a `String` into a `DateTime`.
/// If it fails, return a tuple of the `String` along with its `ParseError`.
fn parse_date<S: AsRef<str> + Clone>(
s: S,
) -> Result<DateTime<FixedOffset>, (String, chrono::format::ParseError)> {
// TODO: The GNU date command can parse a wide variety of inputs.
s.as_ref().parse().map_err(|e| (s.as_ref().into(), e))
}
#[cfg(not(any(unix, windows)))]
fn set_system_datetime(_date: DateTime<Utc>) -> UResult<()> {
unimplemented!("setting date not implemented (unsupported target)");
}
#[cfg(target_os = "macos")]
fn set_system_datetime(_date: DateTime<Utc>) -> UResult<()> {
Err(USimpleError::new(
1,
"setting the date is not supported by macOS".to_string(),
))
}
#[cfg(target_os = "redox")]
fn set_system_datetime(_date: DateTime<Utc>) -> UResult<()> {
Err(USimpleError::new(
1,
"setting the date is not supported by Redox".to_string(),
))
}
#[cfg(all(unix, not(target_os = "macos"), not(target_os = "redox")))]
/// System call to set date (unix).
/// See here for more:
/// `<https://doc.rust-lang.org/libc/i686-unknown-linux-gnu/libc/fn.clock_settime.html>`
/// `<https://linux.die.net/man/3/clock_settime>`
/// `<https://www.gnu.org/software/libc/manual/html_node/Time-Types.html>`
fn set_system_datetime(date: DateTime<Utc>) -> UResult<()> {
let timespec = timespec {
tv_sec: date.timestamp() as _,
tv_nsec: date.timestamp_subsec_nanos() as _,
};
let result = unsafe { clock_settime(CLOCK_REALTIME, &timespec) };
if result == 0 {
Ok(())
} else {
Err(std::io::Error::last_os_error().map_err_context(|| "cannot set date".to_string()))
}
}
#[cfg(windows)]
/// System call to set date (Windows).
/// See here for more:
/// https://docs.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-setsystemtime
/// https://docs.microsoft.com/en-us/windows/win32/api/minwinbase/ns-minwinbase-systemtime
fn set_system_datetime(date: DateTime<Utc>) -> UResult<()> {
let system_time = SYSTEMTIME {
wYear: date.year() as u16,
wMonth: date.month() as u16,
// Ignored
wDayOfWeek: 0,
wDay: date.day() as u16,
wHour: date.hour() as u16,
wMinute: date.minute() as u16,
wSecond: date.second() as u16,
// TODO: be careful of leap seconds - valid range is [0, 999] - how to handle?
wMilliseconds: ((date.nanosecond() / 1_000_000) % 1000) as u16,
};
let result = unsafe { SetSystemTime(&system_time) };
if result == 0 {
Err(std::io::Error::last_os_error().map_err_context(|| "cannot set date".to_string()))
} else {
Ok(())
}
}
| Rfc3339Format | identifier_name |
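`uumain` above translates between GNU `date`'s `%N` nanosecond directive and chrono's `%f` before formatting, then translates back when reporting errors or printing output. A minimal sketch of that token round-trip; the helper names are illustrative, the real code does the `replace` calls inline.

```rust
// GNU date spells nanoseconds `%N`; chrono's strftime spells them `%f`.
// The utility rewrites the format string on the way into chrono and
// restores the GNU spelling on the way back out.
fn gnu_to_chrono(fmt: &str) -> String {
    fmt.replace("%N", "%f")
}

fn chrono_to_gnu(fmt: &str) -> String {
    fmt.replace("%f", "%N")
}

fn main() {
    let user_fmt = "%F %T.%N";
    let chrono_fmt = gnu_to_chrono(user_fmt);
    assert_eq!(chrono_fmt, "%F %T.%f");
    // Error messages are reported in the user's original spelling.
    assert_eq!(chrono_to_gnu(&chrono_fmt), user_fmt);
}
```

One design consequence worth noting: the round-trip is lossy if the user's format already contained a literal `%f`, a quirk shared by the replace-based approach in the code above.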
date.rs | // This file is part of the uutils coreutils package.
//
// For the full copyright and license information, please view the LICENSE
// file that was distributed with this source code.
// spell-checker:ignore (chrono) Datelike Timelike ; (format) DATEFILE MMDDhhmm ; (vars) datetime datetimes
use chrono::format::{Item, StrftimeItems};
use chrono::{DateTime, Duration, FixedOffset, Local, Offset, Utc};
#[cfg(windows)]
use chrono::{Datelike, Timelike};
use clap::{crate_version, Arg, ArgAction, Command};
#[cfg(all(unix, not(target_os = "macos"), not(target_os = "redox")))]
use libc::{clock_settime, timespec, CLOCK_REALTIME};
use std::fs::File;
use std::io::{BufRead, BufReader};
use std::path::PathBuf;
use uucore::display::Quotable;
#[cfg(not(any(target_os = "redox")))]
use uucore::error::FromIo;
use uucore::error::{UResult, USimpleError};
use uucore::{format_usage, help_about, help_usage, show};
#[cfg(windows)]
use windows_sys::Win32::{Foundation::SYSTEMTIME, System::SystemInformation::SetSystemTime};
use uucore::shortcut_value_parser::ShortcutValueParser;
// Options
const DATE: &str = "date";
const HOURS: &str = "hours";
const MINUTES: &str = "minutes";
const SECONDS: &str = "seconds";
const NS: &str = "ns";
const ABOUT: &str = help_about!("date.md");
const USAGE: &str = help_usage!("date.md");
const OPT_DATE: &str = "date";
const OPT_FORMAT: &str = "format";
const OPT_FILE: &str = "file";
const OPT_DEBUG: &str = "debug";
const OPT_ISO_8601: &str = "iso-8601";
const OPT_RFC_EMAIL: &str = "rfc-email";
const OPT_RFC_3339: &str = "rfc-3339";
const OPT_SET: &str = "set";
const OPT_REFERENCE: &str = "reference";
const OPT_UNIVERSAL: &str = "universal";
const OPT_UNIVERSAL_2: &str = "utc";
// Help strings
static ISO_8601_HELP_STRING: &str = "output date/time in ISO 8601 format.
FMT='date' for date only (the default),
'hours','minutes','seconds', or 'ns'
for date and time to the indicated precision.
Example: 2006-08-14T02:34:56-06:00";
static RFC_5322_HELP_STRING: &str = "output date and time in RFC 5322 format.
Example: Mon, 14 Aug 2006 02:34:56 -0600";
static RFC_3339_HELP_STRING: &str = "output date/time in RFC 3339 format.
FMT='date','seconds', or 'ns'
for date and time to the indicated precision.
Example: 2006-08-14 02:34:56-06:00";
#[cfg(not(any(target_os = "macos", target_os = "redox")))]
static OPT_SET_HELP_STRING: &str = "set time described by STRING";
#[cfg(target_os = "macos")]
static OPT_SET_HELP_STRING: &str = "set time described by STRING (not available on mac yet)";
#[cfg(target_os = "redox")]
static OPT_SET_HELP_STRING: &str = "set time described by STRING (not available on redox yet)";
/// Settings for this program, parsed from the command line
struct Settings {
utc: bool,
format: Format,
date_source: DateSource,
set_to: Option<DateTime<FixedOffset>>,
}
/// Various ways of displaying the date
enum Format {
Iso8601(Iso8601Format),
Rfc5322,
Rfc3339(Rfc3339Format),
Custom(String),
Default,
}
/// Various places that dates can come from
enum DateSource {
Now,
Custom(String),
File(PathBuf),
Human(Duration),
}
enum Iso8601Format {
Date,
Hours,
Minutes,
Seconds,
Ns,
}
impl<'a> From<&'a str> for Iso8601Format {
fn from(s: &str) -> Self {
match s {
HOURS => Self::Hours,
MINUTES => Self::Minutes,
SECONDS => Self::Seconds,
NS => Self::Ns,
DATE => Self::Date,
// Note: This is caught by clap via `possible_values`
_ => unreachable!(),
}
}
}
enum Rfc3339Format {
Date,
Seconds,
Ns,
}
impl<'a> From<&'a str> for Rfc3339Format {
fn from(s: &str) -> Self {
match s {
DATE => Self::Date,
SECONDS => Self::Seconds,
NS => Self::Ns,
// Should be caught by clap
_ => panic!("Invalid format: {s}"),
}
}
}
#[uucore::main]
#[allow(clippy::cognitive_complexity)]
pub fn uumain(args: impl uucore::Args) -> UResult<()> {
let matches = uu_app().try_get_matches_from(args)?;
let format = if let Some(form) = matches.get_one::<String>(OPT_FORMAT) {
if !form.starts_with('+') {
return Err(USimpleError::new(
1,
format!("invalid date {}", form.quote()),
));
}
let form = form[1..].to_string();
Format::Custom(form)
} else if let Some(fmt) = matches
.get_many::<String>(OPT_ISO_8601)
.map(|mut iter| iter.next().unwrap_or(&DATE.to_string()).as_str().into())
{
Format::Iso8601(fmt)
} else if matches.get_flag(OPT_RFC_EMAIL) {
Format::Rfc5322
} else if let Some(fmt) = matches
.get_one::<String>(OPT_RFC_3339)
.map(|s| s.as_str().into())
{
Format::Rfc3339(fmt)
} else {
Format::Default
};
let date_source = if let Some(date) = matches.get_one::<String>(OPT_DATE) {
if let Ok(duration) = parse_datetime::from_str(date.as_str()) {
DateSource::Human(duration)
} else {
DateSource::Custom(date.into())
}
} else if let Some(file) = matches.get_one::<String>(OPT_FILE) {
DateSource::File(file.into())
} else {
DateSource::Now
};
let set_to = match matches.get_one::<String>(OPT_SET).map(parse_date) {
None => None,
Some(Err((input, _err))) => {
return Err(USimpleError::new(
1,
format!("invalid date {}", input.quote()),
));
}
Some(Ok(date)) => Some(date),
};
let settings = Settings {
utc: matches.get_flag(OPT_UNIVERSAL),
format,
date_source,
set_to,
};
if let Some(date) = settings.set_to {
// All set time functions expect UTC datetimes.
let date: DateTime<Utc> = if settings.utc {
date.with_timezone(&Utc)
} else {
date.into()
};
return set_system_datetime(date);
} else {
// Get the current time, either in the local time zone or UTC.
let now: DateTime<FixedOffset> = if settings.utc {
let now = Utc::now();
now.with_timezone(&now.offset().fix())
} else {
let now = Local::now();
now.with_timezone(now.offset())
};
// Iterate over all dates - whether it's a single date or a file.
let dates: Box<dyn Iterator<Item = _>> = match settings.date_source {
DateSource::Custom(ref input) => {
let date = parse_date(input.clone());
let iter = std::iter::once(date);
Box::new(iter)
}
DateSource::Human(relative_time) => {
// Get the current DateTime<FixedOffset> for things like "1 year ago"
let current_time = DateTime::<FixedOffset>::from(Local::now());
// Double-check that current_time + relative_time does not overflow,
// since an overflowing addition would panic inside chrono's DateTime.
match current_time.checked_add_signed(relative_time) {
Some(date) => {
let iter = std::iter::once(Ok(date));
Box::new(iter)
}
None => {
return Err(USimpleError::new(
1,
format!("invalid date {}", relative_time),
));
}
}
}
DateSource::File(ref path) => {
if path.is_dir() {
return Err(USimpleError::new(
2,
format!("expected file, got directory {}", path.quote()),
));
}
let file = File::open(path)
.map_err_context(|| path.as_os_str().to_string_lossy().to_string())?;
let lines = BufReader::new(file).lines();
let iter = lines.map_while(Result::ok).map(parse_date);
Box::new(iter)
}
DateSource::Now => {
let iter = std::iter::once(Ok(now));
Box::new(iter)
}
};
let format_string = make_format_string(&settings);
// Format all the dates
for date in dates {
match date {
Ok(date) => {
// GNU `date` uses `%N` for nanoseconds, whereas the chrono crate uses `%f`
let format_string = &format_string.replace("%N", "%f");
// Refuse to pass this string to chrono as it is crashing in this crate
if format_string.contains("%#z") {
return Err(USimpleError::new(
1,
format!("invalid format {}", format_string.replace("%f", "%N")),
));
}
// Hack to work around panic in chrono,
// TODO - remove when a fix for https://github.com/chronotope/chrono/issues/623 is released
let format_items = StrftimeItems::new(format_string);
if format_items.clone().any(|i| i == Item::Error) {
return Err(USimpleError::new(
1,
format!("invalid format {}", format_string.replace("%f", "%N")),
));
}
let formatted = date
.format_with_items(format_items)
.to_string()
.replace("%f", "%N");
println!("{formatted}");
}
Err((input, _err)) => show!(USimpleError::new(
1,
format!("invalid date {}", input.quote())
)),
}
}
}
Ok(())
}
pub fn uu_app() -> Command {
Command::new(uucore::util_name())
.version(crate_version!())
.about(ABOUT)
.override_usage(format_usage(USAGE))
.infer_long_args(true)
.arg(
Arg::new(OPT_DATE)
.short('d')
.long(OPT_DATE)
.value_name("STRING")
.help("display time described by STRING, not 'now'"),
)
.arg(
Arg::new(OPT_FILE)
.short('f')
.long(OPT_FILE)
.value_name("DATEFILE")
.value_hint(clap::ValueHint::FilePath)
.help("like --date; once for each line of DATEFILE"),
)
.arg(
Arg::new(OPT_ISO_8601)
.short('I')
.long(OPT_ISO_8601)
.value_name("FMT")
.value_parser(ShortcutValueParser::new([
DATE, HOURS, MINUTES, SECONDS, NS,
]))
.num_args(0..=1)
.default_missing_value(OPT_DATE)
.help(ISO_8601_HELP_STRING),
)
.arg(
Arg::new(OPT_RFC_EMAIL)
.short('R')
.long(OPT_RFC_EMAIL)
.help(RFC_5322_HELP_STRING)
.action(ArgAction::SetTrue),
)
.arg(
Arg::new(OPT_RFC_3339)
.long(OPT_RFC_3339)
.value_name("FMT")
.value_parser(ShortcutValueParser::new([DATE, SECONDS, NS]))
.help(RFC_3339_HELP_STRING),
)
.arg(
Arg::new(OPT_DEBUG)
.long(OPT_DEBUG)
.help("annotate the parsed date, and warn about questionable usage to stderr")
.action(ArgAction::SetTrue),
)
.arg(
Arg::new(OPT_REFERENCE)
.short('r')
.long(OPT_REFERENCE)
.value_name("FILE")
.value_hint(clap::ValueHint::AnyPath)
.help("display the last modification time of FILE"),
)
.arg(
Arg::new(OPT_SET)
.short('s')
.long(OPT_SET)
.value_name("STRING")
.help(OPT_SET_HELP_STRING),
)
.arg(
Arg::new(OPT_UNIVERSAL)
.short('u')
.long(OPT_UNIVERSAL)
.alias(OPT_UNIVERSAL_2)
.help("print or set Coordinated Universal Time (UTC)")
.action(ArgAction::SetTrue),
)
.arg(Arg::new(OPT_FORMAT))
}
/// Return the appropriate format string for the given settings.
fn make_format_string(settings: &Settings) -> &str {
match settings.format {
Format::Iso8601(ref fmt) => match *fmt {
Iso8601Format::Date => "%F",
Iso8601Format::Hours => "%FT%H%:z",
Iso8601Format::Minutes => "%FT%H:%M%:z",
Iso8601Format::Seconds => "%FT%T%:z",
Iso8601Format::Ns => "%FT%T,%f%:z",
},
Format::Rfc5322 => "%a, %d %h %Y %T %z",
Format::Rfc3339(ref fmt) => match *fmt {
Rfc3339Format::Date => "%F",
Rfc3339Format::Seconds => "%F %T%:z",
Rfc3339Format::Ns => "%F %T.%f%:z",
},
Format::Custom(ref fmt) => fmt,
Format::Default => "%c",
}
}
/// Parse a `String` into a `DateTime`.
/// If it fails, return a tuple of the `String` along with its `ParseError`.
fn parse_date<S: AsRef<str> + Clone>(
s: S,
) -> Result<DateTime<FixedOffset>, (String, chrono::format::ParseError)> {
// TODO: The GNU date command can parse a wide variety of inputs.
s.as_ref().parse().map_err(|e| (s.as_ref().into(), e))
}
#[cfg(not(any(unix, windows)))]
fn set_system_datetime(_date: DateTime<Utc>) -> UResult<()> {
unimplemented!("setting date not implemented (unsupported target)");
}
#[cfg(target_os = "macos")]
fn set_system_datetime(_date: DateTime<Utc>) -> UResult<()> { | Err(USimpleError::new(
1,
"setting the date is not supported by macOS".to_string(),
))
}
#[cfg(target_os = "redox")]
fn set_system_datetime(_date: DateTime<Utc>) -> UResult<()> {
Err(USimpleError::new(
1,
"setting the date is not supported by Redox".to_string(),
))
}
#[cfg(all(unix, not(target_os = "macos"), not(target_os = "redox")))]
/// System call to set date (unix).
/// See here for more:
/// `<https://doc.rust-lang.org/libc/i686-unknown-linux-gnu/libc/fn.clock_settime.html>`
/// `<https://linux.die.net/man/3/clock_settime>`
/// `<https://www.gnu.org/software/libc/manual/html_node/Time-Types.html>`
fn set_system_datetime(date: DateTime<Utc>) -> UResult<()> {
let timespec = timespec {
tv_sec: date.timestamp() as _,
tv_nsec: date.timestamp_subsec_nanos() as _,
};
let result = unsafe { clock_settime(CLOCK_REALTIME, &timespec) };
if result == 0 {
Ok(())
} else {
Err(std::io::Error::last_os_error().map_err_context(|| "cannot set date".to_string()))
}
}
#[cfg(windows)]
/// System call to set date (Windows).
/// See here for more:
/// https://docs.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-setsystemtime
/// https://docs.microsoft.com/en-us/windows/win32/api/minwinbase/ns-minwinbase-systemtime
fn set_system_datetime(date: DateTime<Utc>) -> UResult<()> {
let system_time = SYSTEMTIME {
wYear: date.year() as u16,
wMonth: date.month() as u16,
// Ignored
wDayOfWeek: 0,
wDay: date.day() as u16,
wHour: date.hour() as u16,
wMinute: date.minute() as u16,
wSecond: date.second() as u16,
// TODO: be careful of leap seconds - valid range is [0, 999] - how to handle?
wMilliseconds: ((date.nanosecond() / 1_000_000) % 1000) as u16,
};
let result = unsafe { SetSystemTime(&system_time) };
if result == 0 {
Err(std::io::Error::last_os_error().map_err_context(|| "cannot set date".to_string()))
} else {
Ok(())
}
} | random_line_split |
|
workerthread.rs | // Copyright (c) 2015-2016 Linus Färnstrand.
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::atomic::{AtomicUsize,Ordering};
use std::sync::Arc;
use std::ptr::{Unique,write};
use std::sync::mpsc::{Receiver,Sender};
use std::mem;
use libc::usleep;
use thread_scoped;
use deque::{self,Worker,Stealer,Stolen};
use rand::{Rng,XorShiftRng,weak_rng};
use ::{Task,JoinBarrier,TaskResult,ResultReceiver,AlgoStyle,ReduceStyle,Algorithm};
use ::poolsupervisor::SupervisorMsg;
static STEAL_TRIES_UNTIL_BACKOFF: u32 = 30;
static BACKOFF_INC_US: u32 = 10;
pub struct WorkerThread<Arg: Send, Ret: Send + Sync> {
id: usize,
started: bool,
supervisor_port: Receiver<()>,
supervisor_channel: Sender<SupervisorMsg<Arg, Ret>>,
deque: Worker<Task<Arg, Ret>>,
stealer: Stealer<Task<Arg, Ret>>,
other_stealers: Vec<Stealer<Task<Arg, Ret>>>,
rng: XorShiftRng,
sleepers: Arc<AtomicUsize>,
threadcount: usize,
stats: ThreadStats,
}
impl<'a, Arg: Send + 'a, Ret: Send + Sync + 'a> WorkerThread<Arg,Ret> {
pub fn new(id: usize,
port: Receiver<()>,
channel: Sender<SupervisorMsg<Arg,Ret>>,
supervisor_queue: Stealer<Task<Arg, Ret>>,
sleepers: Arc<AtomicUsize>) -> WorkerThread<Arg,Ret> {
let (worker, stealer) = deque::new();
WorkerThread {
id: id,
started: false,
supervisor_port: port,
supervisor_channel: channel,
deque: worker,
stealer: stealer,
other_stealers: vec![supervisor_queue],
rng: weak_rng(),
sleepers: sleepers,
threadcount: 1, // Myself
stats: ThreadStats{exec_tasks: 0, steals: 0, steal_fails: 0, sleep_us: 0, first_after: 1},
}
}
pub fn get_stealer(&self) -> Stealer<Task<Arg,Ret>> {
assert!(!self.started);
self.stealer.clone()
}
pub fn add_other_stealer(&mut self, stealer: Stealer<Task<Arg,Ret>>) {
assert!(!self.started);
self.other_stealers.push(stealer);
self.threadcount += 1;
}
pub fn spawn(mut self) -> thread_scoped::JoinGuard<'a, ()> {
assert!(!self.started);
self.started = true;
unsafe {
thread_scoped::scoped(move || {
self.main_loop();
})
}
}
fn main_loop(mut self) {
loop {
match self.supervisor_port.recv() {
Err(_) => break, // PoolSupervisor has been dropped, let's quit.
Ok(_) => { // Supervisor instructs us to start working
loop {
self.process_queue();
match self.steal() {
Some(task) => self.execute_task(task),
None => break, // Give up for now
}
}
}
}
if self.supervisor_channel.send(SupervisorMsg::OutOfWork(self.id)).is_err() {
break; // Supervisor shut down, so we also shut down
}
}
}
fn process_queue(&mut self) {
while let Some(task) = self.deque.pop() {
self.execute_task(task);
}
}
fn execute_task(&mut self, task: Task<Arg, Ret>) {
let mut next_task: Option<Task<Arg,Ret>> = Some(task);
while let Some(task) = next_task {
if cfg!(feature = "threadstats") {self.stats.exec_tasks += 1;}
let fun = task.algo.fun;
match (fun)(task.arg) {
TaskResult::Done(ret) => {
self.handle_done(task.join, ret);
next_task = None;
},
TaskResult::Fork(args, joinarg) => {
next_task = self.handle_fork(task.algo, task.join, args, joinarg);
}
}
}
}
fn steal(&mut self) -> Option<Task<Arg,Ret>> {
if self.other_stealers.len() == 0 {
None // No one to steal from
} else {
let mut backoff_sleep: u32 = BACKOFF_INC_US;
for try in 0.. {
match self.try_steal() {
Some(task) => {
if cfg!(feature = "threadstats") && self.stats.first_after == 1 {
self.stats.first_after = self.stats.sleep_us;
}
return Some(task);
}
None => if try > STEAL_TRIES_UNTIL_BACKOFF {
self.sleepers.fetch_add(1, Ordering::SeqCst); // Check number here and set special state if last worker
if cfg!(feature = "threadstats") {self.stats.sleep_us += backoff_sleep as usize;}
unsafe { usleep(backoff_sleep); }
backoff_sleep = backoff_sleep + BACKOFF_INC_US;
if self.threadcount == self.sleepers.load(Ordering::SeqCst) {
break; // Give up
} else {
if self.threadcount == self.sleepers.fetch_sub(1, Ordering::SeqCst) {
self.sleepers.fetch_add(1, Ordering::SeqCst);
break; // Also give up
}
}
},
}
}
None
}
}
/// Try to steal tasks from the other workers.
/// Starts at a random worker and tries every worker until a task is stolen or
/// every worker has been tried once.
fn try_steal(&mut self) -> Option<Task<Arg,Ret>> {
let len = self.other_stealers.len();
let start_victim = self.rng.gen_range(0, len);
for offset in 0..len {
match self.other_stealers[(start_victim + offset) % len].steal() {
Stolen::Data(task) => {
if cfg!(feature = "threadstats") {self.stats.steals += 1;}
return Some(task);
}
Stolen::Empty | Stolen::Abort => {
if cfg!(feature = "threadstats") {self.stats.steal_fails += 1;}
continue;
}
}
}
None
}
fn handle_fork(&self,
algo: Algorithm<Arg, Ret>,
join: ResultReceiver<Ret>,
args: Vec<Arg>,
joinarg: Option<Ret>) -> Option<Task<Arg,Ret>>
{
let len: usize = args.len();
if len == 0 {
self.handle_fork_zero(algo, join, joinarg);
None
} else {
match algo.style {
AlgoStyle::Reduce(reducestyle) => {
let (vector, mut ptr_iter) = create_result_vec::<Ret>(len);
let mut sub_join = Box::new(JoinBarrier {
ret_counter: AtomicUsize::new(len),
joinfun: reducestyle,
joinarg: joinarg,
joinfunarg: vector,
parent: join,
});
let mut args_iter = args.into_iter();
let first_task = Task {
algo: algo.clone(),
arg: args_iter.next().unwrap(),
join: ResultReceiver::Join(ptr_iter.next().unwrap(), unsafe{Box::from_raw(&mut *sub_join)}),
};
loop {
match (args_iter.next(), ptr_iter.next()) {
(Some(arg), Some(ptr)) => {
let forked_task = Task {
algo: algo.clone(),
arg: arg,
join: ResultReceiver::Join(ptr, unsafe{Box::from_raw(&mut *sub_join)}),
};
self.deque.push(forked_task);
},
_ => break,
}
}
mem::forget(sub_join); // Don't drop here, last task will take care of that in handle_done
Some(first_task)
},
AlgoStyle::Search => {
for arg in args.into_iter() {
let forked_task = Task {
algo: algo.clone(),
arg: arg,
join: join.clone(),
};
self.deque.push(forked_task);
}
None
}
}
}
}
fn handle_fork_zero(&self, algo: Algorithm<Arg, Ret>, join: ResultReceiver<Ret>, joinarg: Option<Ret>) {
match algo.style {
AlgoStyle::Reduce(ref reducestyle) => {
let joinres = match *reducestyle {
ReduceStyle::NoArg(ref joinfun) => (joinfun)(&Vec::new()[..]),
ReduceStyle::Arg(ref joinfun) => {
let arg = joinarg.unwrap();
(joinfun)(&arg, &Vec::new()[..])
}
};
self.handle_done(join, joinres);
},
_ => (),
}
}
fn handle_done(&self, join: ResultReceiver<Ret>, value: Ret) {
match join {
ResultReceiver::Join(ptr, joinbarrier) => {
unsafe { write(*ptr, value); } // Write without dropping, since the slot is uninitialized
if joinbarrier.ret_counter.fetch_sub(1, Ordering::SeqCst) == 1 {
let joinres = match joinbarrier.joinfun {
ReduceStyle::NoArg(ref joinfun) => (joinfun)(&joinbarrier.joinfunarg),
ReduceStyle::Arg(ref joinfun) => {
let joinarg = match joinbarrier.joinarg.as_ref() {
None => panic!("Algorithm has ReduceStyle::Arg, but no extra arg passed"),
Some(arg) => arg,
};
(joinfun)(joinarg, &joinbarrier.joinfunarg)
},
};
self.handle_done(joinbarrier.parent, joinres);
} else {
mem::forget(joinbarrier) // Don't drop if we are not last task
}
}
ResultReceiver::Channel(channel) => {
channel.lock().unwrap().send(value).unwrap();
}
}
}
}
#[cfg(feature = "threadstats")]
impl<Arg: Send, Ret: Send + Sync> Drop for WorkerThread<Arg, Ret> {
fn drop(&mut self) {
println!("Worker[{}] (t: {}, steals: {}, failed: {}, sleep: {}, first: {})",
self.id,
self.stats.exec_tasks,
self.stats.steals,
self.stats.steal_fails,
self.stats.sleep_us,
self.stats.first_after);
}
}
struct ThreadStats {
pub steals: usize,
pub steal_fails: usize,
pub exec_tasks: usize,
pub sleep_us: usize,
pub first_after: usize,
}
fn create_result_vec<Ret>(n: usize) -> (Vec<Ret>, PtrIter<Ret>) {
let mut rets: Vec<Ret> = Vec::with_capacity(n);
unsafe {
rets.set_len(n); // Force it to expand. Values in this will be invalid
let ptr_0: *mut Ret = rets.get_unchecked_mut(0);
let ptr_iter = PtrIter {
ptr_0: ptr_0,
offset: 0,
};
(rets, ptr_iter)
}
}
struct P | Ret> {
ptr_0: *mut Ret,
offset: isize,
}
impl<Ret> Iterator for PtrIter<Ret> {
type Item = Unique<Ret>;
fn next(&mut self) -> Option<Self::Item> {
let ptr = unsafe { Unique::new(self.ptr_0.offset(self.offset)) };
self.offset += 1;
Some(ptr)
}
}
| trIter< | identifier_name |
workerthread.rs | // Copyright (c) 2015-2016 Linus Färnstrand.
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::atomic::{AtomicUsize,Ordering};
use std::sync::Arc;
use std::ptr::{Unique,write};
use std::sync::mpsc::{Receiver,Sender};
use std::mem;
use libc::usleep;
use thread_scoped;
use deque::{self,Worker,Stealer,Stolen};
use rand::{Rng,XorShiftRng,weak_rng};
use ::{Task,JoinBarrier,TaskResult,ResultReceiver,AlgoStyle,ReduceStyle,Algorithm};
use ::poolsupervisor::SupervisorMsg;
static STEAL_TRIES_UNTIL_BACKOFF: u32 = 30;
static BACKOFF_INC_US: u32 = 10;
pub struct WorkerThread<Arg: Send, Ret: Send + Sync> {
id: usize,
started: bool,
supervisor_port: Receiver<()>,
supervisor_channel: Sender<SupervisorMsg<Arg, Ret>>,
deque: Worker<Task<Arg, Ret>>,
stealer: Stealer<Task<Arg, Ret>>,
other_stealers: Vec<Stealer<Task<Arg, Ret>>>,
rng: XorShiftRng,
sleepers: Arc<AtomicUsize>,
threadcount: usize,
stats: ThreadStats,
}
impl<'a, Arg: Send + 'a, Ret: Send + Sync + 'a> WorkerThread<Arg,Ret> {
pub fn new(id: usize,
port: Receiver<()>,
channel: Sender<SupervisorMsg<Arg,Ret>>,
supervisor_queue: Stealer<Task<Arg, Ret>>,
sleepers: Arc<AtomicUsize>) -> WorkerThread<Arg,Ret> {
let (worker, stealer) = deque::new();
WorkerThread {
id: id,
started: false,
supervisor_port: port,
supervisor_channel: channel,
deque: worker,
stealer: stealer,
other_stealers: vec![supervisor_queue],
rng: weak_rng(),
sleepers: sleepers,
threadcount: 1, // Myself
stats: ThreadStats{exec_tasks: 0, steals: 0, steal_fails: 0, sleep_us: 0, first_after: 1},
}
}
pub fn get_stealer(&self) -> Stealer<Task<Arg,Ret>> {
assert!(!self.started);
self.stealer.clone()
}
pub fn add_other_stealer(&mut self, stealer: Stealer<Task<Arg,Ret>>) {
assert!(!self.started);
self.other_stealers.push(stealer);
self.threadcount += 1;
}
pub fn spawn(mut self) -> thread_scoped::JoinGuard<'a, ()> {
assert!(!self.started);
self.started = true;
unsafe {
thread_scoped::scoped(move || {
self.main_loop();
})
}
}
fn main_loop(mut self) {
loop {
match self.supervisor_port.recv() {
Err(_) => break, // PoolSupervisor has been dropped, let's quit.
Ok(_) => { // Supervisor instructs us to start working
loop {
self.process_queue();
match self.steal() {
Some(task) => self.execute_task(task),
None => break, // Give up for now
}
}
}
}
if self.supervisor_channel.send(SupervisorMsg::OutOfWork(self.id)).is_err() {
break; // Supervisor shut down, so we also shut down
}
}
}
fn process_queue(&mut self) {
while let Some(task) = self.deque.pop() {
self.execute_task(task);
}
}
fn execute_task(&mut self, task: Task<Arg, Ret>) {
let mut next_task: Option<Task<Arg,Ret>> = Some(task);
while let Some(task) = next_task {
if cfg!(feature = "threadstats") {self.stats.exec_tasks += 1;}
let fun = task.algo.fun;
match (fun)(task.arg) {
TaskResult::Done(ret) => {
self.handle_done(task.join, ret);
next_task = None;
},
TaskResult::Fork(args, joinarg) => {
next_task = self.handle_fork(task.algo, task.join, args, joinarg);
}
}
}
}
fn steal(&mut self) -> Option<Task<Arg,Ret>> {
if self.other_stealers.len() == 0 {
None // No one to steal from
} else {
let mut backoff_sleep: u32 = BACKOFF_INC_US;
for try in 0.. {
match self.try_steal() {
Some(task) => {
if cfg!(feature = "threadstats") && self.stats.first_after == 1 {
self.stats.first_after = self.stats.sleep_us;
}
return Some(task);
}
None => if try > STEAL_TRIES_UNTIL_BACKOFF {
self.sleepers.fetch_add(1, Ordering::SeqCst); // Check number here and set special state if last worker
if cfg!(feature = "threadstats") {self.stats.sleep_us += backoff_sleep as usize;}
unsafe { usleep(backoff_sleep); }
backoff_sleep = backoff_sleep + BACKOFF_INC_US;
if self.threadcount == self.sleepers.load(Ordering::SeqCst) {
break; // Give up
} else {
if self.threadcount == self.sleepers.fetch_sub(1, Ordering::SeqCst) {
self.sleepers.fetch_add(1, Ordering::SeqCst);
break; // Also give up
}
}
},
}
}
None
}
}
/// Try to steal tasks from the other workers.
/// Starts at a random worker and tries every worker until a task is stolen or
/// every worker has been tried once.
fn try_steal(&mut self) -> Option<Task<Arg,Ret>> {
let len = self.other_stealers.len();
let start_victim = self.rng.gen_range(0, len);
for offset in 0..len {
match self.other_stealers[(start_victim + offset) % len].steal() {
Stolen::Data(task) => {
if cfg!(feature = "threadstats") {self.stats.steals += 1;}
return Some(task);
}
Stolen::Empty | Stolen::Abort => {
if cfg!(feature = "threadstats") {self.stats.steal_fails += 1;}
continue;
}
}
}
None
}
fn handle_fork(&self,
algo: Algorithm<Arg, Ret>,
join: ResultReceiver<Ret>,
args: Vec<Arg>,
joinarg: Option<Ret>) -> Option<Task<Arg,Ret>>
{
let len: usize = args.len();
if len == 0 {
self.handle_fork_zero(algo, join, joinarg);
None
} else {
match algo.style {
AlgoStyle::Reduce(reducestyle) => {
let (vector, mut ptr_iter) = create_result_vec::<Ret>(len);
let mut sub_join = Box::new(JoinBarrier {
ret_counter: AtomicUsize::new(len),
joinfun: reducestyle,
joinarg: joinarg,
joinfunarg: vector,
parent: join,
});
let mut args_iter = args.into_iter();
let first_task = Task {
algo: algo.clone(),
arg: args_iter.next().unwrap(),
join: ResultReceiver::Join(ptr_iter.next().unwrap(), unsafe{Box::from_raw(&mut *sub_join)}),
};
loop {
match (args_iter.next(), ptr_iter.next()) {
(Some(arg), Some(ptr)) => {
let forked_task = Task {
algo: algo.clone(),
arg: arg,
join: ResultReceiver::Join(ptr, unsafe{Box::from_raw(&mut *sub_join)}),
};
self.deque.push(forked_task);
},
_ => break,
}
}
mem::forget(sub_join); // Don't drop here, last task will take care of that in handle_done
Some(first_task)
},
AlgoStyle::Search => {
for arg in args.into_iter() {
let forked_task = Task {
algo: algo.clone(),
arg: arg,
join: join.clone(),
};
self.deque.push(forked_task);
}
None
}
}
}
}
fn handle_fork_zero(&self, algo: Algorithm<Arg, Ret>, join: ResultReceiver<Ret>, joinarg: Option<Ret>) {
match algo.style {
AlgoStyle::Reduce(ref reducestyle) => {
let joinres = match *reducestyle {
ReduceStyle::NoArg(ref joinfun) => (joinfun)(&Vec::new()[..]),
ReduceStyle::Arg(ref joinfun) => {
let arg = joinarg.unwrap();
(joinfun)(&arg, &Vec::new()[..])
}
};
self.handle_done(join, joinres);
},
_ => (),
}
}
fn handle_done(&self, join: ResultReceiver<Ret>, value: Ret) {
match join {
ResultReceiver::Join(ptr, joinbarrier) => {
unsafe { write(*ptr, value); } // Write without dropping, since the slot is uninitialized
if joinbarrier.ret_counter.fetch_sub(1, Ordering::SeqCst) == 1 {
let joinres = match joinbarrier.joinfun {
ReduceStyle::NoArg(ref joinfun) => (joinfun)(&joinbarrier.joinfunarg),
ReduceStyle::Arg(ref joinfun) => {
let joinarg = match joinbarrier.joinarg.as_ref() {
None => panic!("Algorithm has ReduceStyle::Arg, but no extra arg passed"),
Some(arg) => arg,
};
(joinfun)(joinarg, &joinbarrier.joinfunarg)
},
};
self.handle_done(joinbarrier.parent, joinres);
} else { | }
ResultReceiver::Channel(channel) => {
channel.lock().unwrap().send(value).unwrap();
}
}
}
}
#[cfg(feature = "threadstats")]
impl<Arg: Send, Ret: Send + Sync> Drop for WorkerThread<Arg, Ret> {
fn drop(&mut self) {
println!("Worker[{}] (t: {}, steals: {}, failed: {}, sleep: {}, first: {})",
self.id,
self.stats.exec_tasks,
self.stats.steals,
self.stats.steal_fails,
self.stats.sleep_us,
self.stats.first_after);
}
}
struct ThreadStats {
pub steals: usize,
pub steal_fails: usize,
pub exec_tasks: usize,
pub sleep_us: usize,
pub first_after: usize,
}
fn create_result_vec<Ret>(n: usize) -> (Vec<Ret>, PtrIter<Ret>) {
let mut rets: Vec<Ret> = Vec::with_capacity(n);
unsafe {
rets.set_len(n); // Force it to expand. Values in this will be invalid
let ptr_0: *mut Ret = rets.get_unchecked_mut(0);
let ptr_iter = PtrIter {
ptr_0: ptr_0,
offset: 0,
};
(rets, ptr_iter)
}
}
struct PtrIter<Ret> {
ptr_0: *mut Ret,
offset: isize,
}
impl<Ret> Iterator for PtrIter<Ret> {
type Item = Unique<Ret>;
fn next(&mut self) -> Option<Self::Item> {
let ptr = unsafe { Unique::new(self.ptr_0.offset(self.offset)) };
self.offset += 1;
Some(ptr)
}
}
|
mem::forget(joinbarrier) // Don't drop if we are not last task
}
| conditional_block |
workerthread.rs | // Copyright (c) 2015-2016 Linus Färnstrand.
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::atomic::{AtomicUsize,Ordering};
use std::sync::Arc;
use std::ptr::{Unique,write};
use std::sync::mpsc::{Receiver,Sender};
use std::mem;
use libc::usleep;
use thread_scoped;
use deque::{self,Worker,Stealer,Stolen};
use rand::{Rng,XorShiftRng,weak_rng};
use ::{Task,JoinBarrier,TaskResult,ResultReceiver,AlgoStyle,ReduceStyle,Algorithm};
use ::poolsupervisor::SupervisorMsg;
static STEAL_TRIES_UNTIL_BACKOFF: u32 = 30;
static BACKOFF_INC_US: u32 = 10;
pub struct WorkerThread<Arg: Send, Ret: Send + Sync> {
id: usize,
started: bool,
supervisor_port: Receiver<()>,
supervisor_channel: Sender<SupervisorMsg<Arg, Ret>>,
deque: Worker<Task<Arg, Ret>>,
stealer: Stealer<Task<Arg, Ret>>,
other_stealers: Vec<Stealer<Task<Arg, Ret>>>,
rng: XorShiftRng,
sleepers: Arc<AtomicUsize>,
threadcount: usize,
stats: ThreadStats,
}
impl<'a, Arg: Send + 'a, Ret: Send + Sync + 'a> WorkerThread<Arg,Ret> {
pub fn new(id: usize,
port: Receiver<()>,
channel: Sender<SupervisorMsg<Arg,Ret>>,
supervisor_queue: Stealer<Task<Arg, Ret>>,
sleepers: Arc<AtomicUsize>) -> WorkerThread<Arg,Ret> {
let (worker, stealer) = deque::new();
WorkerThread {
id: id,
started: false,
supervisor_port: port,
supervisor_channel: channel,
deque: worker,
stealer: stealer,
other_stealers: vec![supervisor_queue],
rng: weak_rng(),
sleepers: sleepers,
threadcount: 1, // Myself
stats: ThreadStats{exec_tasks: 0, steals: 0, steal_fails: 0, sleep_us: 0, first_after: 1},
}
}
pub fn get_stealer(&self) -> Stealer<Task<Arg,Ret>> {
assert!(!self.started);
self.stealer.clone()
}
pub fn add_other_stealer(&mut self, stealer: Stealer<Task<Arg,Ret>>) {
assert!(!self.started);
self.other_stealers.push(stealer);
self.threadcount += 1;
}
pub fn spawn(mut self) -> thread_scoped::JoinGuard<'a, ()> {
assert!(!self.started);
self.started = true;
unsafe {
thread_scoped::scoped(move || {
self.main_loop();
})
}
}
fn main_loop(mut self) {
loop {
match self.supervisor_port.recv() {
Err(_) => break, // PoolSupervisor has been dropped, let's quit.
Ok(_) => { // Supervisor instructs us to start working
loop {
self.process_queue();
match self.steal() {
Some(task) => self.execute_task(task),
None => break, // Give up for now
}
}
}
}
if self.supervisor_channel.send(SupervisorMsg::OutOfWork(self.id)).is_err() {
break; // Supervisor shut down, so we also shut down
}
}
}
fn process_queue(&mut self) {
while let Some(task) = self.deque.pop() {
self.execute_task(task);
}
}
fn execute_task(&mut self, task: Task<Arg, Ret>) {
let mut next_task: Option<Task<Arg,Ret>> = Some(task);
while let Some(task) = next_task {
if cfg!(feature = "threadstats") {self.stats.exec_tasks += 1;}
let fun = task.algo.fun;
match (fun)(task.arg) {
TaskResult::Done(ret) => {
self.handle_done(task.join, ret);
next_task = None;
},
TaskResult::Fork(args, joinarg) => {
next_task = self.handle_fork(task.algo, task.join, args, joinarg);
}
}
}
}
fn steal(&mut self) -> Option<Task<Arg,Ret>> {
if self.other_stealers.len() == 0 {
None // No one to steal from
} else {
let mut backoff_sleep: u32 = BACKOFF_INC_US;
for try in 0.. {
match self.try_steal() {
Some(task) => {
if cfg!(feature = "threadstats") && self.stats.first_after == 1 {
self.stats.first_after = self.stats.sleep_us;
}
return Some(task);
}
None => if try > STEAL_TRIES_UNTIL_BACKOFF {
self.sleepers.fetch_add(1, Ordering::SeqCst); // Check number here and set special state if last worker
if cfg!(feature = "threadstats") {self.stats.sleep_us += backoff_sleep as usize;}
unsafe { usleep(backoff_sleep); }
backoff_sleep = backoff_sleep + BACKOFF_INC_US;
if self.threadcount == self.sleepers.load(Ordering::SeqCst) {
break; // Give up
} else {
if self.threadcount == self.sleepers.fetch_sub(1, Ordering::SeqCst) {
self.sleepers.fetch_add(1, Ordering::SeqCst);
break; // Also give up
}
}
},
}
}
None
}
}
/// Try to steal tasks from the other workers.
/// Starts at a random worker and tries every worker until a task is stolen or
/// every worker has been tried once.
fn try_steal(&mut self) -> Option<Task<Arg,Ret>> {
let len = self.other_stealers.len();
let start_victim = self.rng.gen_range(0, len);
for offset in 0..len {
match self.other_stealers[(start_victim + offset) % len].steal() {
Stolen::Data(task) => {
if cfg!(feature = "threadstats") {self.stats.steals += 1;}
return Some(task);
}
Stolen::Empty | Stolen::Abort => {
if cfg!(feature = "threadstats") {self.stats.steal_fails += 1;}
continue;
}
}
}
None
}
fn handle_fork(&self,
algo: Algorithm<Arg, Ret>,
join: ResultReceiver<Ret>,
args: Vec<Arg>,
joinarg: Option<Ret>) -> Option<Task<Arg,Ret>>
{
let len: usize = args.len();
if len == 0 {
self.handle_fork_zero(algo, join, joinarg);
None
} else {
match algo.style {
AlgoStyle::Reduce(reducestyle) => {
let (vector, mut ptr_iter) = create_result_vec::<Ret>(len);
let mut sub_join = Box::new(JoinBarrier {
ret_counter: AtomicUsize::new(len),
joinfun: reducestyle,
joinarg: joinarg,
joinfunarg: vector,
parent: join,
});
let mut args_iter = args.into_iter();
let first_task = Task {
algo: algo.clone(),
arg: args_iter.next().unwrap(),
join: ResultReceiver::Join(ptr_iter.next().unwrap(), unsafe{Box::from_raw(&mut *sub_join)}),
};
loop {
match (args_iter.next(), ptr_iter.next()) {
(Some(arg), Some(ptr)) => {
let forked_task = Task {
algo: algo.clone(),
arg: arg,
join: ResultReceiver::Join(ptr, unsafe{Box::from_raw(&mut *sub_join)}),
};
self.deque.push(forked_task);
},
_ => break,
}
}
mem::forget(sub_join); // Don't drop here, last task will take care of that in handle_done
Some(first_task)
},
AlgoStyle::Search => {
for arg in args.into_iter() {
let forked_task = Task {
algo: algo.clone(),
arg: arg,
join: join.clone(),
};
self.deque.push(forked_task);
}
None
}
}
}
}
fn handle_fork_zero(&self, algo: Algorithm<Arg, Ret>, join: ResultReceiver<Ret>, joinarg: Option<Ret>) {
match algo.style {
AlgoStyle::Reduce(ref reducestyle) => {
let joinres = match *reducestyle {
ReduceStyle::NoArg(ref joinfun) => (joinfun)(&Vec::new()[..]),
ReduceStyle::Arg(ref joinfun) => {
let arg = joinarg.unwrap();
(joinfun)(&arg, &Vec::new()[..])
}
};
self.handle_done(join, joinres);
},
_ => (),
}
}
fn handle_done(&self, join: ResultReceiver<Ret>, value: Ret) {
match join {
ResultReceiver::Join(ptr, joinbarrier) => {
unsafe { write(*ptr, value); } // Writes without dropping since only null in place
if joinbarrier.ret_counter.fetch_sub(1, Ordering::SeqCst) == 1 {
let joinres = match joinbarrier.joinfun {
ReduceStyle::NoArg(ref joinfun) => (joinfun)(&joinbarrier.joinfunarg),
ReduceStyle::Arg(ref joinfun) => {
let joinarg = match joinbarrier.joinarg.as_ref() {
None => panic!("Algorithm has ReduceStyle::Arg, but no extra arg passed"),
Some(arg) => arg,
};
(joinfun)(joinarg, &joinbarrier.joinfunarg)
},
};
self.handle_done(joinbarrier.parent, joinres);
} else {
mem::forget(joinbarrier) // Don't drop if we are not last task
}
}
ResultReceiver::Channel(channel) => {
channel.lock().unwrap().send(value).unwrap();
}
}
}
}
#[cfg(feature = "threadstats")]
impl<Arg: Send, Ret: Send + Sync> Drop for WorkerThread<Arg, Ret> {
fn drop(&mut self) {
println!("Worker[{}] (t: {}, steals: {}, failed: {}, sleep: {}, first: {})",
self.id,
self.stats.exec_tasks,
self.stats.steals,
self.stats.steal_fails,
self.stats.sleep_us,
self.stats.first_after);
}
}
struct ThreadStats {
pub steals: usize,
pub steal_fails: usize,
pub exec_tasks: usize,
pub sleep_us: usize,
pub first_after: usize,
}
fn create_result_vec<Ret>(n: usize) -> (Vec<Ret>, PtrIter<Ret>) {
let mut rets: Vec<Ret> = Vec::with_capacity(n);
unsafe {
rets.set_len(n); // Force it to expand. Values in this will be invalid
let ptr_0: *mut Ret = rets.get_unchecked_mut(0);
let ptr_iter = PtrIter {
ptr_0: ptr_0,
offset: 0,
};
(rets, ptr_iter)
}
}
struct PtrIter<Ret> {
ptr_0: *mut Ret,
offset: isize,
}
impl<Ret> Iterator for PtrIter<Ret> {
type Item = Unique<Ret>;
fn next(&mut self) -> Option<Self::Item> { | }
|
let ptr = unsafe { Unique::new(self.ptr_0.offset(self.offset)) };
self.offset += 1;
Some(ptr)
}
| identifier_body |
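Each `| identifier_body` or `| random_line_split` marker above closes one dataset row of the schema named in the header (`file_name`, `prefix`, `suffix`, `middle`, `fim_type`). A row is consumed by concatenating prefix + middle + suffix to recover the original file; the sketch below illustrates that with made-up sample strings (the struct and field names are ours, not part of the dataset):

```rust
/// One fill-in-the-middle (FIM) training row, following the schema in the
/// header: file_name | prefix | suffix | middle | fim_type.
/// (Struct and field names here are illustrative, not part of the dataset.)
pub struct FimRow<'a> {
    pub file_name: &'a str,
    pub prefix: &'a str,   // text before the masked span
    pub suffix: &'a str,   // text after the masked span
    pub middle: &'a str,   // the span the model is trained to predict
    pub fim_type: &'a str, // how the span was chosen, e.g. "identifier_body"
}

/// Reassembling prefix + middle + suffix recovers the original source file.
pub fn reassemble(row: &FimRow) -> String {
    let mut out = String::with_capacity(
        row.prefix.len() + row.middle.len() + row.suffix.len(),
    );
    out.push_str(row.prefix);
    out.push_str(row.middle);
    out.push_str(row.suffix);
    out
}
```

The `fim_type` column only records the sampling strategy; it plays no part in reassembly.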
workerthread.rs | // Copyright (c) 2015-2016 Linus Färnstrand.
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::atomic::{AtomicUsize,Ordering};
use std::sync::Arc;
use std::ptr::{Unique,write};
use std::sync::mpsc::{Receiver,Sender};
use std::mem;
use libc::usleep;
use thread_scoped;
use deque::{self,Worker,Stealer,Stolen};
use rand::{Rng,XorShiftRng,weak_rng};
use ::{Task,JoinBarrier,TaskResult,ResultReceiver,AlgoStyle,ReduceStyle,Algorithm};
use ::poolsupervisor::SupervisorMsg;
static STEAL_TRIES_UNTIL_BACKOFF: u32 = 30;
static BACKOFF_INC_US: u32 = 10;
pub struct WorkerThread<Arg: Send, Ret: Send + Sync> {
id: usize,
started: bool,
supervisor_port: Receiver<()>,
supervisor_channel: Sender<SupervisorMsg<Arg, Ret>>,
deque: Worker<Task<Arg, Ret>>,
stealer: Stealer<Task<Arg, Ret>>,
other_stealers: Vec<Stealer<Task<Arg, Ret>>>,
rng: XorShiftRng,
sleepers: Arc<AtomicUsize>,
threadcount: usize,
stats: ThreadStats,
}
impl<'a, Arg: Send + 'a, Ret: Send + Sync + 'a> WorkerThread<Arg,Ret> {
pub fn new(id: usize,
port: Receiver<()>,
channel: Sender<SupervisorMsg<Arg,Ret>>,
supervisor_queue: Stealer<Task<Arg, Ret>>,
sleepers: Arc<AtomicUsize>) -> WorkerThread<Arg,Ret> {
let (worker, stealer) = deque::new();
WorkerThread {
id: id,
started: false,
supervisor_port: port,
supervisor_channel: channel,
deque: worker,
stealer: stealer,
other_stealers: vec![supervisor_queue],
rng: weak_rng(),
sleepers: sleepers,
threadcount: 1, // Myself
stats: ThreadStats{exec_tasks: 0, steals: 0, steal_fails: 0, sleep_us: 0, first_after: 1},
}
}
pub fn get_stealer(&self) -> Stealer<Task<Arg,Ret>> {
assert!(!self.started);
self.stealer.clone()
}
pub fn add_other_stealer(&mut self, stealer: Stealer<Task<Arg,Ret>>) {
assert!(!self.started);
self.other_stealers.push(stealer);
self.threadcount += 1;
}
pub fn spawn(mut self) -> thread_scoped::JoinGuard<'a, ()> {
assert!(!self.started);
self.started = true;
unsafe {
thread_scoped::scoped(move|| {
self.main_loop();
})
}
}
fn main_loop(mut self) {
loop {
match self.supervisor_port.recv() {
Err(_) => break, // PoolSupervisor has been dropped, lets quit.
Ok(_) => { // Supervisor instruct to start working
loop {
self.process_queue();
match self.steal() {
Some(task) => self.execute_task(task),
None => break, // Give up for now
}
}
}
}
if self.supervisor_channel.send(SupervisorMsg::OutOfWork(self.id)).is_err() {
break; // Supervisor shut down, so we also shut down
}
}
}
fn process_queue(&mut self) {
while let Some(task) = self.deque.pop() {
self.execute_task(task);
}
}
fn execute_task(&mut self, task: Task<Arg, Ret>) {
let mut next_task: Option<Task<Arg,Ret>> = Some(task);
while let Some(task) = next_task {
if cfg!(feature = "threadstats") {self.stats.exec_tasks += 1;}
let fun = task.algo.fun;
match (fun)(task.arg) {
TaskResult::Done(ret) => {
self.handle_done(task.join, ret);
next_task = None;
},
TaskResult::Fork(args, joinarg) => {
next_task = self.handle_fork(task.algo, task.join, args, joinarg);
}
}
}
}
fn steal(&mut self) -> Option<Task<Arg,Ret>> {
if self.other_stealers.len() == 0 {
None // No one to steal from
} else {
let mut backoff_sleep: u32 = BACKOFF_INC_US;
for try in 0.. {
match self.try_steal() {
Some(task) => {
if cfg!(feature = "threadstats") && self.stats.first_after == 1 {
self.stats.first_after = self.stats.sleep_us;
}
return Some(task);
}
None => if try > STEAL_TRIES_UNTIL_BACKOFF {
self.sleepers.fetch_add(1, Ordering::SeqCst); // Check number here and set special state if last worker
if cfg!(feature = "threadstats") {self.stats.sleep_us += backoff_sleep as usize;}
unsafe { usleep(backoff_sleep); }
backoff_sleep = backoff_sleep + BACKOFF_INC_US;
if self.threadcount == self.sleepers.load(Ordering::SeqCst) {
break; // Give up
} else {
if self.threadcount == self.sleepers.fetch_sub(1, Ordering::SeqCst) {
self.sleepers.fetch_add(1, Ordering::SeqCst);
break; // Also give up
}
}
},
}
}
None | /// Try to steal tasks from the other workers.
/// Starts at a random worker and tries every worker until a task is stolen or
/// every worker has been tried once.
fn try_steal(&mut self) -> Option<Task<Arg,Ret>> {
let len = self.other_stealers.len();
let start_victim = self.rng.gen_range(0, len);
for offset in 0..len {
match self.other_stealers[(start_victim + offset) % len].steal() {
Stolen::Data(task) => {
if cfg!(feature = "threadstats") {self.stats.steals += 1;}
return Some(task);
}
Stolen::Empty | Stolen::Abort => {
if cfg!(feature = "threadstats") {self.stats.steal_fails += 1;}
continue;
}
}
}
None
}
fn handle_fork(&self,
algo: Algorithm<Arg, Ret>,
join: ResultReceiver<Ret>,
args: Vec<Arg>,
joinarg: Option<Ret>) -> Option<Task<Arg,Ret>>
{
let len: usize = args.len();
if len == 0 {
self.handle_fork_zero(algo, join, joinarg);
None
} else {
match algo.style {
AlgoStyle::Reduce(reducestyle) => {
let (vector, mut ptr_iter) = create_result_vec::<Ret>(len);
let mut sub_join = Box::new(JoinBarrier {
ret_counter: AtomicUsize::new(len),
joinfun: reducestyle,
joinarg: joinarg,
joinfunarg: vector,
parent: join,
});
let mut args_iter = args.into_iter();
let first_task = Task {
algo: algo.clone(),
arg: args_iter.next().unwrap(),
join: ResultReceiver::Join(ptr_iter.next().unwrap(), unsafe{Box::from_raw(&mut *sub_join)}),
};
loop {
match (args_iter.next(), ptr_iter.next()) {
(Some(arg), Some(ptr)) => {
let forked_task = Task {
algo: algo.clone(),
arg: arg,
join: ResultReceiver::Join(ptr, unsafe{Box::from_raw(&mut *sub_join)}),
};
self.deque.push(forked_task);
},
_ => break,
}
}
mem::forget(sub_join); // Don't drop here, last task will take care of that in handle_done
Some(first_task)
},
AlgoStyle::Search => {
for arg in args.into_iter() {
let forked_task = Task {
algo: algo.clone(),
arg: arg,
join: join.clone(),
};
self.deque.push(forked_task);
}
None
}
}
}
}
fn handle_fork_zero(&self, algo: Algorithm<Arg, Ret>, join: ResultReceiver<Ret>, joinarg: Option<Ret>) {
match algo.style {
AlgoStyle::Reduce(ref reducestyle) => {
let joinres = match *reducestyle {
ReduceStyle::NoArg(ref joinfun) => (joinfun)(&Vec::new()[..]),
ReduceStyle::Arg(ref joinfun) => {
let arg = joinarg.unwrap();
(joinfun)(&arg, &Vec::new()[..])
}
};
self.handle_done(join, joinres);
},
_ => (),
}
}
fn handle_done(&self, join: ResultReceiver<Ret>, value: Ret) {
match join {
ResultReceiver::Join(ptr, joinbarrier) => {
unsafe { write(*ptr, value); } // Writes without dropping since only null in place
if joinbarrier.ret_counter.fetch_sub(1, Ordering::SeqCst) == 1 {
let joinres = match joinbarrier.joinfun {
ReduceStyle::NoArg(ref joinfun) => (joinfun)(&joinbarrier.joinfunarg),
ReduceStyle::Arg(ref joinfun) => {
let joinarg = match joinbarrier.joinarg.as_ref() {
None => panic!("Algorithm has ReduceStyle::Arg, but no extra arg passed"),
Some(arg) => arg,
};
(joinfun)(joinarg, &joinbarrier.joinfunarg)
},
};
self.handle_done(joinbarrier.parent, joinres);
} else {
mem::forget(joinbarrier) // Don't drop if we are not last task
}
}
ResultReceiver::Channel(channel) => {
channel.lock().unwrap().send(value).unwrap();
}
}
}
}
#[cfg(feature = "threadstats")]
impl<Arg: Send, Ret: Send + Sync> Drop for WorkerThread<Arg, Ret> {
fn drop(&mut self) {
println!("Worker[{}] (t: {}, steals: {}, failed: {}, sleep: {}, first: {})",
self.id,
self.stats.exec_tasks,
self.stats.steals,
self.stats.steal_fails,
self.stats.sleep_us,
self.stats.first_after);
}
}
struct ThreadStats {
pub steals: usize,
pub steal_fails: usize,
pub exec_tasks: usize,
pub sleep_us: usize,
pub first_after: usize,
}
fn create_result_vec<Ret>(n: usize) -> (Vec<Ret>, PtrIter<Ret>) {
let mut rets: Vec<Ret> = Vec::with_capacity(n);
unsafe {
rets.set_len(n); // Force it to expand. Values in this will be invalid
let ptr_0: *mut Ret = rets.get_unchecked_mut(0);
let ptr_iter = PtrIter {
ptr_0: ptr_0,
offset: 0,
};
(rets, ptr_iter)
}
}
struct PtrIter<Ret> {
ptr_0: *mut Ret,
offset: isize,
}
impl<Ret> Iterator for PtrIter<Ret> {
type Item = Unique<Ret>;
fn next(&mut self) -> Option<Self::Item> {
let ptr = unsafe { Unique::new(self.ptr_0.offset(self.offset)) };
self.offset += 1;
Some(ptr)
}
} | }
}
| random_line_split |
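The give-up protocol in `steal()` above hinges on the shared `sleepers` counter: a worker registers itself as asleep before backing off, and if every thread in the pool is asleep at once there can be no work left. A standalone sketch of just that counter logic (thread count and atomics only; the deques and sleeping are omitted):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Returns true if this worker should give up because every thread in the
/// pool (including itself) is now registered as sleeping.
pub fn register_sleep_and_check(sleepers: &AtomicUsize, threadcount: usize) -> bool {
    sleepers.fetch_add(1, Ordering::SeqCst);
    if threadcount == sleepers.load(Ordering::SeqCst) {
        true // we were the last awake worker: the pool is out of work
    } else {
        // Wake back up; but if everyone else fell asleep in the meantime
        // (fetch_sub returns the previous value), re-register and give up too.
        if threadcount == sleepers.fetch_sub(1, Ordering::SeqCst) {
            sleepers.fetch_add(1, Ordering::SeqCst);
            true
        } else {
            false
        }
    }
}
```

The second check mirrors the `else` branch in `steal()`: without it, two workers could each decrement the counter just after the other incremented it and both spin forever.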
lib.rs | //! <img src="https://raw.githubusercontent.com/maciejhirsz/logos/master/logos.svg?sanitize=true" alt="Logos logo" width="250" align="right">
//!
//! # Logos
//!
//! _Create ridiculously fast Lexers._
//!
//! **Logos** has two goals:
//!
//! + To make it easy to create a Lexer, so you can focus on more complex problems.
//! + To make the generated Lexer faster than anything you'd write by hand.
//!
//! To achieve those, **Logos**:
//!
//! + Combines all token definitions into a single [deterministic state machine](https://en.wikipedia.org/wiki/Deterministic_finite_automaton).
//! + Optimizes branches into [lookup tables](https://en.wikipedia.org/wiki/Lookup_table) or [jump tables](https://en.wikipedia.org/wiki/Branch_table).
//! + Prevents [backtracking](https://en.wikipedia.org/wiki/ReDoS) inside token definitions.
//! + [Unwinds loops](https://en.wikipedia.org/wiki/Loop_unrolling), and batches reads to minimize bounds checking.
//! + Does all of that heavy lifting at compile time.
//!
//! ## Example
//!
//! ```rust
//! use logos::Logos;
//!
//! #[derive(Logos, Debug, PartialEq)]
//! #[logos(skip r"[ \t\n\f]+")] // Ignore this regex pattern between tokens
//! enum Token {
//! // Tokens can be literal strings, of any length.
//! #[token("fast")]
//! Fast,
//!
//! #[token(".")]
//! Period,
//!
//! // Or regular expressions.
//! #[regex("[a-zA-Z]+")]
//! Text,
//! }
//!
//! fn main() {
//! let mut lex = Token::lexer("Create ridiculously fast Lexers.");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Text)));
//! assert_eq!(lex.span(), 0..6);
//! assert_eq!(lex.slice(), "Create");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Text)));
//! assert_eq!(lex.span(), 7..19);
//! assert_eq!(lex.slice(), "ridiculously");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Fast)));
//! assert_eq!(lex.span(), 20..24);
//! assert_eq!(lex.slice(), "fast");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Text)));
//! assert_eq!(lex.slice(), "Lexers");
//! assert_eq!(lex.span(), 25..31);
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Period)));
//! assert_eq!(lex.span(), 31..32);
//! assert_eq!(lex.slice(), ".");
//!
//! assert_eq!(lex.next(), None);
//! }
//! ```
//!
//! ### Callbacks
//!
//! **Logos** can also call arbitrary functions whenever a pattern is matched,
//! which can be used to put data into a variant:
//!
//! ```rust
//! use logos::{Logos, Lexer};
//!
//! // Note: callbacks can return `Option` or `Result`
//! fn kilo(lex: &mut Lexer<Token>) -> Option<u64> {
//! let slice = lex.slice();
//! let n: u64 = slice[..slice.len() - 1].parse().ok()?; // skip 'k'
//! Some(n * 1_000)
//! }
//!
//! fn mega(lex: &mut Lexer<Token>) -> Option<u64> {
//! let slice = lex.slice();
//!     let n: u64 = slice[..slice.len() - 1].parse().ok()?; // skip 'm'
//! Some(n * 1_000_000)
//! }
//!
//! #[derive(Logos, Debug, PartialEq)]
//! #[logos(skip r"[ \t\n\f]+")]
//! enum Token {
//! // Callbacks can use closure syntax, or refer
//! // to a function defined elsewhere.
//! //
//! // Each pattern can have its own callback.
//! #[regex("[0-9]+", |lex| lex.slice().parse().ok())]
//! #[regex("[0-9]+k", kilo)]
//! #[regex("[0-9]+m", mega)]
//! Number(u64),
//! }
//!
//! fn main() {
//! let mut lex = Token::lexer("5 42k 75m");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Number(5))));
//! assert_eq!(lex.slice(), "5");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Number(42_000))));
//! assert_eq!(lex.slice(), "42k");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Number(75_000_000))));
//! assert_eq!(lex.slice(), "75m");
//!
//! assert_eq!(lex.next(), None);
//! }
//! ```
//!
//! Logos can handle callbacks with following return types:
//!
//! | Return type | Produces |
//! |--------------------------------------------------|-----------------------------------------------------------------------------------------------------|
//! | `()` | `Ok(Token::Unit)` |
//! | `bool` | `Ok(Token::Unit)` **or** `Err(<Token as Logos>::Error::default())` |
//! | `Result<(), E>` | `Ok(Token::Unit)` **or** `Err(<Token as Logos>::Error::from(err))` |
//! | `T` | `Ok(Token::Value(T))` |
//! | `Option<T>` | `Ok(Token::Value(T))` **or** `Err(<Token as Logos>::Error::default())` |
//! | `Result<T, E>` | `Ok(Token::Value(T))` **or** `Err(<Token as Logos>::Error::from(err))` |
//! | [`Skip`](./struct.Skip.html) | _skips matched input_ |
//! | [`Filter<T>`](./enum.Filter.html) | `Ok(Token::Value(T))` **or** _skips matched input_ |
//! | [`FilterResult<T, E>`](./enum.FilterResult.html) | `Ok(Token::Value(T))` **or** `Err(<Token as Logos>::Error::from(err))` **or** _skips matched input_ |
//!
//! Callbacks can also be used to perform more specialized lexing in places
//! where regular expressions are too limiting. For specifics look at
//! [`Lexer::remainder`](./struct.Lexer.html#method.remainder) and
//! [`Lexer::bump`](./struct.Lexer.html#method.bump).
//!
//! ## Errors
//!
//! By default, **Logos** uses `()` as the error type, which means that it
//! doesn't store any information about the error.
//! This can be changed by using `#[logos(error = T)]` attribute on the enum.
//! The type `T` can be any type that implements `Clone`, `PartialEq`,
//! `Default` and `From<E>` for each callback's error type.
//!
//! ## Token disambiguation
//!
//! Rule of thumb is:
//!
//! + Longer beats shorter.
//! + Specific beats generic.
//!
//! If any two definitions could match the same input, like `fast` and `[a-zA-Z]+`
//! in the example above, it's the longer and more specific definition of `Token::Fast`
//! that will be the result.
//!
//! This is done by comparing numeric priority attached to each definition. Every consecutive,
//! non-repeating single byte adds 2 to the priority, while every range or regex class adds 1.
//! Loops or optional blocks are ignored, while alternations count the shortest alternative:
//!
//! + `[a-zA-Z]+` has a priority of 1 (lowest possible), because at minimum it can match a single byte to a class.
//! + `foobar` has a priority of 12.
//! + `(foo|hello)(bar)?` has a priority of 6, `foo` being its shortest possible match.
#![cfg_attr(not(feature = "std"), no_std)]
#![warn(missing_docs)]
#![doc(html_logo_url = "https://maciej.codes/kosz/logos.png")]
#[cfg(not(feature = "std"))]
extern crate core as std;
#[cfg(feature = "export_derive")]
pub use logos_derive::Logos;
use std::fmt::Debug;
mod lexer;
pub mod source;
#[doc(hidden)]
pub mod internal;
pub use crate::lexer::{Lexer, Span, SpannedIter};
pub use crate::source::Source;
/// Trait implemented for an enum representing all tokens. You should never have
/// to implement it manually, use the `#[derive(Logos)]` attribute on your enum.
pub trait Logos<'source>: Sized {
/// Associated type `Extras` for the particular lexer. This can be set using
/// `#[logos(extras = MyExtras)]` and accessed inside callbacks.
type Extras;
/// Source type this token can be lexed from. This will default to `str`,
/// unless one of the defined patterns explicitly uses non-unicode byte values
/// or byte slices, in which case that implementation will use `[u8]`.
type Source: Source + ?Sized + 'source;
/// Error type returned by the lexer. This can be set using
/// `#[logos(error = MyError)]`. Defaults to `()` if not set.
type Error: Default + Clone + PartialEq + Debug + 'source;
/// The heart of Logos. Called by the `Lexer`. The implementation for this function
/// is generated by the `logos-derive` crate.
fn lex(lexer: &mut Lexer<'source, Self>);
/// Create a new instance of a `Lexer` that will produce tokens implementing
/// this `Logos`.
fn lexer(source: &'source Self::Source) -> Lexer<'source, Self>
where
Self::Extras: Default,
{
Lexer::new(source)
}
/// Create a new instance of a `Lexer` with the provided `Extras` that will
/// produce tokens implementing this `Logos`.
fn lexer_with_extras(
source: &'source Self::Source,
extras: Self::Extras,
) -> Lexer<'source, Self> {
Lexer::with_extras(source, extras)
}
}
/// Type that can be returned from a callback, informing the `Lexer`, to skip
/// current token match. See also [`logos::skip`](./fn.skip.html).
///
/// # Example
///
/// ```rust
/// use logos::{Logos, Skip};
///
/// #[derive(Logos, Debug, PartialEq)]
/// enum Token<'a> {
/// // We will treat "abc" as if it was whitespace.
/// // This is identical to using `logos::skip`.
/// #[regex(" |abc", |_| Skip)]
/// Ignored,
///
/// #[regex("[a-zA-Z]+")]
/// Text(&'a str),
/// }
///
/// let tokens: Vec<_> = Token::lexer("Hello abc world").collect();
///
/// assert_eq!(
/// tokens,
/// &[
/// Ok(Token::Text("Hello")),
/// Ok(Token::Text("world")),
/// ],
/// );
/// ```
pub struct Skip;
/// Type that can be returned from a callback, either producing a field
/// for a token, or skipping it.
///
/// # Example
///
/// ```rust
/// use logos::{Logos, Filter};
///
/// #[derive(Logos, Debug, PartialEq)]
/// enum Token {
/// #[regex(r"[ \n\f\t]+", logos::skip)]
/// Ignored,
///
/// #[regex("[0-9]+", |lex| {
/// let n: u64 = lex.slice().parse().unwrap();
///
/// // Only emit a token if `n` is an even number
/// match n % 2 {
/// 0 => Filter::Emit(n),
/// _ => Filter::Skip,
/// }
/// })]
/// EvenNumber(u64)
/// }
///
/// let tokens: Vec<_> = Token::lexer("20 11 42 23 100 8002").collect();
///
/// assert_eq!(
/// tokens,
/// &[
/// Ok(Token::EvenNumber(20)),
/// // skipping 11
/// Ok(Token::EvenNumber(42)),
/// // skipping 23
/// Ok(Token::EvenNumber(100)),
/// Ok(Token::EvenNumber(8002))
/// ]
/// );
/// ```
pub enum Filter<T> {
/// Emit a token with a given value `T`. Use `()` for unit variants without fields.
Emit(T),
/// Skip current match, analogous to [`Skip`](./struct.Skip.html).
Skip,
}
/// Type that can be returned from a callback, either producing a field
/// for a token, skipping it, or emitting an error.
///
/// # Example
///
/// ```rust
/// use logos::{Logos, FilterResult};
///
/// #[derive(Debug, PartialEq, Clone, Default)]
/// enum LexingError {
/// NumberParseError,
/// NumberIsTen,
/// #[default]
/// Other,
/// }
///
/// impl From<std::num::ParseIntError> for LexingError {
/// fn from(_: std::num::ParseIntError) -> Self {
/// LexingError::NumberParseError
/// }
/// }
///
/// #[derive(Logos, Debug, PartialEq)]
/// #[logos(error = LexingError)]
/// enum Token {
/// #[regex(r"[ \n\f\t]+", logos::skip)]
/// Ignored,
///
/// #[regex("[0-9]+", |lex| {
/// let n: u64 = lex.slice().parse().unwrap();
///
/// // Only emit a token if `n` is an even number.
/// if n % 2 == 0 {
/// // Emit an error if `n` is 10.
/// if n == 10 {
/// FilterResult::Error(LexingError::NumberIsTen)
/// } else {
/// FilterResult::Emit(n)
/// }
/// } else {
/// FilterResult::Skip
/// }
/// })]
/// NiceEvenNumber(u64)
/// }
///
/// let tokens: Vec<_> = Token::lexer("20 11 42 23 100 10").collect();
///
/// assert_eq!(
/// tokens,
/// &[
/// Ok(Token::NiceEvenNumber(20)),
/// // skipping 11
/// Ok(Token::NiceEvenNumber(42)),
/// // skipping 23
/// Ok(Token::NiceEvenNumber(100)),
/// // error at 10
/// Err(LexingError::NumberIsTen),
/// ]
/// );
/// ```
pub enum FilterResult<T, E> {
/// Emit a token with a given value `T`. Use `()` for unit variants without fields.
Emit(T),
/// Skip current match, analogous to [`Skip`](./struct.Skip.html).
Skip,
/// Emit a `<Token as Logos>::ERROR` token.
Error(E),
}
/// Predefined callback that will inform the `Lexer` to skip a definition.
///
/// # Example
///
/// ```rust
/// use logos::Logos;
///
/// #[derive(Logos, Debug, PartialEq)]
/// enum Token<'a> {
/// // We will treat "abc" as if it was whitespace
/// #[regex(" |abc", logos::skip)]
/// Ignored,
///
/// #[regex("[a-zA-Z]+")]
/// Text(&'a str),
/// }
///
/// let tokens: Vec<_> = Token::lexer("Hello abc world").collect();
///
/// assert_eq!(
/// tokens,
/// &[
/// Ok(Token::Text("Hello")),
/// Ok(Token::Text("world")),
/// ],
/// );
/// ```
#[inline]
pub fn skip<'source, Token: Logos<'source>>(_: &mut Lexer<'source, Token>) -> Skip |
#[cfg(doctest)]
mod test_readme {
macro_rules! external_doc_test {
($x:expr) => {
#[doc = $x]
extern "C" {}
};
}
external_doc_test!(include_str!("../../README.md"));
}
| {
Skip
} | identifier_body |
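The disambiguation arithmetic described in the `lib.rs` doc comment above (each consecutive literal byte adds 2, each range or class adds 1, loops and optional blocks are ignored, alternations score their shortest branch) can be sketched as a toy scorer. It operates on hand-parsed pattern pieces — an illustration of the scoring rule only, not logos' actual regex compiler:

```rust
/// A hand-parsed fragment of a token pattern.
/// (Toy representation for illustrating the priority rule, not logos' IR.)
pub enum Piece {
    Literal(usize),       // a run of n consecutive, non-repeating bytes
    Class(usize),         // n ranges / regex classes
    Alt(Vec<Vec<Piece>>), // alternation: scored by its cheapest branch
}

pub fn priority(pieces: &[Piece]) -> usize {
    pieces
        .iter()
        .map(|p| match p {
            Piece::Literal(n) => 2 * n,
            Piece::Class(n) => *n,
            Piece::Alt(branches) => {
                branches.iter().map(|b| priority(b)).min().unwrap_or(0)
            }
        })
        .sum()
}
```

Under this scoring, `foobar` (six literal bytes) beats `[a-zA-Z]+` (one class) on any input both can match, which is exactly the "longer and more specific wins" behavior the doc comment promises.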
lib.rs | //! <img src="https://raw.githubusercontent.com/maciejhirsz/logos/master/logos.svg?sanitize=true" alt="Logos logo" width="250" align="right">
//!
//! # Logos
//!
//! _Create ridiculously fast Lexers._
//!
//! **Logos** has two goals:
//!
//! + To make it easy to create a Lexer, so you can focus on more complex problems.
//! + To make the generated Lexer faster than anything you'd write by hand.
//!
//! To achieve those, **Logos**:
//!
//! + Combines all token definitions into a single [deterministic state machine](https://en.wikipedia.org/wiki/Deterministic_finite_automaton).
//! + Optimizes branches into [lookup tables](https://en.wikipedia.org/wiki/Lookup_table) or [jump tables](https://en.wikipedia.org/wiki/Branch_table).
//! + Prevents [backtracking](https://en.wikipedia.org/wiki/ReDoS) inside token definitions.
//! + [Unwinds loops](https://en.wikipedia.org/wiki/Loop_unrolling), and batches reads to minimize bounds checking.
//! + Does all of that heavy lifting at compile time.
//!
//! ## Example
//!
//! ```rust
//! use logos::Logos;
//!
//! #[derive(Logos, Debug, PartialEq)]
//! #[logos(skip r"[ \t\n\f]+")] // Ignore this regex pattern between tokens
//! enum Token {
//! // Tokens can be literal strings, of any length.
//! #[token("fast")]
//! Fast,
//!
//! #[token(".")]
//! Period,
//!
//! // Or regular expressions.
//! #[regex("[a-zA-Z]+")]
//! Text,
//! }
//!
//! fn main() {
//! let mut lex = Token::lexer("Create ridiculously fast Lexers.");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Text)));
//! assert_eq!(lex.span(), 0..6);
//! assert_eq!(lex.slice(), "Create");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Text)));
//! assert_eq!(lex.span(), 7..19);
//! assert_eq!(lex.slice(), "ridiculously");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Fast)));
//! assert_eq!(lex.span(), 20..24);
//! assert_eq!(lex.slice(), "fast");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Text)));
//! assert_eq!(lex.slice(), "Lexers");
//! assert_eq!(lex.span(), 25..31);
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Period)));
//! assert_eq!(lex.span(), 31..32);
//! assert_eq!(lex.slice(), ".");
//!
//! assert_eq!(lex.next(), None);
//! }
//! ```
//!
//! ### Callbacks
//!
//! **Logos** can also call arbitrary functions whenever a pattern is matched,
//! which can be used to put data into a variant:
//!
//! ```rust
//! use logos::{Logos, Lexer};
//!
//! // Note: callbacks can return `Option` or `Result`
//! fn kilo(lex: &mut Lexer<Token>) -> Option<u64> {
//! let slice = lex.slice();
//! let n: u64 = slice[..slice.len() - 1].parse().ok()?; // skip 'k'
//! Some(n * 1_000)
//! }
//!
//! fn mega(lex: &mut Lexer<Token>) -> Option<u64> {
//! let slice = lex.slice();
//!     let n: u64 = slice[..slice.len() - 1].parse().ok()?; // skip 'm'
//! Some(n * 1_000_000)
//! }
//!
//! #[derive(Logos, Debug, PartialEq)]
//! #[logos(skip r"[ \t\n\f]+")]
//! enum Token {
//! // Callbacks can use closure syntax, or refer
//! // to a function defined elsewhere.
//! //
//! // Each pattern can have its own callback.
//! #[regex("[0-9]+", |lex| lex.slice().parse().ok())]
//! #[regex("[0-9]+k", kilo)]
//! #[regex("[0-9]+m", mega)]
//! Number(u64),
//! }
//!
//! fn main() {
//! let mut lex = Token::lexer("5 42k 75m");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Number(5))));
//! assert_eq!(lex.slice(), "5");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Number(42_000))));
//! assert_eq!(lex.slice(), "42k");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Number(75_000_000))));
//! assert_eq!(lex.slice(), "75m");
//!
//! assert_eq!(lex.next(), None);
//! }
//! ```
//!
//! Logos can handle callbacks with following return types:
//!
//! | Return type | Produces |
//! |--------------------------------------------------|-----------------------------------------------------------------------------------------------------|
//! | `()` | `Ok(Token::Unit)` |
//! | `bool` | `Ok(Token::Unit)` **or** `Err(<Token as Logos>::Error::default())` |
//! | `Result<(), E>` | `Ok(Token::Unit)` **or** `Err(<Token as Logos>::Error::from(err))` |
//! | `T` | `Ok(Token::Value(T))` |
//! | `Option<T>` | `Ok(Token::Value(T))` **or** `Err(<Token as Logos>::Error::default())` |
//! | `Result<T, E>` | `Ok(Token::Value(T))` **or** `Err(<Token as Logos>::Error::from(err))` |
//! | [`Skip`](./struct.Skip.html) | _skips matched input_ |
//! | [`Filter<T>`](./enum.Filter.html) | `Ok(Token::Value(T))` **or** _skips matched input_ |
//! | [`FilterResult<T, E>`](./enum.FilterResult.html) | `Ok(Token::Value(T))` **or** `Err(<Token as Logos>::Error::from(err))` **or** _skips matched input_ |
//!
//! Callbacks can also be used to perform more specialized lexing in places
//! where regular expressions are too limiting. For specifics look at
//! [`Lexer::remainder`](./struct.Lexer.html#method.remainder) and
//! [`Lexer::bump`](./struct.Lexer.html#method.bump).
//!
//! ## Errors
//!
//! By default, **Logos** uses `()` as the error type, which means that it
//! doesn't store any information about the error.
//! This can be changed by using `#[logos(error = T)]` attribute on the enum.
//! The type `T` can be any type that implements `Clone`, `PartialEq`,
//! `Default` and `From<E>` for each callback's error type.
//!
//! ## Token disambiguation
//!
//! Rule of thumb is:
//!
//! + Longer beats shorter.
//! + Specific beats generic.
//!
//! If any two definitions could match the same input, like `fast` and `[a-zA-Z]+`
//! in the example above, it's the longer and more specific definition of `Token::Fast`
//! that will be the result.
//!
//! This is done by comparing numeric priority attached to each definition. Every consecutive,
//! non-repeating single byte adds 2 to the priority, while every range or regex class adds 1.
//! Loops or optional blocks are ignored, while alternations count the shortest alternative:
//!
//! + `[a-zA-Z]+` has a priority of 1 (lowest possible), because at minimum it can match a single byte to a class.
//! + `foobar` has a priority of 12.
//! + `(foo|hello)(bar)?` has a priority of 6, `foo` being its shortest possible match.
#![cfg_attr(not(feature = "std"), no_std)]
#![warn(missing_docs)]
#![doc(html_logo_url = "https://maciej.codes/kosz/logos.png")]
#[cfg(not(feature = "std"))]
extern crate core as std;
#[cfg(feature = "export_derive")]
pub use logos_derive::Logos;
use std::fmt::Debug;
mod lexer;
pub mod source;
#[doc(hidden)]
pub mod internal;
pub use crate::lexer::{Lexer, Span, SpannedIter};
pub use crate::source::Source;
/// Trait implemented for an enum representing all tokens. You should never have
/// to implement it manually, use the `#[derive(Logos)]` attribute on your enum.
pub trait Logos<'source>: Sized {
/// Associated type `Extras` for the particular lexer. This can be set using
/// `#[logos(extras = MyExtras)]` and accessed inside callbacks.
type Extras;
/// Source type this token can be lexed from. This will default to `str`,
/// unless one of the defined patterns explicitly uses non-unicode byte values
/// or byte slices, in which case that implementation will use `[u8]`.
type Source: Source + ?Sized + 'source;
/// Error type returned by the lexer. This can be set using
/// `#[logos(error = MyError)]`. Defaults to `()` if not set.
type Error: Default + Clone + PartialEq + Debug + 'source;
/// The heart of Logos. Called by the `Lexer`. The implementation for this function
/// is generated by the `logos-derive` crate.
fn lex(lexer: &mut Lexer<'source, Self>);
/// Create a new instance of a `Lexer` that will produce tokens implementing
/// this `Logos`.
fn lexer(source: &'source Self::Source) -> Lexer<'source, Self>
where
Self::Extras: Default,
{
Lexer::new(source)
}
/// Create a new instance of a `Lexer` with the provided `Extras` that will
/// produce tokens implementing this `Logos`.
fn lexer_with_extras(
source: &'source Self::Source,
extras: Self::Extras,
) -> Lexer<'source, Self> {
Lexer::with_extras(source, extras)
}
}
/// Type that can be returned from a callback, informing the `Lexer` to skip
/// the current token match. See also [`logos::skip`](./fn.skip.html).
///
/// # Example
///
/// ```rust
/// use logos::{Logos, Skip};
///
/// #[derive(Logos, Debug, PartialEq)]
/// enum Token<'a> {
/// // We will treat "abc" as if it was whitespace.
/// // This is identical to using `logos::skip`.
/// #[regex(" |abc", |_| Skip)]
/// Ignored,
///
/// #[regex("[a-zA-Z]+")]
/// Text(&'a str),
/// }
///
/// let tokens: Vec<_> = Token::lexer("Hello abc world").collect();
///
/// assert_eq!(
/// tokens,
/// &[
/// Ok(Token::Text("Hello")),
/// Ok(Token::Text("world")),
/// ],
/// );
/// ```
pub struct Skip;
/// Type that can be returned from a callback, either producing a field
/// for a token, or skipping it.
///
/// # Example
///
/// ```rust
/// use logos::{Logos, Filter};
///
/// #[derive(Logos, Debug, PartialEq)]
/// enum Token {
/// #[regex(r"[ \n\f\t]+", logos::skip)]
/// Ignored,
///
/// #[regex("[0-9]+", |lex| {
/// let n: u64 = lex.slice().parse().unwrap();
///
/// // Only emit a token if `n` is an even number
/// match n % 2 {
/// 0 => Filter::Emit(n),
/// _ => Filter::Skip,
/// }
/// })]
/// EvenNumber(u64)
/// }
///
/// let tokens: Vec<_> = Token::lexer("20 11 42 23 100 8002").collect();
///
/// assert_eq!(
/// tokens,
/// &[
/// Ok(Token::EvenNumber(20)),
/// // skipping 11
/// Ok(Token::EvenNumber(42)),
/// // skipping 23
/// Ok(Token::EvenNumber(100)),
/// Ok(Token::EvenNumber(8002))
/// ]
/// );
/// ```
pub enum Filter<T> {
/// Emit a token with a given value `T`. Use `()` for unit variants without fields.
Emit(T),
/// Skip current match, analogous to [`Skip`](./struct.Skip.html).
Skip,
}
/// Type that can be returned from a callback, either producing a field
/// for a token, skipping it, or emitting an error.
///
/// # Example
///
/// ```rust
/// use logos::{Logos, FilterResult};
///
/// #[derive(Debug, PartialEq, Clone, Default)]
/// enum LexingError {
/// NumberParseError,
/// NumberIsTen,
/// #[default]
/// Other,
/// }
///
/// impl From<std::num::ParseIntError> for LexingError {
/// fn from(_: std::num::ParseIntError) -> Self {
/// LexingError::NumberParseError
/// }
/// }
///
/// #[derive(Logos, Debug, PartialEq)]
/// #[logos(error = LexingError)]
/// enum Token {
/// #[regex(r"[ \n\f\t]+", logos::skip)]
/// Ignored,
///
/// #[regex("[0-9]+", |lex| {
/// let n: u64 = lex.slice().parse().unwrap();
///
/// // Only emit a token if `n` is an even number.
/// if n % 2 == 0 {
/// // Emit an error if `n` is 10.
/// if n == 10 {
/// FilterResult::Error(LexingError::NumberIsTen)
/// } else {
/// FilterResult::Emit(n)
/// }
/// } else {
/// FilterResult::Skip
/// }
/// })]
/// NiceEvenNumber(u64)
/// }
///
/// let tokens: Vec<_> = Token::lexer("20 11 42 23 100 10").collect();
///
/// assert_eq!(
/// tokens,
/// &[
/// Ok(Token::NiceEvenNumber(20)),
/// // skipping 11
/// Ok(Token::NiceEvenNumber(42)),
/// // skipping 23
/// Ok(Token::NiceEvenNumber(100)),
/// // error at 10
/// Err(LexingError::NumberIsTen),
/// ]
/// );
/// ```
pub enum | <T, E> {
/// Emit a token with a given value `T`. Use `()` for unit variants without fields.
Emit(T),
/// Skip current match, analogous to [`Skip`](./struct.Skip.html).
Skip,
/// Emit a `<Token as Logos>::ERROR` token.
Error(E),
}
/// Predefined callback that will inform the `Lexer` to skip a definition.
///
/// # Example
///
/// ```rust
/// use logos::Logos;
///
/// #[derive(Logos, Debug, PartialEq)]
/// enum Token<'a> {
/// // We will treat "abc" as if it was whitespace
/// #[regex(" |abc", logos::skip)]
/// Ignored,
///
/// #[regex("[a-zA-Z]+")]
/// Text(&'a str),
/// }
///
/// let tokens: Vec<_> = Token::lexer("Hello abc world").collect();
///
/// assert_eq!(
/// tokens,
/// &[
/// Ok(Token::Text("Hello")),
/// Ok(Token::Text("world")),
/// ],
/// );
/// ```
#[inline]
pub fn skip<'source, Token: Logos<'source>>(_: &mut Lexer<'source, Token>) -> Skip {
Skip
}
#[cfg(doctest)]
mod test_readme {
macro_rules! external_doc_test {
($x:expr) => {
#[doc = $x]
extern "C" {}
};
}
external_doc_test!(include_str!("../../README.md"));
}
| FilterResult | identifier_name |
lib.rs | //! <img src="https://raw.githubusercontent.com/maciejhirsz/logos/master/logos.svg?sanitize=true" alt="Logos logo" width="250" align="right">
//!
//! # Logos
//!
//! _Create ridiculously fast Lexers._
//!
//! **Logos** has two goals:
//!
//! + To make it easy to create a Lexer, so you can focus on more complex problems.
//! + To make the generated Lexer faster than anything you'd write by hand.
//!
//! To achieve those, **Logos**:
//!
//! + Combines all token definitions into a single [deterministic state machine](https://en.wikipedia.org/wiki/Deterministic_finite_automaton).
//! + Optimizes branches into [lookup tables](https://en.wikipedia.org/wiki/Lookup_table) or [jump tables](https://en.wikipedia.org/wiki/Branch_table).
//! + Prevents [backtracking](https://en.wikipedia.org/wiki/ReDoS) inside token definitions.
//! + [Unwinds loops](https://en.wikipedia.org/wiki/Loop_unrolling), and batches reads to minimize bounds checking.
//! + Does all of that heavy lifting at compile time.
//!
//! ## Example
//!
//! ```rust
//! use logos::Logos;
//!
//! #[derive(Logos, Debug, PartialEq)]
//! #[logos(skip r"[ \t\n\f]+")] // Ignore this regex pattern between tokens
//! enum Token {
//! // Tokens can be literal strings, of any length.
//! #[token("fast")]
//! Fast,
//!
//! #[token(".")]
//! Period,
//!
//! // Or regular expressions.
//! #[regex("[a-zA-Z]+")]
//! Text,
//! }
//!
//! fn main() {
//! let mut lex = Token::lexer("Create ridiculously fast Lexers.");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Text)));
//! assert_eq!(lex.span(), 0..6);
//! assert_eq!(lex.slice(), "Create");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Text)));
//! assert_eq!(lex.span(), 7..19);
//! assert_eq!(lex.slice(), "ridiculously");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Fast)));
//! assert_eq!(lex.span(), 20..24);
//! assert_eq!(lex.slice(), "fast");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Text)));
//! assert_eq!(lex.slice(), "Lexers");
//! assert_eq!(lex.span(), 25..31);
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Period)));
//! assert_eq!(lex.span(), 31..32);
//! assert_eq!(lex.slice(), ".");
//!
//! assert_eq!(lex.next(), None);
//! }
//! ```
//!
//! ### Callbacks
//!
//! **Logos** can also call arbitrary functions whenever a pattern is matched,
//! which can be used to put data into a variant:
//!
//! ```rust
//! use logos::{Logos, Lexer};
//!
//! // Note: callbacks can return `Option` or `Result`
//! fn kilo(lex: &mut Lexer<Token>) -> Option<u64> {
//! let slice = lex.slice();
//! let n: u64 = slice[..slice.len() - 1].parse().ok()?; // skip 'k'
//! Some(n * 1_000)
//! }
//!
//! fn mega(lex: &mut Lexer<Token>) -> Option<u64> {
//! let slice = lex.slice();
//! let n: u64 = slice[..slice.len() - 1].parse().ok()?; // skip 'm'
//! Some(n * 1_000_000)
//! }
//!
//! #[derive(Logos, Debug, PartialEq)]
//! #[logos(skip r"[ \t\n\f]+")]
//! enum Token {
//! // Callbacks can use closure syntax, or refer
//! // to a function defined elsewhere.
//! //
//! // Each pattern can have its own callback.
//! #[regex("[0-9]+", |lex| lex.slice().parse().ok())]
//! #[regex("[0-9]+k", kilo)]
//! #[regex("[0-9]+m", mega)]
//! Number(u64),
//! }
//!
//! fn main() {
//! let mut lex = Token::lexer("5 42k 75m");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Number(5))));
//! assert_eq!(lex.slice(), "5");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Number(42_000))));
//! assert_eq!(lex.slice(), "42k");
//!
//! assert_eq!(lex.next(), Some(Ok(Token::Number(75_000_000))));
//! assert_eq!(lex.slice(), "75m");
//!
//! assert_eq!(lex.next(), None);
//! }
//! ```
//!
//! Logos can handle callbacks with the following return types:
//!
//! | Return type | Produces |
//! |--------------------------------------------------|-----------------------------------------------------------------------------------------------------|
//! | `()` | `Ok(Token::Unit)` |
//! | `bool` | `Ok(Token::Unit)` **or** `Err(<Token as Logos>::Error::default())` |
//! | `Result<(), E>` | `Ok(Token::Unit)` **or** `Err(<Token as Logos>::Error::from(err))` |
//! | `T` | `Ok(Token::Value(T))` |
//! | `Option<T>` | `Ok(Token::Value(T))` **or** `Err(<Token as Logos>::Error::default())` |
//! | `Result<T, E>` | `Ok(Token::Value(T))` **or** `Err(<Token as Logos>::Error::from(err))` |
//! | [`Skip`](./struct.Skip.html) | _skips matched input_ |
//! | [`Filter<T>`](./enum.Filter.html) | `Ok(Token::Value(T))` **or** _skips matched input_ |
//! | [`FilterResult<T, E>`](./enum.FilterResult.html) | `Ok(Token::Value(T))` **or** `Err(<Token as Logos>::Error::from(err))` **or** _skips matched input_ |
//!
//! Callbacks can also be used to perform more specialized lexing in places
//! where regular expressions are too limiting. For specifics look at
//! [`Lexer::remainder`](./struct.Lexer.html#method.remainder) and
//! [`Lexer::bump`](./struct.Lexer.html#method.bump).
//!
//! ## Errors
//!
//! By default, **Logos** uses `()` as the error type, which means that it
//! doesn't store any information about the error.
//! This can be changed by using `#[logos(error = T)]` attribute on the enum.
//! The type `T` can be any type that implements `Clone`, `PartialEq`,
//! `Default` and `From<E>` for each callback's error type.
//!
//! ## Token disambiguation
//!
//! Rule of thumb is:
//!
//! + Longer beats shorter.
//! + Specific beats generic.
//!
//! If any two definitions could match the same input, like `fast` and `[a-zA-Z]+`
//! in the example above, it's the longer and more specific definition of `Token::Fast`
//! that will be the result.
//!
//! This is done by comparing numeric priority attached to each definition. Every consecutive,
//! non-repeating single byte adds 2 to the priority, while every range or regex class adds 1.
//! Loops or optional blocks are ignored, while alternations count the shortest alternative:
//!
//! + `[a-zA-Z]+` has a priority of 1 (lowest possible), because at minimum it can match a single byte to a class.
//! + `foobar` has a priority of 12.
//! + `(foo|hello)(bar)?` has a priority of 6, `foo` being its shortest possible match.
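A toy model of this arithmetic can be run in isolation. This is a hedged sketch, not logos's real implementation; `Piece` and `priority` are illustrative names, and loops, optional groups, and alternations are deliberately out of scope.

```rust
// Toy model of the priority rule above: +2 per consecutive, non-repeating
// literal byte, +1 per range or regex class. Flat sequences only.
#[derive(Clone, Copy)]
enum Piece {
    Byte,  // one literal byte, e.g. each byte of "foobar"
    Class, // one range or class, e.g. [a-zA-Z]
}

fn priority(pieces: &[Piece]) -> u32 {
    pieces
        .iter()
        .map(|p| match p {
            Piece::Byte => 2,
            Piece::Class => 1,
        })
        .sum()
}

fn main() {
    assert_eq!(priority(&[Piece::Byte; 6]), 12); // "foobar"
    assert_eq!(priority(&[Piece::Class]), 1);    // one match of [a-zA-Z]
    println!("ok");
}
```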
#![cfg_attr(not(feature = "std"), no_std)]
#![warn(missing_docs)]
#![doc(html_logo_url = "https://maciej.codes/kosz/logos.png")]
#[cfg(not(feature = "std"))]
extern crate core as std;
#[cfg(feature = "export_derive")]
pub use logos_derive::Logos;
use std::fmt::Debug;
mod lexer;
pub mod source;
#[doc(hidden)]
pub mod internal;
pub use crate::lexer::{Lexer, Span, SpannedIter};
pub use crate::source::Source;
/// Trait implemented for an enum representing all tokens. You should never have
/// to implement it manually, use the `#[derive(Logos)]` attribute on your enum.
pub trait Logos<'source>: Sized {
/// Associated type `Extras` for the particular lexer. This can be set using
/// `#[logos(extras = MyExtras)]` and accessed inside callbacks.
type Extras;
/// Source type this token can be lexed from. This will default to `str`,
/// unless one of the defined patterns explicitly uses non-unicode byte values
/// or byte slices, in which case that implementation will use `[u8]`.
type Source: Source + ?Sized + 'source;
/// Error type returned by the lexer. This can be set using
/// `#[logos(error = MyError)]`. Defaults to `()` if not set.
type Error: Default + Clone + PartialEq + Debug + 'source;
/// The heart of Logos. Called by the `Lexer`. The implementation for this function
/// is generated by the `logos-derive` crate.
fn lex(lexer: &mut Lexer<'source, Self>);
/// Create a new instance of a `Lexer` that will produce tokens implementing
/// this `Logos`.
fn lexer(source: &'source Self::Source) -> Lexer<'source, Self>
where
Self::Extras: Default,
{
Lexer::new(source)
}
/// Create a new instance of a `Lexer` with the provided `Extras` that will
/// produce tokens implementing this `Logos`.
fn lexer_with_extras(
source: &'source Self::Source,
extras: Self::Extras,
) -> Lexer<'source, Self> {
Lexer::with_extras(source, extras)
}
}
/// Type that can be returned from a callback, informing the `Lexer` to skip
/// the current token match. See also [`logos::skip`](./fn.skip.html).
///
/// # Example
///
/// ```rust
/// use logos::{Logos, Skip};
///
/// #[derive(Logos, Debug, PartialEq)]
/// enum Token<'a> {
/// // We will treat "abc" as if it was whitespace.
/// // This is identical to using `logos::skip`.
/// #[regex(" |abc", |_| Skip)]
/// Ignored,
///
/// #[regex("[a-zA-Z]+")]
/// Text(&'a str),
/// }
///
/// let tokens: Vec<_> = Token::lexer("Hello abc world").collect();
///
/// assert_eq!(
/// tokens,
/// &[
/// Ok(Token::Text("Hello")),
/// Ok(Token::Text("world")),
/// ],
/// );
/// ```
pub struct Skip;
/// Type that can be returned from a callback, either producing a field
/// for a token, or skipping it.
///
/// # Example
///
/// ```rust
/// use logos::{Logos, Filter};
///
/// #[derive(Logos, Debug, PartialEq)]
/// enum Token {
/// #[regex(r"[ \n\f\t]+", logos::skip)]
/// Ignored,
///
/// #[regex("[0-9]+", |lex| {
/// let n: u64 = lex.slice().parse().unwrap();
///
/// // Only emit a token if `n` is an even number
/// match n % 2 {
/// 0 => Filter::Emit(n),
/// _ => Filter::Skip,
/// }
/// })]
/// EvenNumber(u64)
/// }
///
/// let tokens: Vec<_> = Token::lexer("20 11 42 23 100 8002").collect();
///
/// assert_eq!(
/// tokens,
/// &[
/// Ok(Token::EvenNumber(20)),
/// // skipping 11
/// Ok(Token::EvenNumber(42)),
/// // skipping 23
/// Ok(Token::EvenNumber(100)),
/// Ok(Token::EvenNumber(8002))
/// ]
/// );
/// ```
pub enum Filter<T> {
/// Emit a token with a given value `T`. Use `()` for unit variants without fields.
Emit(T),
/// Skip current match, analogous to [`Skip`](./struct.Skip.html).
Skip, | }
/// Type that can be returned from a callback, either producing a field
/// for a token, skipping it, or emitting an error.
///
/// # Example
///
/// ```rust
/// use logos::{Logos, FilterResult};
///
/// #[derive(Debug, PartialEq, Clone, Default)]
/// enum LexingError {
/// NumberParseError,
/// NumberIsTen,
/// #[default]
/// Other,
/// }
///
/// impl From<std::num::ParseIntError> for LexingError {
/// fn from(_: std::num::ParseIntError) -> Self {
/// LexingError::NumberParseError
/// }
/// }
///
/// #[derive(Logos, Debug, PartialEq)]
/// #[logos(error = LexingError)]
/// enum Token {
/// #[regex(r"[ \n\f\t]+", logos::skip)]
/// Ignored,
///
/// #[regex("[0-9]+", |lex| {
/// let n: u64 = lex.slice().parse().unwrap();
///
/// // Only emit a token if `n` is an even number.
/// if n % 2 == 0 {
/// // Emit an error if `n` is 10.
/// if n == 10 {
/// FilterResult::Error(LexingError::NumberIsTen)
/// } else {
/// FilterResult::Emit(n)
/// }
/// } else {
/// FilterResult::Skip
/// }
/// })]
/// NiceEvenNumber(u64)
/// }
///
/// let tokens: Vec<_> = Token::lexer("20 11 42 23 100 10").collect();
///
/// assert_eq!(
/// tokens,
/// &[
/// Ok(Token::NiceEvenNumber(20)),
/// // skipping 11
/// Ok(Token::NiceEvenNumber(42)),
/// // skipping 23
/// Ok(Token::NiceEvenNumber(100)),
/// // error at 10
/// Err(LexingError::NumberIsTen),
/// ]
/// );
/// ```
pub enum FilterResult<T, E> {
/// Emit a token with a given value `T`. Use `()` for unit variants without fields.
Emit(T),
/// Skip current match, analogous to [`Skip`](./struct.Skip.html).
Skip,
/// Emit a `<Token as Logos>::ERROR` token.
Error(E),
}
/// Predefined callback that will inform the `Lexer` to skip a definition.
///
/// # Example
///
/// ```rust
/// use logos::Logos;
///
/// #[derive(Logos, Debug, PartialEq)]
/// enum Token<'a> {
/// // We will treat "abc" as if it was whitespace
/// #[regex(" |abc", logos::skip)]
/// Ignored,
///
/// #[regex("[a-zA-Z]+")]
/// Text(&'a str),
/// }
///
/// let tokens: Vec<_> = Token::lexer("Hello abc world").collect();
///
/// assert_eq!(
/// tokens,
/// &[
/// Ok(Token::Text("Hello")),
/// Ok(Token::Text("world")),
/// ],
/// );
/// ```
#[inline]
pub fn skip<'source, Token: Logos<'source>>(_: &mut Lexer<'source, Token>) -> Skip {
Skip
}
#[cfg(doctest)]
mod test_readme {
macro_rules! external_doc_test {
($x:expr) => {
#[doc = $x]
extern "C" {}
};
}
external_doc_test!(include_str!("../../README.md"));
} | random_line_split |
|
grid.rs | // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License in the LICENSE-APACHE file or at:
// https://www.apache.org/licenses/LICENSE-2.0
//! A grid widget
use kas::layout::{DynGridStorage, GridChildInfo, GridDimensions};
use kas::layout::{GridSetter, GridSolver, RulesSetter, RulesSolver};
use kas::{layout, prelude::*};
use std::ops::{Index, IndexMut};
/// A grid of boxed widgets
///
/// This is a parameterisation of [`Grid`] over boxed widgets, itself
/// parameterised over the input data type.
///
/// See documentation of [`Grid`] type.
pub type BoxGrid<Data> = Grid<Box<dyn Widget<Data = Data>>>;
impl_scope! {
/// A generic grid widget
///
/// Child widgets are displayed in a grid, according to each child's
/// [`GridChildInfo`]. This allows spans and overlapping widgets. The numbers
/// of rows and columns is determined automatically while the sizes of rows and
/// columns are determined based on their contents (including special handling
/// for spans, *mostly* with good results).
///
/// Note that all child widgets are stored in a list internally. The order of
/// widgets in that list does not affect display position, but does have a few
/// effects: (a) widgets may be accessed in this order via indexing, (b) widgets
/// are configured and drawn in this order, (c) navigating
/// through widgets with the Tab key currently uses the list order (though it
/// may be changed in the future to use display order).
///
/// There is no protection against multiple widgets occupying the same cell.
/// If this does happen, the last widget in that cell will appear on top, but
/// overlapping widget drawing may not be pretty.
///
/// ## Alternatives
///
/// Where the entries are fixed, also consider custom [`Widget`] implementations.
///
/// ## Performance
///
/// Most operations are `O(n)` in the number of children.
///
/// # Messages
///
/// If a handler is specified via [`Self::on_messages`] then this handler is
/// called when a child pushes a message.
#[autoimpl(Default)]
#[widget]
pub struct Grid<W: Widget> {
core: widget_core!(),
widgets: Vec<(GridChildInfo, W)>,
data: DynGridStorage,
dim: GridDimensions,
on_messages: Option<Box<dyn Fn(&mut EventCx, usize)>>,
}
impl Widget for Self {
type Data = W::Data;
fn for_child_node(
&mut self,
data: &W::Data,
index: usize,
closure: Box<dyn FnOnce(Node<'_>) + '_>,
) {
if let Some(w) = self.widgets.get_mut(index) {
closure(w.1.as_node(data));
}
}
}
impl Layout for Self {
#[inline]
fn num_children(&self) -> usize {
self.widgets.len()
}
fn get_child(&self, index: usize) -> Option<&dyn Layout> {
self.widgets.get(index).map(|w| w.1.as_layout())
}
fn size_rules(&mut self, sizer: SizeCx, axis: AxisInfo) -> SizeRules {
let mut solver = GridSolver::<Vec<_>, Vec<_>, _>::new(axis, self.dim, &mut self.data);
for (info, child) in &mut self.widgets {
solver.for_child(&mut self.data, *info, |axis| {
child.size_rules(sizer.re(), axis)
});
}
solver.finish(&mut self.data)
}
fn set_rect(&mut self, cx: &mut ConfigCx, rect: Rect) {
self.core.rect = rect;
let mut setter = GridSetter::<Vec<_>, Vec<_>, _>::new(rect, self.dim, &mut self.data);
for (info, child) in &mut self.widgets {
child.set_rect(cx, setter.child_rect(&mut self.data, *info));
}
}
fn find_id(&mut self, coord: Coord) -> Option<WidgetId> {
if !self.rect().contains(coord) {
return None;
}
self.widgets
.iter_mut()
.find_map(|(_, child)| child.find_id(coord))
.or_else(|| Some(self.id()))
}
fn draw(&mut self, mut draw: DrawCx) {
for (_, child) in &mut self.widgets {
draw.recurse(child);
}
}
}
impl Events for Self {
fn handle_messages(&mut self, cx: &mut EventCx, _: &Self::Data) {
if let Some(ref f) = self.on_messages {
let index = cx.last_child().expect("message not sent from self");
f(cx, index);
}
}
}
}
impl<W: Widget> Grid<W> {
/// Construct a new instance
#[inline]
pub fn new() -> Self {
Self::new_vec(vec![])
}
/// Construct a new instance
#[inline]
pub fn new_vec(widgets: Vec<(GridChildInfo, W)>) -> Self {
let mut grid = Grid {
widgets,
..Default::default()
};
grid.calc_dim();
grid
}
/// Assign a child message handler (inline style)
///
/// This handler is called when a child pushes a message:
/// `f(cx, index)`, where `index` is the child's index.
#[inline]
pub fn on_messages(mut self, f: impl Fn(&mut EventCx, usize) + 'static) -> Self {
self.on_messages = Some(Box::new(f));
self
}
/// Get grid dimensions
///
/// The numbers of rows, columns and spans is determined automatically.
#[inline]
pub fn dimensions(&self) -> GridDimensions {
self.dim
}
/// Access layout storage
///
/// Use [`Self::dimensions`] to get expected dimensions.
#[inline]
pub fn layout_storage(&mut self) -> &mut impl layout::GridStorage {
&mut self.data
}
fn calc_dim(&mut self) {
let mut dim = GridDimensions::default();
for child in &self.widgets {
dim.cols = dim.cols.max(child.0.col_end);
dim.rows = dim.rows.max(child.0.row_end);
if child.0.col_end - child.0.col > 1 {
dim.col_spans += 1;
}
if child.0.row_end - child.0.row > 1 {
dim.row_spans += 1;
}
}
self.dim = dim;
}
/// Construct via a builder
pub fn build<F: FnOnce(GridBuilder<W>)>(f: F) -> Self {
let mut grid = Self::default();
let _ = grid.edit(f);
grid
}
/// Edit an existing grid via a builder
///
/// This may be used to edit children before window construction. It may
/// also be used from a running UI, but in this case a full reconfigure
/// of the window's widgets is required (triggered by the the return
/// value, [`Action::RECONFIGURE`]).
pub fn edit<F: FnOnce(GridBuilder<W>)>(&mut self, f: F) -> Action {
f(GridBuilder(&mut self.widgets));
self.calc_dim();
Action::RECONFIGURE
}
/// True if there are no child widgets
pub fn is_empty(&self) -> bool {
self.widgets.is_empty()
}
/// Returns the number of child widgets
pub fn len(&self) -> usize {
self.widgets.len()
}
/// Returns a reference to the child, if any
pub fn get(&self, index: usize) -> Option<&W> {
self.widgets.get(index).map(|t| &t.1)
}
/// Returns a mutable reference to the child, if any
pub fn get_mut(&mut self, index: usize) -> Option<&mut W> {
self.widgets.get_mut(index).map(|t| &mut t.1)
}
/// Iterate over children
pub fn iter(&self) -> impl Iterator<Item = &(GridChildInfo, W)> {
ListIter {
list: &self.widgets,
}
}
/// Mutably iterate over children
pub fn iter_mut(&mut self) -> impl Iterator<Item = &mut (GridChildInfo, W)> {
ListIterMut {
list: &mut self.widgets,
}
}
}
pub struct GridBuilder<'a, W: Widget>(&'a mut Vec<(GridChildInfo, W)>);
impl<'a, W: Widget> GridBuilder<'a, W> {
/// True if there are no child widgets
pub fn is_empty(&self) -> bool {
self.0.is_empty()
}
/// Returns the number of child widgets
pub fn len(&self) -> usize {
self.0.len()
}
/// Returns the number of elements the vector can hold without reallocating.
pub fn capacity(&self) -> usize {
self.0.capacity()
}
/// Reserves capacity for at least `additional` more elements to be inserted
/// into the list. See documentation of [`Vec::reserve`].
pub fn reserve(&mut self, additional: usize) {
self.0.reserve(additional);
}
/// Remove all child widgets
pub fn clear(&mut self) {
self.0.clear();
}
/// Add a child widget
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
pub fn push(&mut self, info: GridChildInfo, widget: W) {
self.0.push((info, widget));
}
/// Add a child widget to the given cell
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
pub fn push_cell(&mut self, col: u32, row: u32, widget: W) {
let info = GridChildInfo::new(col, row);
self.push(info, widget);
}
/// Add a child widget to the given cell, builder style
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
#[must_use]
pub fn with_cell(self, col: u32, row: u32, widget: W) -> Self {
self.with_cell_span(col, row, 1, 1, widget)
}
/// Add a child widget to the given cell, with spans
///
/// Parameters `col_span` and `row_span` are the number of columns/rows
/// spanned and should each be at least 1.
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
pub fn push_cell_span(&mut self, col: u32, row: u32, col_span: u32, row_span: u32, widget: W) {
let info = GridChildInfo {
col,
col_end: col + col_span,
row,
row_end: row + row_span,
};
self.push(info, widget);
}
/// Add a child widget to the given cell, with spans, builder style
///
/// Parameters `col_span` and `row_span` are the number of columns/rows
/// spanned and should each be at least 1.
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
#[must_use]
pub fn with_cell_span(
mut self,
col: u32,
row: u32,
col_span: u32,
row_span: u32,
widget: W,
) -> Self {
self.push_cell_span(col, row, col_span, row_span, widget);
self
}
/// Remove the last child widget
///
/// Returns `None` if there are no children. Otherwise, this
/// triggers a reconfigure before the next draw operation.
pub fn pop(&mut self) -> Option<(GridChildInfo, W)> {
self.0.pop()
}
/// Inserts a child widget at position `index`
///
/// Panics if `index > len`.
pub fn insert(&mut self, index: usize, info: GridChildInfo, widget: W) {
self.0.insert(index, (info, widget));
}
/// Removes the child widget at position `index`
///
/// Panics if `index` is out of bounds.
pub fn remove(&mut self, index: usize) -> (GridChildInfo, W) {
self.0.remove(index)
}
/// Replace the child at `index`
///
/// Panics if `index` is out of bounds.
pub fn replace(&mut self, index: usize, info: GridChildInfo, widget: W) -> (GridChildInfo, W) {
let mut item = (info, widget);
std::mem::swap(&mut item, &mut self.0[index]);
item
}
/// Append child widgets from an iterator
pub fn extend<T: IntoIterator<Item = (GridChildInfo, W)>>(&mut self, iter: T) {
self.0.extend(iter);
}
/// Resize, using the given closure to construct new widgets
pub fn resize_with<F: Fn(usize) -> (GridChildInfo, W)>(&mut self, len: usize, f: F) {
let l0 = self.0.len();
if l0 > len {
self.0.truncate(len);
} else if l0 < len {
self.0.reserve(len);
for i in l0..len {
self.0.push(f(i));
}
}
}
/// Retain only widgets satisfying predicate `f`
///
/// See documentation of [`Vec::retain`].
pub fn retain<F: FnMut(&(GridChildInfo, W)) -> bool>(&mut self, f: F) {
self.0.retain(f);
}
/// Get the first index of a child occupying the given cell, if any
pub fn find_child_cell(&self, col: u32, row: u32) -> Option<usize> {
for (i, (info, _)) in self.0.iter().enumerate() {
if info.col <= col && col < info.col_end && info.row <= row && row < info.row_end {
return Some(i);
}
}
None
}
/// Iterate over children
pub fn iter(&self) -> impl Iterator<Item = &(GridChildInfo, W)> {
ListIter { list: self.0 }
}
/// Mutably iterate over children
pub fn iter_mut(&mut self) -> impl Iterator<Item = &mut (GridChildInfo, W)> {
ListIterMut { list: self.0 }
}
}
impl<W: Widget> FromIterator<(GridChildInfo, W)> for Grid<W> {
#[inline]
fn from_iter<T>(iter: T) -> Self
where
T: IntoIterator<Item = (GridChildInfo, W)>,
{
Self::new_vec(iter.into_iter().collect())
}
}
impl<W: Widget> Index<usize> for Grid<W> {
type Output = (GridChildInfo, W);
fn index(&self, index: usize) -> &Self::Output {
&self.widgets[index]
}
}
impl<W: Widget> IndexMut<usize> for Grid<W> {
fn index_mut(&mut self, index: usize) -> &mut Self::Output {
&mut self.widgets[index]
}
}
struct | <'a, W: Widget> {
list: &'a [(GridChildInfo, W)],
}
impl<'a, W: Widget> Iterator for ListIter<'a, W> {
type Item = &'a (GridChildInfo, W);
fn next(&mut self) -> Option<Self::Item> {
if let Some((first, rest)) = self.list.split_first() {
self.list = rest;
Some(first)
} else {
None
}
}
fn size_hint(&self) -> (usize, Option<usize>) {
let len = self.len();
(len, Some(len))
}
}
impl<'a, W: Widget> ExactSizeIterator for ListIter<'a, W> {
fn len(&self) -> usize {
self.list.len()
}
}
struct ListIterMut<'a, W: Widget> {
list: &'a mut [(GridChildInfo, W)],
}
impl<'a, W: Widget> Iterator for ListIterMut<'a, W> {
type Item = &'a mut (GridChildInfo, W);
fn next(&mut self) -> Option<Self::Item> {
let list = std::mem::take(&mut self.list);
if let Some((first, rest)) = list.split_first_mut() {
self.list = rest;
Some(first)
} else {
None
}
}
fn size_hint(&self) -> (usize, Option<usize>) {
let len = self.len();
(len, Some(len))
}
}
impl<'a, W: Widget> ExactSizeIterator for ListIterMut<'a, W> {
fn len(&self) -> usize {
self.list.len()
}
}
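The dimension bookkeeping performed by `calc_dim` above can be exercised on its own. This is a hedged, stdlib-only sketch: `Dim` and `Info` are simplified stand-ins for kas's `GridDimensions` and `GridChildInfo`, not the real types.

```rust
// Sketch of Grid::calc_dim: grid size is the maximum end coordinate over all
// children; any child wider or taller than one cell counts as a span.
#[derive(Default, Debug, PartialEq)]
struct Dim {
    cols: u32,
    rows: u32,
    col_spans: u32,
    row_spans: u32,
}

struct Info {
    col: u32,
    col_end: u32,
    row: u32,
    row_end: u32,
}

fn calc_dim(children: &[Info]) -> Dim {
    let mut dim = Dim::default();
    for c in children {
        dim.cols = dim.cols.max(c.col_end);
        dim.rows = dim.rows.max(c.row_end);
        if c.col_end - c.col > 1 {
            dim.col_spans += 1;
        }
        if c.row_end - c.row > 1 {
            dim.row_spans += 1;
        }
    }
    dim
}

fn main() {
    let children = [
        Info { col: 0, col_end: 2, row: 0, row_end: 1 }, // spans columns 0..2
        Info { col: 2, col_end: 3, row: 1, row_end: 2 }, // single cell (2, 1)
    ];
    assert_eq!(
        calc_dim(&children),
        Dim { cols: 3, rows: 2, col_spans: 1, row_spans: 0 }
    );
    println!("ok");
}
```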
| ListIter | identifier_name |
grid.rs | // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License in the LICENSE-APACHE file or at:
// https://www.apache.org/licenses/LICENSE-2.0
//! A grid widget
use kas::layout::{DynGridStorage, GridChildInfo, GridDimensions};
use kas::layout::{GridSetter, GridSolver, RulesSetter, RulesSolver};
use kas::{layout, prelude::*};
use std::ops::{Index, IndexMut};
/// A grid of boxed widgets
///
/// This is a parameterisation of [`Grid`] over boxed widgets, itself
/// parameterised over the input data type.
///
/// See documentation of [`Grid`] type.
pub type BoxGrid<Data> = Grid<Box<dyn Widget<Data = Data>>>;
impl_scope! {
/// A generic grid widget
///
/// Child widgets are displayed in a grid, according to each child's
/// [`GridChildInfo`]. This allows spans and overlapping widgets. The numbers
/// of rows and columns is determined automatically while the sizes of rows and
/// columns are determined based on their contents (including special handling
/// for spans, *mostly* with good results).
///
/// Note that all child widgets are stored in a list internally. The order of
/// widgets in that list does not affect display position, but does have a few
/// effects: (a) widgets may be accessed in this order via indexing, (b) widgets
/// are configured and drawn in this order, (c) navigating
/// through widgets with the Tab key currently uses the list order (though it
/// may be changed in the future to use display order).
///
/// There is no protection against multiple widgets occupying the same cell.
/// If this does happen, the last widget in that cell will appear on top, but
/// overlapping widget drawing may not be pretty.
///
/// ## Alternatives
///
/// Where the entries are fixed, also consider custom [`Widget`] implementations.
///
/// ## Performance
///
/// Most operations are `O(n)` in the number of children.
///
/// # Messages
///
/// If a handler is specified via [`Self::on_messages`] then this handler is
/// called when a child pushes a message.
#[autoimpl(Default)]
#[widget]
pub struct Grid<W: Widget> {
core: widget_core!(),
widgets: Vec<(GridChildInfo, W)>,
data: DynGridStorage,
dim: GridDimensions,
on_messages: Option<Box<dyn Fn(&mut EventCx, usize)>>,
}
impl Widget for Self {
type Data = W::Data;
fn for_child_node(
&mut self,
data: &W::Data,
index: usize,
closure: Box<dyn FnOnce(Node<'_>) + '_>,
) {
if let Some(w) = self.widgets.get_mut(index) {
closure(w.1.as_node(data));
}
}
}
impl Layout for Self {
#[inline]
fn num_children(&self) -> usize {
self.widgets.len()
}
fn get_child(&self, index: usize) -> Option<&dyn Layout> {
self.widgets.get(index).map(|w| w.1.as_layout())
}
fn size_rules(&mut self, sizer: SizeCx, axis: AxisInfo) -> SizeRules {
let mut solver = GridSolver::<Vec<_>, Vec<_>, _>::new(axis, self.dim, &mut self.data);
for (info, child) in &mut self.widgets {
solver.for_child(&mut self.data, *info, |axis| {
child.size_rules(sizer.re(), axis)
});
}
solver.finish(&mut self.data)
}
fn set_rect(&mut self, cx: &mut ConfigCx, rect: Rect) {
self.core.rect = rect;
let mut setter = GridSetter::<Vec<_>, Vec<_>, _>::new(rect, self.dim, &mut self.data);
for (info, child) in &mut self.widgets {
child.set_rect(cx, setter.child_rect(&mut self.data, *info));
}
}
fn find_id(&mut self, coord: Coord) -> Option<WidgetId> {
if !self.rect().contains(coord) {
return None;
}
self.widgets
.iter_mut()
.find_map(|(_, child)| child.find_id(coord))
.or_else(|| Some(self.id()))
}
fn draw(&mut self, mut draw: DrawCx) {
for (_, child) in &mut self.widgets {
draw.recurse(child);
}
}
}
impl Events for Self {
fn handle_messages(&mut self, cx: &mut EventCx, _: &Self::Data) {
if let Some(ref f) = self.on_messages {
let index = cx.last_child().expect("message not sent from self");
f(cx, index);
}
}
}
}
impl<W: Widget> Grid<W> {
/// Construct a new instance
#[inline]
pub fn new() -> Self {
Self::new_vec(vec![])
}
/// Construct a new instance
#[inline]
pub fn new_vec(widgets: Vec<(GridChildInfo, W)>) -> Self {
let mut grid = Grid {
widgets,
..Default::default()
};
grid.calc_dim();
grid
}
/// Assign a child message handler (inline style)
///
/// This handler is called when a child pushes a message:
/// `f(cx, index)`, where `index` is the child's index.
#[inline]
pub fn on_messages(mut self, f: impl Fn(&mut EventCx, usize) + 'static) -> Self {
self.on_messages = Some(Box::new(f));
self
}
/// Get grid dimensions
///
/// The numbers of rows, columns and spans are determined automatically.
#[inline]
pub fn dimensions(&self) -> GridDimensions {
self.dim
}
/// Access layout storage
///
/// Use [`Self::dimensions`] to get expected dimensions.
#[inline]
pub fn layout_storage(&mut self) -> &mut impl layout::GridStorage {
&mut self.data
}
fn calc_dim(&mut self) {
let mut dim = GridDimensions::default();
for child in &self.widgets {
dim.cols = dim.cols.max(child.0.col_end);
dim.rows = dim.rows.max(child.0.row_end);
if child.0.col_end - child.0.col > 1 {
dim.col_spans += 1;
}
if child.0.row_end - child.0.row > 1 {
dim.row_spans += 1;
}
}
self.dim = dim;
}
/// Construct via a builder
pub fn build<F: FnOnce(GridBuilder<W>)>(f: F) -> Self {
let mut grid = Self::default();
let _ = grid.edit(f);
grid
}
/// Edit an existing grid via a builder
///
/// This may be used to edit children before window construction. It may
/// also be used from a running UI, but in this case a full reconfigure
/// of the window's widgets is required (triggered by the return
/// value, [`Action::RECONFIGURE`]).
pub fn edit<F: FnOnce(GridBuilder<W>)>(&mut self, f: F) -> Action {
f(GridBuilder(&mut self.widgets));
self.calc_dim();
Action::RECONFIGURE
}
/// True if there are no child widgets
pub fn is_empty(&self) -> bool {
self.widgets.is_empty()
}
/// Returns the number of child widgets
pub fn len(&self) -> usize {
self.widgets.len()
}
/// Returns a reference to the child, if any
pub fn get(&self, index: usize) -> Option<&W> {
self.widgets.get(index).map(|t| &t.1)
}
/// Returns a mutable reference to the child, if any
pub fn get_mut(&mut self, index: usize) -> Option<&mut W> {
self.widgets.get_mut(index).map(|t| &mut t.1)
}
/// Iterate over children
pub fn iter(&self) -> impl Iterator<Item = &(GridChildInfo, W)> {
ListIter {
list: &self.widgets,
}
}
/// Mutably iterate over children
pub fn iter_mut(&mut self) -> impl Iterator<Item = &mut (GridChildInfo, W)> {
ListIterMut {
list: &mut self.widgets,
}
}
}
pub struct GridBuilder<'a, W: Widget>(&'a mut Vec<(GridChildInfo, W)>);
impl<'a, W: Widget> GridBuilder<'a, W> {
/// True if there are no child widgets
pub fn is_empty(&self) -> bool {
self.0.is_empty()
}
/// Returns the number of child widgets
pub fn len(&self) -> usize {
self.0.len()
}
/// Returns the number of elements the vector can hold without reallocating.
pub fn capacity(&self) -> usize {
self.0.capacity()
}
/// Reserves capacity for at least `additional` more elements to be inserted
/// into the list. See documentation of [`Vec::reserve`].
pub fn reserve(&mut self, additional: usize) {
self.0.reserve(additional);
}
/// Remove all child widgets
pub fn clear(&mut self) {
self.0.clear();
}
/// Add a child widget
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
pub fn push(&mut self, info: GridChildInfo, widget: W) {
self.0.push((info, widget));
}
/// Add a child widget to the given cell
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
pub fn push_cell(&mut self, col: u32, row: u32, widget: W) {
let info = GridChildInfo::new(col, row);
self.push(info, widget);
}
/// Add a child widget to the given cell, builder style
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
#[must_use]
pub fn with_cell(self, col: u32, row: u32, widget: W) -> Self {
self.with_cell_span(col, row, 1, 1, widget)
}
/// Add a child widget to the given cell, with spans
///
/// Parameters `col_span` and `row_span` are the number of columns/rows
/// spanned and should each be at least 1.
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
pub fn push_cell_span(&mut self, col: u32, row: u32, col_span: u32, row_span: u32, widget: W) {
let info = GridChildInfo {
col,
col_end: col + col_span,
row,
row_end: row + row_span,
};
self.push(info, widget);
}
/// Add a child widget to the given cell, with spans, builder style
///
/// Parameters `col_span` and `row_span` are the number of columns/rows
/// spanned and should each be at least 1.
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
#[must_use]
pub fn with_cell_span(
mut self,
col: u32,
row: u32,
col_span: u32,
row_span: u32,
widget: W,
) -> Self {
self.push_cell_span(col, row, col_span, row_span, widget);
self
}
/// Remove the last child widget
///
/// Returns `None` if there are no children. Otherwise, this
/// triggers a reconfigure before the next draw operation.
pub fn pop(&mut self) -> Option<(GridChildInfo, W)> {
self.0.pop()
}
/// Inserts a child widget at position `index`
///
/// Panics if `index > len`.
pub fn insert(&mut self, index: usize, info: GridChildInfo, widget: W) {
self.0.insert(index, (info, widget));
}
/// Removes the child widget at position `index`
///
/// Panics if `index` is out of bounds.
pub fn remove(&mut self, index: usize) -> (GridChildInfo, W) {
self.0.remove(index)
}
/// Replace the child at `index`
///
/// Panics if `index` is out of bounds.
pub fn replace(&mut self, index: usize, info: GridChildInfo, widget: W) -> (GridChildInfo, W) {
let mut item = (info, widget);
std::mem::swap(&mut item, &mut self.0[index]);
item
}
/// Append child widgets from an iterator
pub fn extend<T: IntoIterator<Item = (GridChildInfo, W)>>(&mut self, iter: T) {
self.0.extend(iter);
}
/// Resize, using the given closure to construct new widgets
pub fn resize_with<F: Fn(usize) -> (GridChildInfo, W)>(&mut self, len: usize, f: F) {
let l0 = self.0.len();
if l0 > len {
self.0.truncate(len);
} else if l0 < len {
self.0.reserve(len);
for i in l0..len {
self.0.push(f(i));
}
}
}
/// Retain only widgets satisfying predicate `f`
///
/// See documentation of [`Vec::retain`].
pub fn retain<F: FnMut(&(GridChildInfo, W)) -> bool>(&mut self, f: F) {
self.0.retain(f);
}
/// Get the first index of a child occupying the given cell, if any
pub fn find_child_cell(&self, col: u32, row: u32) -> Option<usize> {
for (i, (info, _)) in self.0.iter().enumerate() {
if info.col <= col && col < info.col_end && info.row <= row && row < info.row_end {
return Some(i);
}
}
None
}
/// Iterate over children
pub fn iter(&self) -> impl Iterator<Item = &(GridChildInfo, W)> {
ListIter { list: self.0 }
}
/// Mutably iterate over children
pub fn iter_mut(&mut self) -> impl Iterator<Item = &mut (GridChildInfo, W)> {
ListIterMut { list: self.0 }
}
}
impl<W: Widget> FromIterator<(GridChildInfo, W)> for Grid<W> {
#[inline]
fn from_iter<T>(iter: T) -> Self
where
T: IntoIterator<Item = (GridChildInfo, W)>,
{
Self::new_vec(iter.into_iter().collect())
}
}
impl<W: Widget> Index<usize> for Grid<W> {
type Output = (GridChildInfo, W);
fn index(&self, index: usize) -> &Self::Output {
&self.widgets[index]
}
}
impl<W: Widget> IndexMut<usize> for Grid<W> {
fn index_mut(&mut self, index: usize) -> &mut Self::Output {
&mut self.widgets[index]
}
}
struct ListIter<'a, W: Widget> {
list: &'a [(GridChildInfo, W)],
}
impl<'a, W: Widget> Iterator for ListIter<'a, W> {
type Item = &'a (GridChildInfo, W);
fn next(&mut self) -> Option<Self::Item> {
if let Some((first, rest)) = self.list.split_first() {
self.list = rest;
Some(first)
} else {
None
}
}
fn size_hint(&self) -> (usize, Option<usize>) {
let len = self.len();
(len, Some(len))
}
}
impl<'a, W: Widget> ExactSizeIterator for ListIter<'a, W> {
fn len(&self) -> usize {
self.list.len()
}
}
struct ListIterMut<'a, W: Widget> {
list: &'a mut [(GridChildInfo, W)],
}
impl<'a, W: Widget> Iterator for ListIterMut<'a, W> {
type Item = &'a mut (GridChildInfo, W);
fn next(&mut self) -> Option<Self::Item> {
let list = std::mem::take(&mut self.list);
if let Some((first, rest)) = list.split_first_mut() | else {
None
}
}
fn size_hint(&self) -> (usize, Option<usize>) {
let len = self.len();
(len, Some(len))
}
}
impl<'a, W: Widget> ExactSizeIterator for ListIterMut<'a, W> {
fn len(&self) -> usize {
self.list.len()
}
}
| {
self.list = rest;
Some(first)
} | conditional_block |
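The `| conditional_block` marker above closes one dataset row: per the schema in the header, each row stores the file's `prefix` and `suffix` plus the held-out `middle`, labelled with a `fim_type`. A minimal sketch of rejoining such a row back into contiguous source text — the field values below are shortened stand-ins modelled on the row above, not the full stored contents:

```python
def reassemble(row: dict) -> str:
    """Rejoin a fill-in-the-middle row: prefix + middle + suffix."""
    return row["prefix"] + row["middle"] + row["suffix"]

# Shortened stand-in for the `conditional_block` row above.
row = {
    "prefix": "if let Some((first, rest)) = list.split_first_mut() ",
    "middle": "{ self.list = rest; Some(first) }",
    "suffix": " else { None }",
    "fim_type": "conditional_block",
}
print(reassemble(row))
```

Note that exact whitespace at the split point is preserved in the stored fields, so concatenation alone restores the original text.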
grid.rs | // Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License in the LICENSE-APACHE file or at:
// https://www.apache.org/licenses/LICENSE-2.0
//! A grid widget
use kas::layout::{DynGridStorage, GridChildInfo, GridDimensions};
use kas::layout::{GridSetter, GridSolver, RulesSetter, RulesSolver};
use kas::{layout, prelude::*};
use std::ops::{Index, IndexMut};
/// A grid of boxed widgets
///
/// This is a parameterisation of [`Grid`] over boxed widgets,
/// itself parameterised over the data type.
///
/// See documentation of [`Grid`] type.
pub type BoxGrid<Data> = Grid<Box<dyn Widget<Data = Data>>>;
impl_scope! {
/// A generic grid widget
///
/// Child widgets are displayed in a grid, according to each child's
/// [`GridChildInfo`]. This allows spans and overlapping widgets. The numbers
/// of rows and columns are determined automatically while the sizes of rows and
/// columns are determined based on their contents (including special handling
/// for spans, *mostly* with good results).
///
/// Note that all child widgets are stored in a list internally. The order of
/// widgets in that list does not affect display position, but does have a few
/// effects: (a) widgets may be accessed in this order via indexing, (b) widgets
/// are configured and drawn in this order, (c) navigating
/// through widgets with the Tab key currently uses the list order (though it
/// may be changed in the future to use display order).
///
/// There is no protection against multiple widgets occupying the same cell.
/// If this does happen, the last widget in that cell will appear on top, but
/// overlapping widget drawing may not be pretty.
///
/// ## Alternatives
///
/// Where the entries are fixed, also consider custom [`Widget`] implementations.
///
/// ## Performance
///
/// Most operations are `O(n)` in the number of children.
///
/// # Messages
///
/// If a handler is specified via [`Self::on_messages`] then this handler is
/// called when a child pushes a message.
#[autoimpl(Default)]
#[widget]
pub struct Grid<W: Widget> {
core: widget_core!(),
widgets: Vec<(GridChildInfo, W)>,
data: DynGridStorage,
dim: GridDimensions,
on_messages: Option<Box<dyn Fn(&mut EventCx, usize)>>,
}
impl Widget for Self {
type Data = W::Data;
fn for_child_node(
&mut self,
data: &W::Data,
index: usize,
closure: Box<dyn FnOnce(Node<'_>) + '_>,
) {
if let Some(w) = self.widgets.get_mut(index) {
closure(w.1.as_node(data));
}
}
}
impl Layout for Self {
#[inline]
fn num_children(&self) -> usize {
self.widgets.len()
}
fn get_child(&self, index: usize) -> Option<&dyn Layout> {
self.widgets.get(index).map(|w| w.1.as_layout()) | for (info, child) in &mut self.widgets {
solver.for_child(&mut self.data, *info, |axis| {
child.size_rules(sizer.re(), axis)
});
}
solver.finish(&mut self.data)
}
fn set_rect(&mut self, cx: &mut ConfigCx, rect: Rect) {
self.core.rect = rect;
let mut setter = GridSetter::<Vec<_>, Vec<_>, _>::new(rect, self.dim, &mut self.data);
for (info, child) in &mut self.widgets {
child.set_rect(cx, setter.child_rect(&mut self.data, *info));
}
}
fn find_id(&mut self, coord: Coord) -> Option<WidgetId> {
if !self.rect().contains(coord) {
return None;
}
self.widgets
.iter_mut()
.find_map(|(_, child)| child.find_id(coord))
.or_else(|| Some(self.id()))
}
fn draw(&mut self, mut draw: DrawCx) {
for (_, child) in &mut self.widgets {
draw.recurse(child);
}
}
}
impl Events for Self {
fn handle_messages(&mut self, cx: &mut EventCx, _: &Self::Data) {
if let Some(ref f) = self.on_messages {
let index = cx.last_child().expect("message not sent from self");
f(cx, index);
}
}
}
}
impl<W: Widget> Grid<W> {
/// Construct a new instance
#[inline]
pub fn new() -> Self {
Self::new_vec(vec![])
}
/// Construct a new instance
#[inline]
pub fn new_vec(widgets: Vec<(GridChildInfo, W)>) -> Self {
let mut grid = Grid {
widgets,
..Default::default()
};
grid.calc_dim();
grid
}
/// Assign a child message handler (inline style)
///
/// This handler is called when a child pushes a message:
/// `f(cx, index)`, where `index` is the child's index.
#[inline]
pub fn on_messages(mut self, f: impl Fn(&mut EventCx, usize) + 'static) -> Self {
self.on_messages = Some(Box::new(f));
self
}
/// Get grid dimensions
///
/// The numbers of rows, columns and spans are determined automatically.
#[inline]
pub fn dimensions(&self) -> GridDimensions {
self.dim
}
/// Access layout storage
///
/// Use [`Self::dimensions`] to get expected dimensions.
#[inline]
pub fn layout_storage(&mut self) -> &mut impl layout::GridStorage {
&mut self.data
}
fn calc_dim(&mut self) {
let mut dim = GridDimensions::default();
for child in &self.widgets {
dim.cols = dim.cols.max(child.0.col_end);
dim.rows = dim.rows.max(child.0.row_end);
if child.0.col_end - child.0.col > 1 {
dim.col_spans += 1;
}
if child.0.row_end - child.0.row > 1 {
dim.row_spans += 1;
}
}
self.dim = dim;
}
/// Construct via a builder
pub fn build<F: FnOnce(GridBuilder<W>)>(f: F) -> Self {
let mut grid = Self::default();
let _ = grid.edit(f);
grid
}
/// Edit an existing grid via a builder
///
/// This may be used to edit children before window construction. It may
/// also be used from a running UI, but in this case a full reconfigure
/// of the window's widgets is required (triggered by the return
/// value, [`Action::RECONFIGURE`]).
pub fn edit<F: FnOnce(GridBuilder<W>)>(&mut self, f: F) -> Action {
f(GridBuilder(&mut self.widgets));
self.calc_dim();
Action::RECONFIGURE
}
/// True if there are no child widgets
pub fn is_empty(&self) -> bool {
self.widgets.is_empty()
}
/// Returns the number of child widgets
pub fn len(&self) -> usize {
self.widgets.len()
}
/// Returns a reference to the child, if any
pub fn get(&self, index: usize) -> Option<&W> {
self.widgets.get(index).map(|t| &t.1)
}
/// Returns a mutable reference to the child, if any
pub fn get_mut(&mut self, index: usize) -> Option<&mut W> {
self.widgets.get_mut(index).map(|t| &mut t.1)
}
/// Iterate over children
pub fn iter(&self) -> impl Iterator<Item = &(GridChildInfo, W)> {
ListIter {
list: &self.widgets,
}
}
/// Mutably iterate over children
pub fn iter_mut(&mut self) -> impl Iterator<Item = &mut (GridChildInfo, W)> {
ListIterMut {
list: &mut self.widgets,
}
}
}
pub struct GridBuilder<'a, W: Widget>(&'a mut Vec<(GridChildInfo, W)>);
impl<'a, W: Widget> GridBuilder<'a, W> {
/// True if there are no child widgets
pub fn is_empty(&self) -> bool {
self.0.is_empty()
}
/// Returns the number of child widgets
pub fn len(&self) -> usize {
self.0.len()
}
/// Returns the number of elements the vector can hold without reallocating.
pub fn capacity(&self) -> usize {
self.0.capacity()
}
/// Reserves capacity for at least `additional` more elements to be inserted
/// into the list. See documentation of [`Vec::reserve`].
pub fn reserve(&mut self, additional: usize) {
self.0.reserve(additional);
}
/// Remove all child widgets
pub fn clear(&mut self) {
self.0.clear();
}
/// Add a child widget
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
pub fn push(&mut self, info: GridChildInfo, widget: W) {
self.0.push((info, widget));
}
/// Add a child widget to the given cell
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
pub fn push_cell(&mut self, col: u32, row: u32, widget: W) {
let info = GridChildInfo::new(col, row);
self.push(info, widget);
}
/// Add a child widget to the given cell, builder style
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
#[must_use]
pub fn with_cell(self, col: u32, row: u32, widget: W) -> Self {
self.with_cell_span(col, row, 1, 1, widget)
}
/// Add a child widget to the given cell, with spans
///
/// Parameters `col_span` and `row_span` are the number of columns/rows
/// spanned and should each be at least 1.
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
pub fn push_cell_span(&mut self, col: u32, row: u32, col_span: u32, row_span: u32, widget: W) {
let info = GridChildInfo {
col,
col_end: col + col_span,
row,
row_end: row + row_span,
};
self.push(info, widget);
}
/// Add a child widget to the given cell, with spans, builder style
///
/// Parameters `col_span` and `row_span` are the number of columns/rows
/// spanned and should each be at least 1.
///
/// The child is added to the end of the "list", thus appears last in
/// navigation order.
#[must_use]
pub fn with_cell_span(
mut self,
col: u32,
row: u32,
col_span: u32,
row_span: u32,
widget: W,
) -> Self {
self.push_cell_span(col, row, col_span, row_span, widget);
self
}
/// Remove the last child widget
///
/// Returns `None` if there are no children. Otherwise, this
/// triggers a reconfigure before the next draw operation.
pub fn pop(&mut self) -> Option<(GridChildInfo, W)> {
self.0.pop()
}
/// Inserts a child widget at position `index`
///
/// Panics if `index > len`.
pub fn insert(&mut self, index: usize, info: GridChildInfo, widget: W) {
self.0.insert(index, (info, widget));
}
/// Removes the child widget at position `index`
///
/// Panics if `index` is out of bounds.
pub fn remove(&mut self, index: usize) -> (GridChildInfo, W) {
self.0.remove(index)
}
/// Replace the child at `index`
///
/// Panics if `index` is out of bounds.
pub fn replace(&mut self, index: usize, info: GridChildInfo, widget: W) -> (GridChildInfo, W) {
let mut item = (info, widget);
std::mem::swap(&mut item, &mut self.0[index]);
item
}
/// Append child widgets from an iterator
pub fn extend<T: IntoIterator<Item = (GridChildInfo, W)>>(&mut self, iter: T) {
self.0.extend(iter);
}
/// Resize, using the given closure to construct new widgets
pub fn resize_with<F: Fn(usize) -> (GridChildInfo, W)>(&mut self, len: usize, f: F) {
let l0 = self.0.len();
if l0 > len {
self.0.truncate(len);
} else if l0 < len {
self.0.reserve(len);
for i in l0..len {
self.0.push(f(i));
}
}
}
/// Retain only widgets satisfying predicate `f`
///
/// See documentation of [`Vec::retain`].
pub fn retain<F: FnMut(&(GridChildInfo, W)) -> bool>(&mut self, f: F) {
self.0.retain(f);
}
/// Get the first index of a child occupying the given cell, if any
pub fn find_child_cell(&self, col: u32, row: u32) -> Option<usize> {
for (i, (info, _)) in self.0.iter().enumerate() {
if info.col <= col && col < info.col_end && info.row <= row && row < info.row_end {
return Some(i);
}
}
None
}
/// Iterate over children
pub fn iter(&self) -> impl Iterator<Item = &(GridChildInfo, W)> {
ListIter { list: self.0 }
}
/// Mutably iterate over children
pub fn iter_mut(&mut self) -> impl Iterator<Item = &mut (GridChildInfo, W)> {
ListIterMut { list: self.0 }
}
}
impl<W: Widget> FromIterator<(GridChildInfo, W)> for Grid<W> {
#[inline]
fn from_iter<T>(iter: T) -> Self
where
T: IntoIterator<Item = (GridChildInfo, W)>,
{
Self::new_vec(iter.into_iter().collect())
}
}
impl<W: Widget> Index<usize> for Grid<W> {
type Output = (GridChildInfo, W);
fn index(&self, index: usize) -> &Self::Output {
&self.widgets[index]
}
}
impl<W: Widget> IndexMut<usize> for Grid<W> {
fn index_mut(&mut self, index: usize) -> &mut Self::Output {
&mut self.widgets[index]
}
}
struct ListIter<'a, W: Widget> {
list: &'a [(GridChildInfo, W)],
}
impl<'a, W: Widget> Iterator for ListIter<'a, W> {
type Item = &'a (GridChildInfo, W);
fn next(&mut self) -> Option<Self::Item> {
if let Some((first, rest)) = self.list.split_first() {
self.list = rest;
Some(first)
} else {
None
}
}
fn size_hint(&self) -> (usize, Option<usize>) {
let len = self.len();
(len, Some(len))
}
}
impl<'a, W: Widget> ExactSizeIterator for ListIter<'a, W> {
fn len(&self) -> usize {
self.list.len()
}
}
struct ListIterMut<'a, W: Widget> {
list: &'a mut [(GridChildInfo, W)],
}
impl<'a, W: Widget> Iterator for ListIterMut<'a, W> {
type Item = &'a mut (GridChildInfo, W);
fn next(&mut self) -> Option<Self::Item> {
let list = std::mem::take(&mut self.list);
if let Some((first, rest)) = list.split_first_mut() {
self.list = rest;
Some(first)
} else {
None
}
}
fn size_hint(&self) -> (usize, Option<usize>) {
let len = self.len();
(len, Some(len))
}
}
impl<'a, W: Widget> ExactSizeIterator for ListIterMut<'a, W> {
fn len(&self) -> usize {
self.list.len()
}
} | }
fn size_rules(&mut self, sizer: SizeCx, axis: AxisInfo) -> SizeRules {
let mut solver = GridSolver::<Vec<_>, Vec<_>, _>::new(axis, self.dim, &mut self.data); | random_line_split |
main.rs | extern crate clap;
extern crate rust_htslib;
extern crate bio;
use clap::{Arg, App};
use rust_htslib::bam;
use rust_htslib::prelude::*;
use bio::io::fasta;
#[derive(Clone)]
pub struct GenomicInterval {
pub tid: u32,
pub chrom: String, // chromosome name
pub start_pos: u32, // start of interval
pub end_pos: u32, // end of interval (inclusive)
}
pub fn parse_target_names(bam_file: &String) -> Vec<String> {
let bam = bam::Reader::from_path(bam_file).unwrap();
let header_view = bam.header();
let target_names_dec: Vec<&[u8]> = header_view.target_names();
let mut target_names: Vec<String> = vec![];
for t_name_dec in target_names_dec {
let mut name_vec: Vec<char> = vec![];
for decr in t_name_dec {
let dec: u8 = *decr;
name_vec.push(dec as char);
}
let name_string: String = name_vec.into_iter().collect();
target_names.push(name_string);
}
target_names
}
pub fn u8_to_string(u: &[u8]) -> String {
String::from_utf8(u.to_vec()).unwrap()
}
pub fn dna_vec(u: &[u8]) -> Vec<char> {
let mut v: Vec<char> = Vec::with_capacity(u.len());
for cu in u.to_ascii_uppercase() {
let c = cu as char;
//assert!(c == 'A' || c == 'C' || c == 'G' || c == 'T' || c == 'N');
if c == 'A' || c == 'C' || c == 'G' || c == 'T' || c == 'N' {
v.push(c);
} else {
eprintln!("Warning: Unexpected base \"{}\" encountered. Replaced with \"N\".",
c);
v.push('N');
}
}
v
}
pub fn | (bam_file: &String) -> Vec<GenomicInterval> {
let bam = bam::Reader::from_path(bam_file).unwrap();
let header_view = bam.header();
let target_names_dec: Vec<&[u8]> = header_view.target_names();
let mut intervals: Vec<GenomicInterval> = vec![];
for (tid, t_name_dec) in target_names_dec.iter().enumerate() {
let mut name_vec: Vec<char> = vec![];
for decr in t_name_dec.iter() {
let dec: u8 = *decr;
name_vec.push(dec as char);
}
let name_string: String = name_vec.into_iter().collect();
intervals.push(GenomicInterval{
tid: tid as u32,
chrom: name_string,
start_pos: 0,
end_pos: header_view.target_len(tid as u32).unwrap()-1
});
}
intervals
}
// given a bam file name and a possible genomic interval,
// if the interval exists then just return a vector holding that lone interval
// otherwise, if the interval is None,
// return a vector holding GenomicIntervals representing the whole genome.
pub fn get_interval_lst(bam_file: &String, interval: &Option<GenomicInterval>) -> Vec<GenomicInterval> {
match interval {
&Some(ref iv) => {
vec![iv.clone()]
}
&None => {
get_whole_genome_intervals(bam_file)
}
}
}
// this is really ugly. TODO a less verbose implementation
pub fn parse_region_string(region_string: Option<&str>,
bamfile_name: &String)
-> Option<GenomicInterval> {
let bam = bam::Reader::from_path(bamfile_name).unwrap();
match region_string {
Some(r) if r.contains(":") && r.contains("-") => {
let split1: Vec<&str> = r.split(":").collect();
if split1.len() != 2 {
panic!("Invalid format for region. Please use <chrom> or <chrom:start-stop>");
}
let split2: Vec<&str> = split1[1].split("-").collect();
if split2.len() != 2 {
panic!("Invalid format for region. Please use <chrom> or <chrom:start-stop>");
}
let iv_chrom = split1[0].to_string();
let iv_start = split2[0].parse::<u32>().expect("Invalid position value specified in region string.");
let iv_end = split2[1].parse::<u32>().expect("Invalid position value specified in region string.");
let mut tid: u32 = 0;
for name in bam.header().target_names() {
if u8_to_string(name) == iv_chrom {
break;
}
tid += 1;
}
if tid as usize == bam.header().target_names().len() {
panic!("Chromosome name for region is not in BAM file.");
}
Some(GenomicInterval {
tid: tid,
chrom: iv_chrom,
start_pos: iv_start - 1,
end_pos: iv_end - 1,
})
}
Some(r) => {
let r_str = r.to_string();
let mut tid: u32 = 0;
for name in bam.header().target_names() {
if u8_to_string(name) == r_str {
break;
}
tid += 1;
}
if tid as usize == bam.header().target_names().len() {
panic!("Chromosome name for region is not in BAM file.");
}
let tlen = bam.header().target_len(tid).unwrap();
Some(GenomicInterval {
tid: tid,
chrom: r_str,
start_pos: 0,
end_pos: tlen - 1,
})
}
None => None,
}
}
pub fn count_mapped_reads(bam_file: &String,
fasta_file: &String,
interval: &Option<GenomicInterval>,
min_coverage: u32,
min_mapq: u8,
min_map_frac: f64,
mapped_count_mode: bool) {
let target_names = parse_target_names(&bam_file);
let mut fasta = fasta::IndexedReader::from_file(&fasta_file).unwrap();
// pileup over all covered sites
let mut ref_seq: Vec<char> = vec![];
let mut prev_tid = 4294967295; // sentinel (u32::MAX): no chromosome loaded yet
let a_str = "A".to_string();
let c_str = "C".to_string();
let g_str = "G".to_string();
let t_str = "T".to_string();
let interval_lst: Vec<GenomicInterval> = get_interval_lst(bam_file, interval);
let mut bam_ix = bam::IndexedReader::from_path(bam_file).unwrap();
let mut count = 0;
for iv in interval_lst {
bam_ix.fetch(iv.tid as u32, iv.start_pos as u32, iv.end_pos as u32 + 1).expect("Error seeking BAM file while extracting fragments.");
let bam_pileup = bam_ix.pileup();
for p in bam_pileup {
let pileup = p.unwrap();
let tid: usize = pileup.tid() as usize;
let chrom: String = target_names[tid].clone();
let pos0: usize = pileup.pos() as usize;
if chrom != iv.chrom ||
pos0 < iv.start_pos as usize ||
pos0 > iv.end_pos as usize {
continue;
}
if tid != prev_tid {
let mut ref_seq_u8: Vec<u8> = vec![];
fasta.read_all(&chrom, &mut ref_seq_u8).expect("Failed to read fasta sequence record.");
ref_seq = dna_vec(&ref_seq_u8);
}
let ref_base_str = (ref_seq[pileup.pos() as usize]).to_string();
if ref_base_str.contains("N") {
continue;
}
assert!(ref_base_str == a_str || ref_base_str == c_str || ref_base_str == g_str || ref_base_str == t_str);
let mut depth: usize = 0;
let mut well_mapped: usize = 0;
// pileup the bases for a single position and count number of each base
for alignment in pileup.alignments() {
let record = alignment.record();
// may be faster to implement this as bitwise operation on raw flag in the future?
if record.is_secondary() || record.is_quality_check_failed() ||
record.is_duplicate() || record.is_supplementary() {
continue;
}
depth += 1;
if record.is_unmapped() || record.mapq() < min_mapq {
continue;
}
well_mapped += 1;
}
let well_mapped_frac = well_mapped as f64 / depth as f64;
if mapped_count_mode {
if well_mapped >= min_coverage as usize {
count += 1;
}
} else {
if depth >= min_coverage as usize && well_mapped_frac >= min_map_frac {
count += 1;
}
}
prev_tid = tid;
}
}
println!("{}",count);
}
fn main() {
let input_args = App::new("Map Counter")
.version("0.1")
.author("Peter Edge <[email protected]>")
.about("Given a bam, count the number of positions exceeding a given min coverage and \"well-mapped\" fraction.")
.arg(Arg::with_name("Input BAM")
.short("b")
.long("bam")
.value_name("BAM")
.help("sorted, indexed BAM file.")
.display_order(10)
.required(true)
.takes_value(true))
.arg(Arg::with_name("Input FASTA")
.short("r")
.long("ref")
.value_name("FASTA")
.help("indexed fasta reference that BAM file is aligned to")
.display_order(20)
.required(true)
.takes_value(true))
.arg(Arg::with_name("Chrom")
.short("C")
.long("chrom")
.value_name("string")
.help("Chromosome to limit analysis to.")
.display_order(30)
.takes_value(true))
.arg(Arg::with_name("Min coverage")
.short("c")
.long("min_cov")
.value_name("int")
.help("Minimum coverage (of reads passing filters) to consider position as a potential SNV.")
.display_order(40)
.required(true)
.default_value("0"))
.arg(Arg::with_name("Well-mapped fraction")
.short("f")
.long("map_frac")
.value_name("float")
.help("Minimum fraction of mapped reads with mapq >= MAPQ_CUTOFF.")
.display_order(50)
.required(true)
.default_value("0"))
.arg(Arg::with_name("Min mapq")
.short("q")
.long("min_mapq")
.value_name("int")
.help("Map quality cutoff (for calculating well-mapped fraction).")
.display_order(60)
.default_value("60"))
.arg(Arg::with_name("Mapped read count mode")
.short("m")
.long("mapped_count_mode")
.help("Ignore map fraction and use total mapped read count. \
Return the total number of positions with at least min_cov reads having mapq>=min_mapq. \
Default behavior is to return the number of positions with at least min_cov reads, where \
at least map_frac of them have mapq>=min_mapq")
.display_order(161))
.get_matches();
let bamfile_name = input_args.value_of("Input BAM").unwrap().to_string();
let fastafile_name = input_args.value_of("Input FASTA").unwrap().to_string();
let interval: Option<GenomicInterval> = parse_region_string(input_args.value_of("Chrom"),
&bamfile_name);
let min_mapq = input_args.value_of("Min mapq")
.unwrap()
.parse::<u8>()
.expect("Argument min_mapq must be an int!");
let min_cov = input_args.value_of("Min coverage")
.unwrap()
.parse::<u32>()
.expect("Argument min_cov must be an int!");
let min_map_frac = input_args.value_of("Well-mapped fraction")
.unwrap()
.parse::<f64>()
.expect("Argument map_frac must be a positive float!");
let mapped_count_mode: bool = match input_args.occurrences_of("Mapped read count mode") {
0 => {false},
1 => {true},
_ => {
panic!("mapped_count_mode specified multiple times");
}
};
count_mapped_reads(&bamfile_name,
&fastafile_name,
&interval,
min_cov,
min_mapq,
min_map_frac,
mapped_count_mode);
}
| get_whole_genome_intervals | identifier_name |
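The `identifier_name` row above masks only a function name: the `prefix` ends at `pub fn `, the `suffix` resumes at the parameter list, and the held-out `middle` is the single identifier `get_whole_genome_intervals`. Rejoining the three pieces restores the signature — the `prefix` and `suffix` below are shortened stand-ins for the full stored fields; only the `middle` is copied verbatim from the row:

```python
# Shortened stand-ins; only `middle` is taken verbatim from the row above.
prefix = "pub fn "
middle = "get_whole_genome_intervals"  # the held-out identifier
suffix = "(bam_file: &String) -> Vec<GenomicInterval> {"

signature = prefix + middle + suffix
print(signature)
```

Unlike `random_line_split`, which can cut anywhere, this fim_type keeps the hole aligned to a single token, so the middle is always a bare name.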
main.rs | extern crate clap;
extern crate rust_htslib;
extern crate bio;
use clap::{Arg, App};
use rust_htslib::bam;
use rust_htslib::prelude::*;
use bio::io::fasta;
#[derive(Clone)]
pub struct GenomicInterval {
pub tid: u32,
pub chrom: String, // chromosome name
pub start_pos: u32, // start of interval
pub end_pos: u32, // end of interval (inclusive)
}
pub fn parse_target_names(bam_file: &String) -> Vec<String> {
let bam = bam::Reader::from_path(bam_file).unwrap();
let header_view = bam.header();
let target_names_dec: Vec<&[u8]> = header_view.target_names();
let mut target_names: Vec<String> = vec![];
for t_name_dec in target_names_dec {
let mut name_vec: Vec<char> = vec![];
for decr in t_name_dec {
let dec: u8 = *decr;
name_vec.push(dec as char);
}
let name_string: String = name_vec.into_iter().collect();
target_names.push(name_string);
}
target_names
}
pub fn u8_to_string(u: &[u8]) -> String {
String::from_utf8(u.to_vec()).unwrap()
}
pub fn dna_vec(u: &[u8]) -> Vec<char> {
let mut v: Vec<char> = Vec::with_capacity(u.len());
for cu in u.to_ascii_uppercase() {
let c = cu as char;
//assert!(c == 'A' || c == 'C' || c == 'G' || c == 'T' || c == 'N');
if c == 'A' || c == 'C' || c == 'G' || c == 'T' || c == 'N' {
v.push(c);
} else {
eprintln!("Warning: Unexpected base \"{}\" encountered. Replaced with \"N\".",
c);
v.push('N');
}
}
v
}
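The per-base rule in `dna_vec` can be restated as a small pure function. This condensed form (`sanitize_base` is illustrative, not in the original) makes the N-replacement behaviour easy to check:

```rust
fn sanitize_base(b: u8) -> char {
    let c = b.to_ascii_uppercase() as char;
    // anything outside the DNA alphabet collapses to 'N', matching dna_vec
    if matches!(c, 'A' | 'C' | 'G' | 'T' | 'N') { c } else { 'N' }
}
```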
pub fn get_whole_genome_intervals(bam_file: &String) -> Vec<GenomicInterval> {
let bam = bam::Reader::from_path(bam_file).unwrap();
let header_view = bam.header();
let target_names_dec: Vec<&[u8]> = header_view.target_names();
let mut intervals: Vec<GenomicInterval> = vec![];
for (tid, t_name_dec) in target_names_dec.iter().enumerate() {
let mut name_vec: Vec<char> = vec![];
for decr in t_name_dec.iter() {
let dec: u8 = *decr;
name_vec.push(dec as char);
}
let name_string: String = name_vec.into_iter().collect();
intervals.push(GenomicInterval{
tid: tid as u32,
chrom: name_string,
start_pos: 0,
end_pos: header_view.target_len(tid as u32).unwrap()-1
});
}
intervals
}
// Given a BAM file name and an optional genomic interval:
// if an interval was provided, return a vector holding that lone interval;
// otherwise (None), return a vector of GenomicIntervals covering the whole genome.
pub fn get_interval_lst(bam_file: &String, interval: &Option<GenomicInterval>) -> Vec<GenomicInterval> {
match interval {
&Some(ref iv) => {
vec![iv.clone()]
}
&None => {
get_whole_genome_intervals(bam_file)
}
}
}
// this is really ugly. TODO a less verbose implementation
pub fn parse_region_string(region_string: Option<&str>,
bamfile_name: &String)
-> Option<GenomicInterval> {
let bam = bam::Reader::from_path(bamfile_name).unwrap();
match region_string {
Some(r) if r.contains(":") && r.contains("-") => {
let split1: Vec<&str> = r.split(":").collect();
if split1.len() != 2 {
panic!("Invalid format for region. Please use <chrom> or <chrom:start-stop>");
}
let split2: Vec<&str> = split1[1].split("-").collect();
if split2.len() != 2 {
panic!("Invalid format for region. Please use <chrom> or <chrom:start-stop>");
}
let iv_chrom = split1[0].to_string();
let iv_start = split2[0].parse::<u32>().expect("Invalid position value specified in region string.");
let iv_end = split2[1].parse::<u32>().expect("Invalid position value specified in region string.");
let mut tid: u32 = 0;
for name in bam.header().target_names() {
if u8_to_string(name) == iv_chrom {
break;
}
tid += 1;
}
if tid as usize == bam.header().target_names().len() {
panic!("Chromosome name for region is not in BAM file.");
}
Some(GenomicInterval {
tid: tid,
chrom: iv_chrom,
start_pos: iv_start - 1,
end_pos: iv_end - 1,
})
}
Some(r) => {
let r_str = r.to_string();
let mut tid: u32 = 0;
for name in bam.header().target_names() {
if u8_to_string(name) == r_str {
break;
}
tid += 1;
}
if tid as usize == bam.header().target_names().len() {
panic!("Chromosome name for region is not in BAM file.");
}
let tlen = bam.header().target_len(tid).unwrap();
Some(GenomicInterval {
tid: tid,
chrom: r_str,
start_pos: 0,
end_pos: tlen - 1,
})
}
None => None,
}
}
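The grammar accepted above is `<chrom>` or `<chrom>:<start>-<stop>`, with 1-based inclusive coordinates converted to 0-based. A BAM-free sketch of just the string handling (hypothetical helper; the panics are replaced by `None` for illustration):

```rust
fn parse_region_parts(r: &str) -> Option<(String, Option<(u32, u32)>)> {
    match r.split_once(':') {
        Some((chrom, range)) => {
            let (start, stop) = range.split_once('-')?;
            let start: u32 = start.parse().ok()?;
            let stop: u32 = stop.parse().ok()?;
            // 1-based inclusive -> 0-based, as in parse_region_string
            Some((chrom.to_string(), Some((start - 1, stop - 1))))
        }
        None => Some((r.to_string(), None)),
    }
}
```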
pub fn count_mapped_reads(bam_file: &String,
fasta_file: &String,
interval: &Option<GenomicInterval>,
min_coverage: u32,
min_mapq: u8,
min_map_frac: f64,
mapped_count_mode: bool) {
let target_names = parse_target_names(&bam_file);
let mut fasta = fasta::IndexedReader::from_file(&fasta_file).unwrap();
// pileup over all covered sites
let mut ref_seq: Vec<char> = vec![];
let mut prev_tid = 4294967295; // sentinel: no reference sequence loaded yet
let a_str = "A".to_string();
let c_str = "C".to_string();
let g_str = "G".to_string();
let t_str = "T".to_string();
let interval_lst: Vec<GenomicInterval> = get_interval_lst(bam_file, interval);
let mut bam_ix = bam::IndexedReader::from_path(bam_file).unwrap();
let mut count = 0;
for iv in interval_lst {
bam_ix.fetch(iv.tid as u32, iv.start_pos as u32, iv.end_pos as u32 + 1).expect("Error seeking BAM file while extracting fragments.");
let bam_pileup = bam_ix.pileup();
for p in bam_pileup {
let pileup = p.unwrap();
let tid: usize = pileup.tid() as usize;
let chrom: String = target_names[tid].clone();
let pos0: usize = pileup.pos() as usize;
if chrom != iv.chrom ||
pos0 < iv.start_pos as usize ||
pos0 > iv.end_pos as usize {
continue;
}
if tid != prev_tid {
let mut ref_seq_u8: Vec<u8> = vec![];
fasta.read_all(&chrom, &mut ref_seq_u8).expect("Failed to read fasta sequence record.");
ref_seq = dna_vec(&ref_seq_u8);
}
let ref_base_str = (ref_seq[pileup.pos() as usize]).to_string();
if ref_base_str.contains("N") {
continue;
}
assert!(ref_base_str == a_str || ref_base_str == c_str || ref_base_str == g_str || ref_base_str == t_str);
let mut depth: usize = 0;
let mut well_mapped: usize = 0;
// pileup the bases for a single position and count number of each base
for alignment in pileup.alignments() {
let record = alignment.record();
// may be faster to implement this as bitwise operation on raw flag in the future?
if record.is_secondary() || record.is_quality_check_failed() ||
record.is_duplicate() || record.is_supplementary() {
continue;
}
depth += 1;
if record.is_unmapped() || record.mapq() < min_mapq {
continue;
}
well_mapped += 1;
}
let well_mapped_frac = well_mapped as f64 / depth as f64;
if mapped_count_mode {
if well_mapped >= min_coverage as usize {
count += 1;
}
} else {
if depth >= min_coverage as usize && well_mapped_frac >= min_map_frac {
count += 1;
}
}
prev_tid = tid;
}
}
println!("{}",count);
}
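The per-position decision in the loop above can be factored out as a pure function for clarity (`position_counts` is a hypothetical helper, not in the original; it mirrors the two counting modes exactly):

```rust
fn position_counts(depth: usize,
                   well_mapped: usize,
                   min_coverage: usize,
                   min_map_frac: f64,
                   mapped_count_mode: bool) -> bool {
    if mapped_count_mode {
        // count positions with at least min_coverage well-mapped reads
        well_mapped >= min_coverage
    } else {
        // count positions with at least min_coverage total reads, of which
        // at least min_map_frac are well-mapped
        let frac = well_mapped as f64 / depth as f64;
        depth >= min_coverage && frac >= min_map_frac
    }
}
```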
fn main() | .takes_value(true))
.arg(Arg::with_name("Chrom")
.short("C")
.long("chrom")
.value_name("string")
.help("Chromosome to limit analysis to.")
.display_order(30)
.takes_value(true))
.arg(Arg::with_name("Min coverage")
.short("c")
.long("min_cov")
.value_name("int")
.help("Minimum coverage (of reads passing filters) to consider position as a potential SNV.")
.display_order(40)
.required(true)
.default_value("0"))
.arg(Arg::with_name("Well-mapped fraction")
.short("f")
.long("map_frac")
.value_name("float")
.help("Minimum fraction of mapped reads with mapq >= MAPQ_CUTOFF.")
.display_order(50)
.required(true)
.default_value("0"))
.arg(Arg::with_name("Min mapq")
.short("q")
.long("min_mapq")
.value_name("int")
.help("Map quality cutoff (for calculating well-mapped fraction).")
.display_order(60)
.default_value("60"))
.arg(Arg::with_name("Mapped read count mode")
.short("m")
.long("mapped_count_mode")
.help("Ignore map fraction and use total mapped read count. \
Return the total number of positions with at least min_cov reads having mapq>=min_mapq. \
Default behavior is to return the number of positions with at least min_cov reads, where \
at least map_frac of them have mapq>=min_mapq")
.display_order(161))
.get_matches();
let bamfile_name = input_args.value_of("Input BAM").unwrap().to_string();
let fastafile_name = input_args.value_of("Input FASTA").unwrap().to_string();
let interval: Option<GenomicInterval> = parse_region_string(input_args.value_of("Chrom"),
&bamfile_name);
let min_mapq = input_args.value_of("Min mapq")
.unwrap()
.parse::<u8>()
.expect("Argument min_mapq must be an int!");
let min_cov = input_args.value_of("Min coverage")
.unwrap()
.parse::<u32>()
.expect("Argument min_cov must be an int!");
let min_map_frac = input_args.value_of("Well-mapped fraction")
.unwrap()
.parse::<f64>()
.expect("Argument map_frac must be a positive float!");
let mapped_count_mode: bool = match input_args.occurrences_of("Mapped read count mode") {
0 => {false},
1 => {true},
_ => {
panic!("mapped_count_mode specified multiple times");
}
};
count_mapped_reads(&bamfile_name,
&fastafile_name,
&interval,
min_cov,
min_mapq,
min_map_frac,
mapped_count_mode);
}
| {
let input_args = App::new("Map Counter")
.version("0.1")
.author("Peter Edge <[email protected]>")
.about("Given a bam, count the number of positions exceeding a given min coverage and \"well-mapped\" fraction.")
.arg(Arg::with_name("Input BAM")
.short("b")
.long("bam")
.value_name("BAM")
.help("sorted, indexed BAM file.")
.display_order(10)
.required(true)
.takes_value(true))
.arg(Arg::with_name("Input FASTA")
.short("r")
.long("ref")
.value_name("FASTA")
.help("indexed FASTA reference that the BAM file is aligned to")
.display_order(20)
.required(true) | identifier_body |
main.rs | extern crate clap;
extern crate rust_htslib;
extern crate bio;
use clap::{Arg, App};
use rust_htslib::bam;
use rust_htslib::prelude::*;
use bio::io::fasta;
#[derive(Clone)]
pub struct GenomicInterval {
pub tid: u32,
pub chrom: String,
// chromosome name
pub start_pos: u32,
// start of interval
pub end_pos: u32,
// end of interval (inclusive)
}
pub fn parse_target_names(bam_file: &String) -> Vec<String> {
let bam = bam::Reader::from_path(bam_file).unwrap();
let header_view = bam.header();
let target_names_dec: Vec<&[u8]> = header_view.target_names();
let mut target_names: Vec<String> = vec![];
for t_name_dec in target_names_dec {
let mut name_vec: Vec<char> = vec![];
for decr in t_name_dec {
let dec: u8 = *decr;
name_vec.push(dec as char);
}
let name_string: String = name_vec.into_iter().collect();
target_names.push(name_string);
}
target_names
}
pub fn u8_to_string(u: &[u8]) -> String {
String::from_utf8(u.to_vec()).unwrap()
}
pub fn dna_vec(u: &[u8]) -> Vec<char> {
let mut v: Vec<char> = Vec::with_capacity(u.len());
for cu in u.to_ascii_uppercase() {
let c = cu as char;
//assert!(c == 'A' || c == 'C' || c == 'G' || c == 'T' || c == 'N');
if c == 'A' || c == 'C' || c == 'G' || c == 'T' || c == 'N' {
v.push(c);
} else {
eprintln!("Warning: Unexpected base \"{}\" encountered. Replaced with \"N\".",
c);
v.push('N');
}
}
v
}
pub fn get_whole_genome_intervals(bam_file: &String) -> Vec<GenomicInterval> {
let bam = bam::Reader::from_path(bam_file).unwrap();
let header_view = bam.header();
let target_names_dec: Vec<&[u8]> = header_view.target_names();
let mut intervals: Vec<GenomicInterval> = vec![];
for (tid, t_name_dec) in target_names_dec.iter().enumerate() {
let mut name_vec: Vec<char> = vec![];
for decr in t_name_dec.iter() {
let dec: u8 = *decr;
name_vec.push(dec as char);
}
let name_string: String = name_vec.into_iter().collect();
intervals.push(GenomicInterval{
tid: tid as u32,
chrom: name_string,
start_pos: 0, | }
intervals
}
// Given a BAM file name and an optional genomic interval:
// if an interval was provided, return a vector holding that lone interval;
// otherwise (None), return a vector of GenomicIntervals covering the whole genome.
pub fn get_interval_lst(bam_file: &String, interval: &Option<GenomicInterval>) -> Vec<GenomicInterval> {
match interval {
&Some(ref iv) => {
vec![iv.clone()]
}
&None => {
get_whole_genome_intervals(bam_file)
}
}
}
// this is really ugly. TODO a less verbose implementation
pub fn parse_region_string(region_string: Option<&str>,
bamfile_name: &String)
-> Option<GenomicInterval> {
let bam = bam::Reader::from_path(bamfile_name).unwrap();
match region_string {
Some(r) if r.contains(":") && r.contains("-") => {
let split1: Vec<&str> = r.split(":").collect();
if split1.len() != 2 {
panic!("Invalid format for region. Please use <chrom> or <chrom:start-stop>");
}
let split2: Vec<&str> = split1[1].split("-").collect();
if split2.len() != 2 {
panic!("Invalid format for region. Please use <chrom> or <chrom:start-stop>");
}
let iv_chrom = split1[0].to_string();
let iv_start = split2[0].parse::<u32>().expect("Invalid position value specified in region string.");
let iv_end = split2[1].parse::<u32>().expect("Invalid position value specified in region string.");
let mut tid: u32 = 0;
for name in bam.header().target_names() {
if u8_to_string(name) == iv_chrom {
break;
}
tid += 1;
}
if tid as usize == bam.header().target_names().len() {
panic!("Chromosome name for region is not in BAM file.");
}
Some(GenomicInterval {
tid: tid,
chrom: iv_chrom,
start_pos: iv_start - 1,
end_pos: iv_end - 1,
})
}
Some(r) => {
let r_str = r.to_string();
let mut tid: u32 = 0;
for name in bam.header().target_names() {
if u8_to_string(name) == r_str {
break;
}
tid += 1;
}
if tid as usize == bam.header().target_names().len() {
panic!("Chromosome name for region is not in BAM file.");
}
let tlen = bam.header().target_len(tid).unwrap();
Some(GenomicInterval {
tid: tid,
chrom: r_str,
start_pos: 0,
end_pos: tlen - 1,
})
}
None => None,
}
}
pub fn count_mapped_reads(bam_file: &String,
fasta_file: &String,
interval: &Option<GenomicInterval>,
min_coverage: u32,
min_mapq: u8,
min_map_frac: f64,
mapped_count_mode: bool) {
let target_names = parse_target_names(&bam_file);
let mut fasta = fasta::IndexedReader::from_file(&fasta_file).unwrap();
// pileup over all covered sites
let mut ref_seq: Vec<char> = vec![];
let mut prev_tid = 4294967295; // sentinel: no reference sequence loaded yet
let a_str = "A".to_string();
let c_str = "C".to_string();
let g_str = "G".to_string();
let t_str = "T".to_string();
let interval_lst: Vec<GenomicInterval> = get_interval_lst(bam_file, interval);
let mut bam_ix = bam::IndexedReader::from_path(bam_file).unwrap();
let mut count = 0;
for iv in interval_lst {
bam_ix.fetch(iv.tid as u32, iv.start_pos as u32, iv.end_pos as u32 + 1).expect("Error seeking BAM file while extracting fragments.");
let bam_pileup = bam_ix.pileup();
for p in bam_pileup {
let pileup = p.unwrap();
let tid: usize = pileup.tid() as usize;
let chrom: String = target_names[tid].clone();
let pos0: usize = pileup.pos() as usize;
if chrom != iv.chrom ||
pos0 < iv.start_pos as usize ||
pos0 > iv.end_pos as usize {
continue;
}
if tid != prev_tid {
let mut ref_seq_u8: Vec<u8> = vec![];
fasta.read_all(&chrom, &mut ref_seq_u8).expect("Failed to read fasta sequence record.");
ref_seq = dna_vec(&ref_seq_u8);
}
let ref_base_str = (ref_seq[pileup.pos() as usize]).to_string();
if ref_base_str.contains("N") {
continue;
}
assert!(ref_base_str == a_str || ref_base_str == c_str || ref_base_str == g_str || ref_base_str == t_str);
let mut depth: usize = 0;
let mut well_mapped: usize = 0;
// pileup the bases for a single position and count number of each base
for alignment in pileup.alignments() {
let record = alignment.record();
// may be faster to implement this as bitwise operation on raw flag in the future?
if record.is_secondary() || record.is_quality_check_failed() ||
record.is_duplicate() || record.is_supplementary() {
continue;
}
depth += 1;
if record.is_unmapped() || record.mapq() < min_mapq {
continue;
}
well_mapped += 1;
}
let well_mapped_frac = well_mapped as f64 / depth as f64;
if mapped_count_mode {
if well_mapped >= min_coverage as usize {
count += 1;
}
} else {
if depth >= min_coverage as usize && well_mapped_frac >= min_map_frac {
count += 1;
}
}
prev_tid = tid;
}
}
println!("{}",count);
}
fn main() {
let input_args = App::new("Map Counter")
.version("0.1")
.author("Peter Edge <[email protected]>")
.about("Given a bam, count the number of positions exceeding a given min coverage and \"well-mapped\" fraction.")
.arg(Arg::with_name("Input BAM")
.short("b")
.long("bam")
.value_name("BAM")
.help("sorted, indexed BAM file.")
.display_order(10)
.required(true)
.takes_value(true))
.arg(Arg::with_name("Input FASTA")
.short("r")
.long("ref")
.value_name("FASTA")
.help("indexed FASTA reference that the BAM file is aligned to")
.display_order(20)
.required(true)
.takes_value(true))
.arg(Arg::with_name("Chrom")
.short("C")
.long("chrom")
.value_name("string")
.help("Chromosome to limit analysis to.")
.display_order(30)
.takes_value(true))
.arg(Arg::with_name("Min coverage")
.short("c")
.long("min_cov")
.value_name("int")
.help("Minimum coverage (of reads passing filters) to consider position as a potential SNV.")
.display_order(40)
.required(true)
.default_value("0"))
.arg(Arg::with_name("Well-mapped fraction")
.short("f")
.long("map_frac")
.value_name("float")
.help("Minimum fraction of mapped reads with mapq >= MAPQ_CUTOFF.")
.display_order(50)
.required(true)
.default_value("0"))
.arg(Arg::with_name("Min mapq")
.short("q")
.long("min_mapq")
.value_name("int")
.help("Map quality cutoff (for calculating well-mapped fraction).")
.display_order(60)
.default_value("60"))
.arg(Arg::with_name("Mapped read count mode")
.short("m")
.long("mapped_count_mode")
.help("Ignore map fraction and use total mapped read count. \
Return the total number of positions with at least min_cov reads having mapq>=min_mapq. \
Default behavior is to return the number of positions with at least min_cov reads, where \
at least map_frac of them have mapq>=min_mapq")
.display_order(161))
.get_matches();
let bamfile_name = input_args.value_of("Input BAM").unwrap().to_string();
let fastafile_name = input_args.value_of("Input FASTA").unwrap().to_string();
let interval: Option<GenomicInterval> = parse_region_string(input_args.value_of("Chrom"),
&bamfile_name);
let min_mapq = input_args.value_of("Min mapq")
.unwrap()
.parse::<u8>()
.expect("Argument min_mapq must be an int!");
let min_cov = input_args.value_of("Min coverage")
.unwrap()
.parse::<u32>()
.expect("Argument min_cov must be an int!");
let min_map_frac = input_args.value_of("Well-mapped fraction")
.unwrap()
.parse::<f64>()
.expect("Argument map_frac must be a positive float!");
let mapped_count_mode: bool = match input_args.occurrences_of("Mapped read count mode") {
0 => {false},
1 => {true},
_ => {
panic!("mapped_count_mode specified multiple times");
}
};
count_mapped_reads(&bamfile_name,
&fastafile_name,
&interval,
min_cov,
min_mapq,
min_map_frac,
mapped_count_mode);
} | end_pos: header_view.target_len(tid as u32).unwrap()-1
}); | random_line_split |
lib.rs | // Copyright (C) 2019 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0 or BSD-3-Clause
//! Virtio Vhost Backend Drivers
//!
//! Virtio devices use virtqueues to transport data efficiently. The first generation of virtqueue
//! is a set of three different single-producer, single-consumer ring structures designed to store
//! generic scatter-gather I/O. The virtio specification 1.1 introduces an alternative compact
//! virtqueue layout named "Packed Virtqueue", which is friendlier to the memory cache system and
//! to hardware-implemented virtio devices. The packed virtqueue uses read-write memory, meaning
//! the memory is both read and written by both host and guest. The new Packed Virtqueue is
//! preferred for performance.
//!
//! Vhost is a mechanism to improve performance of Virtio devices by delegating data plane operations
//! to dedicated IO service processes. Only the configuration, I/O submission notification, and I/O
//! completion interruption are piped through the hypervisor.
//! It uses the same virtqueue layout as Virtio to allow Vhost devices to be mapped directly to
//! Virtio devices. This allows a Vhost device to be accessed directly by a guest OS inside a
//! hypervisor process with an existing Virtio (PCI) driver.
//!
//! The initial vhost implementation is a part of the Linux kernel and uses ioctl interface to
//! communicate with userspace applications. Dedicated kernel worker threads are created to handle
//! IO requests from the guest.
//!
//! Later, the Vhost-user protocol was introduced to complement the ioctl interface used to control the
//! vhost implementation in the Linux kernel. It implements the control plane needed to establish
//! virtqueues sharing with a user space process on the same host. It uses communication over a
//! Unix domain socket to share file descriptors in the ancillary data of the message.
//! The protocol defines two sides of the communication, master and slave. Master is the application
//! that shares its virtqueues. Slave is the consumer of the virtqueues. Master and slave can be
//! either a client (i.e. connecting) or server (listening) in the socket communication.
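The master/slave exchange over a Unix domain socket described above can be sketched with a plain `UnixStream` pair. This is only an illustration of the transport idea (the byte framing here is made up; the real wire format is the `VhostUserMsgHeader`-based framing defined in the `message` module, and fd passing uses ancillary data). Unix-only:

```rust
use std::io::{Read, Write};
use std::os::unix::net::UnixStream;

fn demo() -> std::io::Result<(u32, u32)> {
    // master and slave ends of the control-plane connection
    let (mut master, mut slave) = UnixStream::pair()?;
    // master sends a 4-byte little-endian request code (illustrative value)
    master.write_all(&100u32.to_le_bytes())?;
    let mut buf = [0u8; 4];
    slave.read_exact(&mut buf)?;
    let req = u32::from_le_bytes(buf);
    // slave replies with an acknowledgement
    slave.write_all(&(req + 1).to_le_bytes())?;
    master.read_exact(&mut buf)?;
    Ok((req, u32::from_le_bytes(buf)))
}
```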
#![deny(missing_docs)]
use std::fs::File;
use std::io::Error as IOError;
use remain::sorted;
use thiserror::Error as ThisError;
mod backend;
pub use backend::*;
pub mod message;
pub mod connection;
mod sys;
pub use sys::SystemStream;
pub use sys::*;
cfg_if::cfg_if! {
if #[cfg(feature = "vmm")] {
pub(crate) mod master;
pub use self::master::{Master, VhostUserMaster};
mod master_req_handler;
pub use self::master_req_handler::{VhostUserMasterReqHandler,
VhostUserMasterReqHandlerMut};
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "device")] {
mod slave_req_handler;
mod slave_proxy;
pub use self::slave_req_handler::{
Protocol, SlaveReqHandler, SlaveReqHelper, VhostUserSlaveReqHandler,
VhostUserSlaveReqHandlerMut,
};
pub use self::slave_proxy::Slave;
}
}
cfg_if::cfg_if! {
if #[cfg(all(feature = "device", unix))] {
mod slave;
pub use self::slave::SlaveListener;
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "vmm")] {
pub use self::master_req_handler::MasterReqHandler;
}
}
/// Errors for vhost-user operations
#[sorted]
#[derive(Debug, ThisError)]
pub enum | {
/// client exited properly.
#[error("client exited properly")]
ClientExit,
/// client disconnected.
/// If connection is closed properly, use `ClientExit` instead.
#[error("client closed the connection")]
Disconnect,
/// Virtio/protocol features mismatch.
#[error("virtio features mismatch")]
FeatureMismatch,
/// Fd array in question is too big or too small
#[error("wrong number of attached fds")]
IncorrectFds,
/// Invalid message format, flag or content.
#[error("invalid message")]
InvalidMessage,
/// Unsupported operations due to that the protocol feature hasn't been negotiated.
#[error("invalid operation")]
InvalidOperation,
/// Invalid parameters.
#[error("invalid parameters")]
InvalidParam,
/// Failure from the master side.
#[error("master Internal error")]
MasterInternalError,
/// Message is too large
#[error("oversized message")]
OversizedMsg,
/// Only part of a message have been sent or received successfully
#[error("partial message")]
PartialMessage,
/// Provided recv buffer was too small, and data was dropped.
#[error("buffer for recv was too small, data was dropped: got size {got}, needed {want}")]
RecvBufferTooSmall {
/// The size of the buffer received.
got: usize,
/// The expected size of the buffer.
want: usize,
},
/// Error from request handler
#[error("handler failed to handle request: {0}")]
ReqHandlerError(IOError),
/// Failure from the slave side.
#[error("slave internal error")]
SlaveInternalError,
/// The socket is broken or has been closed.
#[error("socket is broken: {0}")]
SocketBroken(std::io::Error),
/// Can't connect to peer.
#[error("can't connect to peer: {0}")]
SocketConnect(std::io::Error),
/// Generic socket errors.
#[error("socket error: {0}")]
SocketError(std::io::Error),
/// Should retry the socket operation again.
#[error("temporary socket error: {0}")]
SocketRetry(std::io::Error),
/// Error from tx/rx on a Tube.
#[error("failed to read/write on Tube: {0}")]
TubeError(base::TubeError),
/// Error from VFIO device.
#[error("error occurred in VFIO device: {0}")]
VfioDeviceError(anyhow::Error),
}
impl From<base::TubeError> for Error {
fn from(err: base::TubeError) -> Self {
Error::TubeError(err)
}
}
impl From<std::io::Error> for Error {
fn from(err: std::io::Error) -> Self {
Error::SocketError(err)
}
}
impl From<base::Error> for Error {
/// Convert raw socket errors into meaningful vhost-user errors.
///
/// The base::Error is a simple wrapper over the raw errno, which doesn't mean
/// much to the vhost-user connection manager. So convert it into meaningful errors to simplify
/// the connection manager logic.
///
/// # Return:
/// * - Error::SocketRetry: temporary error caused by signals or short of resources.
/// * - Error::SocketBroken: the underline socket is broken.
/// * - Error::SocketError: other socket related errors.
#[allow(unreachable_patterns)] // EWOULDBLOCK equals EAGAIN on Linux
fn from(err: base::Error) -> Self {
match err.errno() {
// Retry:
// * EAGAIN, EWOULDBLOCK: The socket is marked nonblocking and the requested operation
// would block.
// * EINTR: A signal occurred before any data was transmitted
// * ENOBUFS: The output queue for a network interface was full. This generally
// indicates that the interface has stopped sending, but may be caused by transient
// congestion.
// * ENOMEM: No memory available.
libc::EAGAIN | libc::EWOULDBLOCK | libc::EINTR | libc::ENOBUFS | libc::ENOMEM => {
Error::SocketRetry(err.into())
}
// Broken:
// * ECONNRESET: Connection reset by peer.
// * EPIPE: The local end has been shut down on a connection oriented socket. In this
// case the process will also receive a SIGPIPE unless MSG_NOSIGNAL is set.
libc::ECONNRESET | libc::EPIPE => Error::SocketBroken(err.into()),
// Write permission is denied on the destination socket file, or search permission is
// denied for one of the directories the path prefix.
libc::EACCES => Error::SocketConnect(IOError::from_raw_os_error(libc::EACCES)),
// Catch all other errors
e => Error::SocketError(IOError::from_raw_os_error(e)),
}
}
}
/// Result of vhost-user operations
pub type Result<T> = std::result::Result<T, Error>;
/// Result of request handler.
pub type HandlerResult<T> = std::result::Result<T, IOError>;
/// Utility function to take the first element from option of a vector of files.
/// Returns `None` if the vector contains no file or more than one file.
pub(crate) fn take_single_file(files: Option<Vec<File>>) -> Option<File> {
let mut files = files?;
if files.len() != 1 {
return None;
}
Some(files.swap_remove(0))
}
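The contract of `take_single_file` is easier to see on a generic version (hypothetical, for illustration only — the real function is specialized to `File`):

```rust
fn take_single<T>(items: Option<Vec<T>>) -> Option<T> {
    // None in, zero elements, or more than one element -> None
    let mut items = items?;
    if items.len() != 1 {
        return None;
    }
    Some(items.swap_remove(0))
}
```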
#[cfg(all(test, feature = "device"))]
mod dummy_slave;
#[cfg(all(test, feature = "vmm", feature = "device"))]
mod tests {
use std::sync::Arc;
use std::sync::Barrier;
use std::sync::Mutex;
use std::thread;
use base::AsRawDescriptor;
use tempfile::tempfile;
use super::*;
use crate::backend::VhostBackend;
use crate::connection::tests::*;
use crate::dummy_slave::DummySlaveReqHandler;
use crate::dummy_slave::VIRTIO_FEATURES;
use crate::message::*;
use crate::VhostUserMemoryRegionInfo;
use crate::VringConfigData;
#[test]
fn create_dummy_slave() {
let slave = Mutex::new(DummySlaveReqHandler::new());
slave.set_owner().unwrap();
assert!(slave.set_owner().is_err());
}
#[test]
fn test_set_owner() {
let slave_be = Mutex::new(DummySlaveReqHandler::new());
let (master, mut slave) = create_master_slave_pair(slave_be);
assert!(!slave.as_ref().lock().unwrap().owned);
master.set_owner().unwrap();
slave.handle_request().unwrap();
assert!(slave.as_ref().lock().unwrap().owned);
master.set_owner().unwrap();
assert!(slave.handle_request().is_err());
assert!(slave.as_ref().lock().unwrap().owned);
}
#[test]
fn test_set_features() {
let mbar = Arc::new(Barrier::new(2));
let sbar = mbar.clone();
let slave_be = Mutex::new(DummySlaveReqHandler::new());
let (mut master, mut slave) = create_master_slave_pair(slave_be);
thread::spawn(move || {
slave.handle_request().unwrap();
assert!(slave.as_ref().lock().unwrap().owned);
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_features,
VIRTIO_FEATURES & !0x1
);
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_protocol_features,
VhostUserProtocolFeatures::all().bits()
);
sbar.wait();
});
master.set_owner().unwrap();
// set virtio features
let features = master.get_features().unwrap();
assert_eq!(features, VIRTIO_FEATURES);
master.set_features(VIRTIO_FEATURES & !0x1).unwrap();
// set vhost protocol features
let features = master.get_protocol_features().unwrap();
assert_eq!(features.bits(), VhostUserProtocolFeatures::all().bits());
master.set_protocol_features(features).unwrap();
mbar.wait();
}
#[test]
fn test_master_slave_process() {
let mbar = Arc::new(Barrier::new(2));
let sbar = mbar.clone();
let slave_be = Mutex::new(DummySlaveReqHandler::new());
let (mut master, mut slave) = create_master_slave_pair(slave_be);
thread::spawn(move || {
// set_own()
slave.handle_request().unwrap();
assert!(slave.as_ref().lock().unwrap().owned);
// get/set_features()
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_features,
VIRTIO_FEATURES & !0x1
);
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_protocol_features,
VhostUserProtocolFeatures::all().bits()
);
// get_inflight_fd()
slave.handle_request().unwrap();
// set_inflight_fd()
slave.handle_request().unwrap();
// get_queue_num()
slave.handle_request().unwrap();
// set_mem_table()
slave.handle_request().unwrap();
// get/set_config()
slave.handle_request().unwrap();
slave.handle_request().unwrap();
// set_slave_request_fd
slave.handle_request().unwrap();
// set_vring_enable
slave.handle_request().unwrap();
// set_log_base,set_log_fd()
slave.handle_request().unwrap_err();
slave.handle_request().unwrap_err();
// set_vring_xxx
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
// get_max_mem_slots()
slave.handle_request().unwrap();
// add_mem_region()
slave.handle_request().unwrap();
// remove_mem_region()
slave.handle_request().unwrap();
sbar.wait();
});
master.set_owner().unwrap();
// set virtio features
let features = master.get_features().unwrap();
assert_eq!(features, VIRTIO_FEATURES);
master.set_features(VIRTIO_FEATURES & !0x1).unwrap();
// set vhost protocol features
let features = master.get_protocol_features().unwrap();
assert_eq!(features.bits(), VhostUserProtocolFeatures::all().bits());
master.set_protocol_features(features).unwrap();
// Retrieve inflight I/O tracking information
let (inflight_info, inflight_file) = master
.get_inflight_fd(&VhostUserInflight {
num_queues: 2,
queue_size: 256,
..Default::default()
})
.unwrap();
// Set the buffer back to the backend
master
.set_inflight_fd(&inflight_info, inflight_file.as_raw_descriptor())
.unwrap();
let num = master.get_queue_num().unwrap();
assert_eq!(num, 2);
let event = base::Event::new().unwrap();
let mem = [VhostUserMemoryRegionInfo {
guest_phys_addr: 0,
memory_size: 0x10_0000,
userspace_addr: 0,
mmap_offset: 0,
mmap_handle: event.as_raw_descriptor(),
}];
master.set_mem_table(&mem).unwrap();
master
.set_config(0x100, VhostUserConfigFlags::WRITABLE, &[0xa5u8])
.unwrap();
// Copyright (C) 2019 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0 or BSD-3-Clause
//! Virtio Vhost Backend Drivers
//!
//! Virtio devices use virtqueues to transport data efficiently. The first generation of virtqueue
//! is a set of three different single-producer, single-consumer ring structures designed to store
//! generic scatter-gather I/O. The virtio 1.1 specification introduces an alternative compact
//! virtqueue layout named "Packed Virtqueue", which is friendlier to the memory cache system and
//! to hardware-implemented virtio devices. The packed virtqueue uses read-write memory, meaning
//! the memory is both read and written by host and guest. The new Packed Virtqueue is preferred
//! for performance.
//!
//! Vhost is a mechanism to improve the performance of Virtio devices by delegating data plane
//! operations to dedicated I/O service processes. Only configuration, I/O submission notification,
//! and I/O completion interruption are piped through the hypervisor.
//! It uses the same virtqueue layout as Virtio to allow Vhost devices to be mapped directly to
//! Virtio devices. This allows a Vhost device to be accessed directly by a guest OS inside a
//! hypervisor process with an existing Virtio (PCI) driver.
//!
//! The initial vhost implementation is part of the Linux kernel and uses an ioctl interface to
//! communicate with userspace applications. Dedicated kernel worker threads are created to handle
//! I/O requests from the guest.
//!
//! Later, the vhost-user protocol was introduced to complement the ioctl interface used to control
//! the vhost implementation in the Linux kernel. It implements the control plane needed to
//! establish virtqueue sharing with a user space process on the same host. It uses communication
//! over a Unix domain socket to share file descriptors in the ancillary data of the message.
//! The protocol defines two sides of the communication, master and slave. The master is the
//! application that shares its virtqueues; the slave is the consumer of the virtqueues. Master and
//! slave can each be either a client (i.e. connecting) or a server (listening) in the socket
//! communication.
#![deny(missing_docs)]
use std::fs::File;
use std::io::Error as IOError;
use remain::sorted;
use thiserror::Error as ThisError;
mod backend;
pub use backend::*;
pub mod message;
pub mod connection;
mod sys;
pub use sys::SystemStream;
pub use sys::*;
cfg_if::cfg_if! {
if #[cfg(feature = "vmm")] {
pub(crate) mod master;
pub use self::master::{Master, VhostUserMaster};
mod master_req_handler;
pub use self::master_req_handler::{VhostUserMasterReqHandler,
VhostUserMasterReqHandlerMut};
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "device")] {
mod slave_req_handler;
mod slave_proxy;
pub use self::slave_req_handler::{
Protocol, SlaveReqHandler, SlaveReqHelper, VhostUserSlaveReqHandler,
VhostUserSlaveReqHandlerMut,
};
pub use self::slave_proxy::Slave;
}
}
cfg_if::cfg_if! {
if #[cfg(all(feature = "device", unix))] {
mod slave;
pub use self::slave::SlaveListener;
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "vmm")] {
pub use self::master_req_handler::MasterReqHandler;
}
}
/// Errors for vhost-user operations
#[sorted]
#[derive(Debug, ThisError)]
pub enum Error {
/// Client exited properly.
#[error("client exited properly")]
ClientExit,
/// Client disconnected.
/// If the connection is closed properly, use `ClientExit` instead.
#[error("client closed the connection")]
Disconnect,
/// Virtio/protocol features mismatch.
#[error("virtio features mismatch")]
FeatureMismatch,
/// The fd array in question is too big or too small.
#[error("wrong number of attached fds")]
IncorrectFds,
/// Invalid message format, flag or content.
#[error("invalid message")]
InvalidMessage,
/// Unsupported operation because the required protocol feature hasn't been negotiated.
#[error("invalid operation")]
InvalidOperation,
/// Invalid parameters.
#[error("invalid parameters")]
InvalidParam,
/// Failure from the master side.
#[error("master internal error")]
MasterInternalError,
/// Message is too large
#[error("oversized message")]
OversizedMsg,
/// Only part of a message has been sent or received successfully.
#[error("partial message")]
PartialMessage,
/// Provided recv buffer was too small, and data was dropped.
#[error("buffer for recv was too small, data was dropped: got size {got}, needed {want}")]
RecvBufferTooSmall {
/// The size of the buffer received.
got: usize,
/// The expected size of the buffer.
want: usize,
},
/// Error from request handler
#[error("handler failed to handle request: {0}")]
ReqHandlerError(IOError),
/// Failure from the slave side.
#[error("slave internal error")]
SlaveInternalError,
/// The socket is broken or has been closed.
#[error("socket is broken: {0}")]
SocketBroken(std::io::Error),
/// Can't connect to peer.
#[error("can't connect to peer: {0}")]
SocketConnect(std::io::Error),
/// Generic socket errors.
#[error("socket error: {0}")]
SocketError(std::io::Error),
/// Should retry the socket operation again.
#[error("temporary socket error: {0}")]
SocketRetry(std::io::Error),
/// Error from tx/rx on a Tube.
#[error("failed to read/write on Tube: {0}")]
TubeError(base::TubeError),
/// Error from VFIO device.
#[error("error occurred in VFIO device: {0}")]
VfioDeviceError(anyhow::Error),
}
impl From<base::TubeError> for Error {
fn from(err: base::TubeError) -> Self {
Error::TubeError(err)
}
}
impl From<std::io::Error> for Error {
fn from(err: std::io::Error) -> Self {
Error::SocketError(err)
}
}
impl From<base::Error> for Error {
/// Convert raw socket errors into meaningful vhost-user errors.
///
/// `base::Error` is a simple wrapper over the raw errno, which doesn't mean much to the
/// vhost-user connection manager, so convert it into meaningful errors to simplify the
/// connection manager logic.
///
/// # Return:
/// * - Error::SocketRetry: temporary error caused by signals or a shortage of resources.
/// * - Error::SocketBroken: the underlying socket is broken.
/// * - Error::SocketError: other socket related errors.
#[allow(unreachable_patterns)] // EWOULDBLOCK equals EAGAIN on Linux
fn from(err: base::Error) -> Self {
match err.errno() {
// Retry:
// * EAGAIN, EWOULDBLOCK: The socket is marked nonblocking and the requested operation
// would block.
// * EINTR: A signal occurred before any data was transmitted.
// * ENOBUFS: The output queue for a network interface was full. This generally
// indicates that the interface has stopped sending, but may be caused by transient
// congestion.
// * ENOMEM: No memory available.
libc::EAGAIN | libc::EWOULDBLOCK | libc::EINTR | libc::ENOBUFS | libc::ENOMEM => {
Error::SocketRetry(err.into())
}
// Broken:
// * ECONNRESET: Connection reset by peer.
// * EPIPE: The local end has been shut down on a connection-oriented socket. In this
// case the process will also receive a SIGPIPE unless MSG_NOSIGNAL is set.
libc::ECONNRESET | libc::EPIPE => Error::SocketBroken(err.into()),
// Write permission is denied on the destination socket file, or search permission is
// denied for one of the directories in the path prefix.
libc::EACCES => Error::SocketConnect(IOError::from_raw_os_error(libc::EACCES)),
// Catch all other errors
e => Error::SocketError(IOError::from_raw_os_error(e)),
}
}
}
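The errno classification performed by this `From<base::Error>` impl can be sketched without the `base` and `libc` dependencies. In the sketch below, the errno constants are hard-coded common Linux values purely for illustration, and `Classified` is a hypothetical stand-in for this crate's `Error` variants, not part of the crate itself.

```rust
use std::io::Error as IOError;

// Common Linux errno values, hard-coded so the sketch is dependency-free.
const EAGAIN: i32 = 11;
const EINTR: i32 = 4;
const ECONNRESET: i32 = 104;
const EPIPE: i32 = 32;
const EACCES: i32 = 13;

// Stand-in for the crate's socket-related `Error` variants.
#[derive(Debug)]
enum Classified {
    Retry(IOError),   // transient; caller should retry the operation
    Broken(IOError),  // peer is gone; tear the connection down
    Connect(IOError), // permission problem while connecting
    Other(IOError),   // anything else
}

// Map a raw errno to a meaningful category, mirroring the impl above.
fn classify(errno: i32) -> Classified {
    let io = IOError::from_raw_os_error(errno);
    match errno {
        EAGAIN | EINTR => Classified::Retry(io),
        ECONNRESET | EPIPE => Classified::Broken(io),
        EACCES => Classified::Connect(io),
        _ => Classified::Other(io),
    }
}

fn main() {
    assert!(matches!(classify(EAGAIN), Classified::Retry(_)));
    assert!(matches!(classify(EPIPE), Classified::Broken(_)));
    assert!(matches!(classify(EACCES), Classified::Connect(_)));
    assert!(matches!(classify(2), Classified::Other(_))); // ENOENT
    println!("ok");
}
```

The point of the mapping is that callers only need to branch on the category (retry vs. tear down) rather than on individual errno values.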
/// Result of vhost-user operations
pub type Result<T> = std::result::Result<T, Error>;
/// Result of request handler.
pub type HandlerResult<T> = std::result::Result<T, IOError>;
/// Utility function to take the first element from option of a vector of files.
/// Returns `None` if the vector contains no file or more than one file.
pub(crate) fn take_single_file(files: Option<Vec<File>>) -> Option<File> {
let mut files = files?;
if files.len() != 1 {
return None;
}
Some(files.swap_remove(0))
}
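`take_single_file` is just the "exactly one element" pattern specialized to `File`; a generic, dependency-free version of the same logic looks like the sketch below (the name `take_single` is ours for illustration, not part of the crate).

```rust
// Generic version of take_single_file's logic: yield the sole element of an
// optional vector, or None when it is absent, empty, or has several elements.
fn take_single<T>(items: Option<Vec<T>>) -> Option<T> {
    let mut items = items?;
    if items.len() != 1 {
        return None;
    }
    // swap_remove(0) avoids shifting elements; with one element it is trivially O(1).
    Some(items.swap_remove(0))
}

fn main() {
    assert_eq!(take_single(Some(vec![42])), Some(42));
    assert_eq!(take_single::<i32>(Some(vec![])), None);
    assert_eq!(take_single(Some(vec![1, 2])), None);
    assert_eq!(take_single::<i32>(None), None);
    println!("ok");
}
```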
#[cfg(all(test, feature = "device"))]
mod dummy_slave;
#[cfg(all(test, feature = "vmm", feature = "device"))]
mod tests {
use std::sync::Arc;
use std::sync::Barrier;
use std::sync::Mutex;
use std::thread;
use base::AsRawDescriptor;
use tempfile::tempfile;
use super::*;
use crate::backend::VhostBackend;
use crate::connection::tests::*;
use crate::dummy_slave::DummySlaveReqHandler;
use crate::dummy_slave::VIRTIO_FEATURES;
use crate::message::*;
use crate::VhostUserMemoryRegionInfo;
use crate::VringConfigData;
#[test]
fn create_dummy_slave() {
let slave = Mutex::new(DummySlaveReqHandler::new());
slave.set_owner().unwrap();
assert!(slave.set_owner().is_err());
}
#[test]
fn test_set_owner() {
let slave_be = Mutex::new(DummySlaveReqHandler::new());
let (master, mut slave) = create_master_slave_pair(slave_be);
assert!(!slave.as_ref().lock().unwrap().owned);
master.set_owner().unwrap();
slave.handle_request().unwrap();
assert!(slave.as_ref().lock().unwrap().owned);
master.set_owner().unwrap();
assert!(slave.handle_request().is_err());
assert!(slave.as_ref().lock().unwrap().owned);
}
#[test]
fn test_set_features() {
let mbar = Arc::new(Barrier::new(2));
let sbar = mbar.clone();
let slave_be = Mutex::new(DummySlaveReqHandler::new());
let (mut master, mut slave) = create_master_slave_pair(slave_be);
thread::spawn(move || {
slave.handle_request().unwrap();
assert!(slave.as_ref().lock().unwrap().owned);
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_features,
VIRTIO_FEATURES & !0x1
);
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_protocol_features,
VhostUserProtocolFeatures::all().bits()
);
sbar.wait();
});
master.set_owner().unwrap();
// set virtio features
let features = master.get_features().unwrap();
assert_eq!(features, VIRTIO_FEATURES);
master.set_features(VIRTIO_FEATURES & !0x1).unwrap();
// set vhost protocol features
let features = master.get_protocol_features().unwrap();
assert_eq!(features.bits(), VhostUserProtocolFeatures::all().bits());
master.set_protocol_features(features).unwrap();
mbar.wait();
}
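The tests in this module coordinate the master thread and the spawned slave thread with a two-party `Barrier`, so neither side finishes before the other's assertions have run. Stripped of the vhost machinery, the rendezvous pattern is a minimal sketch like this:

```rust
use std::sync::{Arc, Barrier};
use std::thread;

// Minimal standalone sketch of the two-party Barrier handshake used by the
// tests in this file to keep master and slave threads in lock-step.
fn rendezvous() -> i32 {
    let mbar = Arc::new(Barrier::new(2));
    let sbar = mbar.clone();
    let slave = thread::spawn(move || {
        // ... the slave side would service requests here ...
        sbar.wait(); // signal: slave-side assertions are done
        42
    });
    mbar.wait(); // master blocks until the slave reaches its wait()
    slave.join().unwrap()
}

fn main() {
    assert_eq!(rendezvous(), 42);
    println!("ok");
}
```

Without the barrier, the test body could return (dropping the master end of the connection) while the slave thread is still mid-assertion.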
#[test]
fn test_master_slave_process() {
let mbar = Arc::new(Barrier::new(2));
let sbar = mbar.clone();
let slave_be = Mutex::new(DummySlaveReqHandler::new());
let (mut master, mut slave) = create_master_slave_pair(slave_be);
thread::spawn(move || {
// set_owner()
slave.handle_request().unwrap();
assert!(slave.as_ref().lock().unwrap().owned);
// get/set_features()
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_features,
VIRTIO_FEATURES & !0x1
);
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_protocol_features,
VhostUserProtocolFeatures::all().bits()
);
// get_inflight_fd()
slave.handle_request().unwrap();
// set_inflight_fd()
slave.handle_request().unwrap();
// get_queue_num()
slave.handle_request().unwrap();
// set_mem_table()
slave.handle_request().unwrap();
// get/set_config()
slave.handle_request().unwrap();
slave.handle_request().unwrap();
// set_slave_request_fd
slave.handle_request().unwrap();
// set_vring_enable
slave.handle_request().unwrap();
// set_log_base,set_log_fd()
slave.handle_request().unwrap_err();
slave.handle_request().unwrap_err();
// set_vring_xxx
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
// get_max_mem_slots()
slave.handle_request().unwrap();
// add_mem_region()
slave.handle_request().unwrap();
// remove_mem_region()
slave.handle_request().unwrap();
sbar.wait();
});
master.set_owner().unwrap();
// set virtio features
let features = master.get_features().unwrap();
assert_eq!(features, VIRTIO_FEATURES);
master.set_features(VIRTIO_FEATURES & !0x1).unwrap();
// set vhost protocol features
let features = master.get_protocol_features().unwrap();
assert_eq!(features.bits(), VhostUserProtocolFeatures::all().bits());
master.set_protocol_features(features).unwrap();
// Retrieve inflight I/O tracking information
let (inflight_info, inflight_file) = master
.get_inflight_fd(&VhostUserInflight {
num_queues: 2,
queue_size: 256,
..Default::default()
})
.unwrap();
// Set the buffer back to the backend
master
.set_inflight_fd(&inflight_info, inflight_file.as_raw_descriptor())
.unwrap();
let num = master.get_queue_num().unwrap();
assert_eq!(num, 2);
let event = base::Event::new().unwrap();
let mem = [VhostUserMemoryRegionInfo {
guest_phys_addr: 0,
memory_size: 0x10_0000,
userspace_addr: 0,
mmap_offset: 0,
mmap_handle: event.as_raw_descriptor(),
}];
master.set_mem_table(&mem).unwrap();
master
.set_config(0x100, VhostUserConfigFlags::WRITABLE, &[0xa5u8])
.unwrap();
let buf = [0x0u8; 4];
let (reply_body, reply_payload) = master
.get_config(0x100, 4, VhostUserConfigFlags::empty(), &buf)
.unwrap();
let offset = reply_body.offset;
assert_eq!(offset, 0x100);
assert_eq!(reply_payload[0], 0xa5);
#[cfg(windows)]
let tubes = base::Tube::pair().unwrap();
#[cfg(windows)]
// Safe because we will be importing the Tube in the other thread.
let descriptor =
unsafe { tube_transporter::packed_tube::pack(tubes.0, std::process::id()).unwrap() };
#[cfg(unix)]
let descriptor = base::Event::new().unwrap();
master.set_slave_request_fd(&descriptor).unwrap();
master.set_vring_enable(0, true).unwrap();
// unimplemented yet
master
.set_log_base(0, Some(event.as_raw_descriptor()))
.unwrap();
master.set_log_fd(event.as_raw_descriptor()).unwrap();
master.set_vring_num(0, 256).unwrap();
master.set_vring_base(0, 0).unwrap();
let config = VringConfigData {
queue_max_size: 256,
queue_size: 128,
flags: VhostUserVringAddrFlags::VHOST_VRING_F_LOG.bits(),
desc_table_addr: 0x1000,
used_ring_addr: 0x2000,
avail_ring_addr: 0x3000,
log_addr: Some(0x4000),
};
master.set_vring_addr(0, &config).unwrap();
master.set_vring_call(0, &event).unwrap();
master.set_vring_kick(0, &event).unwrap();
master.set_vring_err(0, &event).unwrap();
let max_mem_slots = master.get_max_mem_slots().unwrap();
assert_eq!(max_mem_slots, 32);
let region_file = tempfile().unwrap();
let region = VhostUserMemoryRegionInfo {
guest_phys_addr: 0x10_0000,
memory_size: 0x10_0000,
userspace_addr: 0,
mmap_offset: 0,
mmap_handle: region_file.as_raw_descriptor(),
};
master.add_mem_region(&region).unwrap();
master.remove_mem_region(&region).unwrap();
mbar.wait();
}
#[test]
fn test_error_display() {
assert_eq!(format!("{}", Error::InvalidParam), "invalid parameters");
assert_eq!(format!("{}", Error::InvalidOperation), "invalid operation");
}
#[test]
fn test_error_from_base_error() {
let e: Error = base::Error::new(libc::EAGAIN).into();
if let Error::SocketRetry(e1) = e {
assert_eq!(e1.raw_os_error().unwrap(), libc::EAGAIN);
} else {
panic!("invalid error code conversion!");
}
}
}
| {
match err.errno() {
// Retry:
// * EAGAIN, EWOULDBLOCK: The socket is marked nonblocking and the requested operation
// would block.
// * EINTR: A signal occurred before any data was transmitted
// * ENOBUFS: The output queue for a network interface was full. This generally
// indicates that the interface has stopped sending, but may be caused by transient
// congestion.
// * ENOMEM: No memory available.
libc::EAGAIN | libc::EWOULDBLOCK | libc::EINTR | libc::ENOBUFS | libc::ENOMEM => {
Error::SocketRetry(err.into())
}
// Broken:
// * ECONNRESET: Connection reset by peer.
// * EPIPE: The local end has been shut down on a connection oriented socket. In this
// case the process will also receive a SIGPIPE unless MSG_NOSIGNAL is set.
libc::ECONNRESET | libc::EPIPE => Error::SocketBroken(err.into()),
// Write permission is denied on the destination socket file, or search permission is
// denied for one of the directories the path prefix. | identifier_body |
lib.rs | // Copyright (C) 2019 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0 or BSD-3-Clause
//! Virtio Vhost Backend Drivers
//!
//! Virtio devices use virtqueues to transport data efficiently. The first generation of virtqueue
//! is a set of three different single-producer, single-consumer ring structures designed to store
//! generic scatter-gather I/O. The virtio specification 1.1 introduces an alternative compact
//! virtqueue layout named "Packed Virtqueue", which is more friendly to memory cache system and
//! hardware implemented virtio devices. The packed virtqueue uses read-write memory, that means
//! the memory will be both read and written by both host and guest. The new Packed Virtqueue is
//! preferred for performance.
//!
//! Vhost is a mechanism to improve performance of Virtio devices by delegate data plane operations
//! to dedicated IO service processes. Only the configuration, I/O submission notification, and I/O
//! completion interruption are piped through the hypervisor.
//! It uses the same virtqueue layout as Virtio to allow Vhost devices to be mapped directly to
//! Virtio devices. This allows a Vhost device to be accessed directly by a guest OS inside a
//! hypervisor process with an existing Virtio (PCI) driver.
//!
//! The initial vhost implementation is a part of the Linux kernel and uses ioctl interface to
//! communicate with userspace applications. Dedicated kernel worker threads are created to handle
//! IO requests from the guest.
//!
//! Later Vhost-user protocol is introduced to complement the ioctl interface used to control the
//! vhost implementation in the Linux kernel. It implements the control plane needed to establish
//! virtqueues sharing with a user space process on the same host. It uses communication over a
//! Unix domain socket to share file descriptors in the ancillary data of the message.
//! The protocol defines 2 sides of the communication, master and slave. Master is the application
//! that shares its virtqueues. Slave is the consumer of the virtqueues. Master and slave can be
//! either a client (i.e. connecting) or server (listening) in the socket communication.
#![deny(missing_docs)]
use std::fs::File;
use std::io::Error as IOError;
use remain::sorted;
use thiserror::Error as ThisError;
mod backend;
pub use backend::*;
pub mod message;
pub mod connection;
mod sys;
pub use sys::SystemStream;
pub use sys::*;
cfg_if::cfg_if! {
if #[cfg(feature = "vmm")] {
pub(crate) mod master;
pub use self::master::{Master, VhostUserMaster};
mod master_req_handler;
pub use self::master_req_handler::{VhostUserMasterReqHandler,
VhostUserMasterReqHandlerMut};
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "device")] {
mod slave_req_handler;
mod slave_proxy;
pub use self::slave_req_handler::{
Protocol, SlaveReqHandler, SlaveReqHelper, VhostUserSlaveReqHandler,
VhostUserSlaveReqHandlerMut,
};
pub use self::slave_proxy::Slave;
}
}
cfg_if::cfg_if! {
if #[cfg(all(feature = "device", unix))] {
mod slave;
pub use self::slave::SlaveListener;
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "vmm")] {
pub use self::master_req_handler::MasterReqHandler;
}
}
/// Errors for vhost-user operations
#[sorted]
#[derive(Debug, ThisError)]
pub enum Error {
/// client exited properly.
#[error("client exited properly")]
ClientExit,
/// client disconnected.
/// If connection is closed properly, use `ClientExit` instead.
#[error("client closed the connection")]
Disconnect,
/// Virtio/protocol features mismatch.
#[error("virtio features mismatch")]
FeatureMismatch,
/// Fd array in question is too big or too small
#[error("wrong number of attached fds")]
IncorrectFds,
/// Invalid message format, flag or content.
#[error("invalid message")]
InvalidMessage,
/// Unsupported operations due to that the protocol feature hasn't been negotiated.
#[error("invalid operation")]
InvalidOperation,
/// Invalid parameters.
#[error("invalid parameters")]
InvalidParam,
/// Failure from the master side.
#[error("master Internal error")]
MasterInternalError,
/// Message is too large
#[error("oversized message")]
OversizedMsg,
/// Only part of a message have been sent or received successfully
#[error("partial message")]
PartialMessage,
/// Provided recv buffer was too small, and data was dropped.
#[error("buffer for recv was too small, data was dropped: got size {got}, needed {want}")]
RecvBufferTooSmall {
/// The size of the buffer received.
got: usize,
/// The expected size of the buffer.
want: usize,
},
/// Error from request handler
#[error("handler failed to handle request: {0}")]
ReqHandlerError(IOError),
/// Failure from the slave side.
#[error("slave internal error")]
SlaveInternalError,
/// The socket is broken or has been closed.
#[error("socket is broken: {0}")]
SocketBroken(std::io::Error),
/// Can't connect to peer.
#[error("can't connect to peer: {0}")]
SocketConnect(std::io::Error),
/// Generic socket errors.
#[error("socket error: {0}")]
SocketError(std::io::Error),
/// Should retry the socket operation again.
#[error("temporary socket error: {0}")]
SocketRetry(std::io::Error),
/// Error from tx/rx on a Tube.
#[error("failed to read/write on Tube: {0}")]
TubeError(base::TubeError),
/// Error from VFIO device.
#[error("error occurred in VFIO device: {0}")]
VfioDeviceError(anyhow::Error),
}
impl From<base::TubeError> for Error {
fn from(err: base::TubeError) -> Self {
Error::TubeError(err)
}
}
impl From<std::io::Error> for Error {
fn from(err: std::io::Error) -> Self {
Error::SocketError(err)
}
}
impl From<base::Error> for Error {
/// Convert raw socket errors into meaningful vhost-user errors.
///
/// The base::Error is a simple wrapper over the raw errno, which doesn't means
/// much to the vhost-user connection manager. So convert it into meaningful errors to simplify
/// the connection manager logic.
///
/// # Return:
/// * - Error::SocketRetry: temporary error caused by signals or short of resources.
/// * - Error::SocketBroken: the underline socket is broken.
/// * - Error::SocketError: other socket related errors.
#[allow(unreachable_patterns)] // EWOULDBLOCK equals to EGAIN on linux
fn from(err: base::Error) -> Self {
match err.errno() {
// Retry:
// * EAGAIN, EWOULDBLOCK: The socket is marked nonblocking and the requested operation
// would block.
// * EINTR: A signal occurred before any data was transmitted
// * ENOBUFS: The output queue for a network interface was full. This generally
// indicates that the interface has stopped sending, but may be caused by transient
// congestion.
// * ENOMEM: No memory available.
libc::EAGAIN | libc::EWOULDBLOCK | libc::EINTR | libc::ENOBUFS | libc::ENOMEM => {
Error::SocketRetry(err.into())
}
// Broken:
// * ECONNRESET: Connection reset by peer.
// * EPIPE: The local end has been shut down on a connection oriented socket. In this
// case the process will also receive a SIGPIPE unless MSG_NOSIGNAL is set.
libc::ECONNRESET | libc::EPIPE => Error::SocketBroken(err.into()),
// Write permission is denied on the destination socket file, or search permission is
// denied for one of the directories the path prefix.
libc::EACCES => Error::SocketConnect(IOError::from_raw_os_error(libc::EACCES)),
// Catch all other errors
e => Error::SocketError(IOError::from_raw_os_error(e)),
}
}
}
/// Result of vhost-user operations
pub type Result<T> = std::result::Result<T, Error>;
/// Result of request handler.
pub type HandlerResult<T> = std::result::Result<T, IOError>;
/// Utility function to take the first element from option of a vector of files.
/// Returns `None` if the vector contains no file or more than one file.
pub(crate) fn take_single_file(files: Option<Vec<File>>) -> Option<File> {
let mut files = files?;
if files.len()!= 1 {
return None;
}
Some(files.swap_remove(0))
}
#[cfg(all(test, feature = "device"))]
mod dummy_slave;
#[cfg(all(test, feature = "vmm", feature = "device"))]
mod tests {
use std::sync::Arc;
use std::sync::Barrier;
use std::sync::Mutex;
use std::thread;
use base::AsRawDescriptor;
use tempfile::tempfile;
use super::*;
use crate::backend::VhostBackend;
use crate::connection::tests::*;
use crate::dummy_slave::DummySlaveReqHandler;
use crate::dummy_slave::VIRTIO_FEATURES;
use crate::message::*;
use crate::VhostUserMemoryRegionInfo;
use crate::VringConfigData;
#[test]
fn create_dummy_slave() {
let slave = Mutex::new(DummySlaveReqHandler::new());
slave.set_owner().unwrap();
assert!(slave.set_owner().is_err());
}
#[test]
fn test_set_owner() {
let slave_be = Mutex::new(DummySlaveReqHandler::new());
let (master, mut slave) = create_master_slave_pair(slave_be);
assert!(!slave.as_ref().lock().unwrap().owned);
master.set_owner().unwrap();
slave.handle_request().unwrap();
assert!(slave.as_ref().lock().unwrap().owned);
master.set_owner().unwrap();
assert!(slave.handle_request().is_err());
assert!(slave.as_ref().lock().unwrap().owned);
}
#[test]
fn test_set_features() {
let mbar = Arc::new(Barrier::new(2));
let sbar = mbar.clone();
let slave_be = Mutex::new(DummySlaveReqHandler::new());
let (mut master, mut slave) = create_master_slave_pair(slave_be);
thread::spawn(move || {
slave.handle_request().unwrap();
assert!(slave.as_ref().lock().unwrap().owned);
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_features,
VIRTIO_FEATURES &!0x1
);
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_protocol_features,
VhostUserProtocolFeatures::all().bits()
);
sbar.wait();
});
master.set_owner().unwrap();
// set virtio features
let features = master.get_features().unwrap();
assert_eq!(features, VIRTIO_FEATURES);
master.set_features(VIRTIO_FEATURES &!0x1).unwrap();
// set vhost protocol features
let features = master.get_protocol_features().unwrap();
assert_eq!(features.bits(), VhostUserProtocolFeatures::all().bits());
master.set_protocol_features(features).unwrap();
mbar.wait();
}
#[test]
fn test_master_slave_process() {
let mbar = Arc::new(Barrier::new(2));
let sbar = mbar.clone();
let slave_be = Mutex::new(DummySlaveReqHandler::new());
let (mut master, mut slave) = create_master_slave_pair(slave_be);
thread::spawn(move || {
// set_own()
slave.handle_request().unwrap();
assert!(slave.as_ref().lock().unwrap().owned);
// get/set_features()
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_features,
VIRTIO_FEATURES &!0x1
);
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_protocol_features,
VhostUserProtocolFeatures::all().bits()
);
// get_inflight_fd()
slave.handle_request().unwrap();
// set_inflight_fd()
slave.handle_request().unwrap();
// get_queue_num()
slave.handle_request().unwrap();
// set_mem_table()
slave.handle_request().unwrap();
// get/set_config()
slave.handle_request().unwrap();
slave.handle_request().unwrap();
// set_slave_request_fd
slave.handle_request().unwrap();
// set_vring_enable
slave.handle_request().unwrap();
// set_log_base,set_log_fd()
slave.handle_request().unwrap_err();
slave.handle_request().unwrap_err();
// set_vring_xxx
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
// get_max_mem_slots()
slave.handle_request().unwrap();
// add_mem_region()
slave.handle_request().unwrap();
// remove_mem_region()
slave.handle_request().unwrap();
sbar.wait();
});
master.set_owner().unwrap();
// set virtio features
let features = master.get_features().unwrap();
assert_eq!(features, VIRTIO_FEATURES);
master.set_features(VIRTIO_FEATURES &!0x1).unwrap();
// set vhost protocol features
let features = master.get_protocol_features().unwrap();
assert_eq!(features.bits(), VhostUserProtocolFeatures::all().bits());
master.set_protocol_features(features).unwrap();
// Retrieve inflight I/O tracking information
let (inflight_info, inflight_file) = master
.get_inflight_fd(&VhostUserInflight {
num_queues: 2,
queue_size: 256,
..Default::default()
})
.unwrap();
// Set the buffer back to the backend
master
.set_inflight_fd(&inflight_info, inflight_file.as_raw_descriptor())
.unwrap();
let num = master.get_queue_num().unwrap();
assert_eq!(num, 2);
let event = base::Event::new().unwrap();
let mem = [VhostUserMemoryRegionInfo {
guest_phys_addr: 0,
memory_size: 0x10_0000,
userspace_addr: 0,
mmap_offset: 0,
mmap_handle: event.as_raw_descriptor(),
}];
master.set_mem_table(&mem).unwrap();
master
.set_config(0x100, VhostUserConfigFlags::WRITABLE, &[0xa5u8])
.unwrap();
let buf = [0x0u8; 4];
let (reply_body, reply_payload) = master
.get_config(0x100, 4, VhostUserConfigFlags::empty(), &buf)
.unwrap();
let offset = reply_body.offset;
assert_eq!(offset, 0x100);
assert_eq!(reply_payload[0], 0xa5);
#[cfg(windows)]
let tubes = base::Tube::pair().unwrap();
#[cfg(windows)]
// Safe because we will be importing the Tube in the other thread.
let descriptor =
unsafe { tube_transporter::packed_tube::pack(tubes.0, std::process::id()).unwrap() };
#[cfg(unix)]
let descriptor = base::Event::new().unwrap();
master.set_slave_request_fd(&descriptor).unwrap();
master.set_vring_enable(0, true).unwrap();
// unimplemented yet
master
.set_log_base(0, Some(event.as_raw_descriptor()))
.unwrap();
master.set_log_fd(event.as_raw_descriptor()).unwrap();
master.set_vring_num(0, 256).unwrap();
master.set_vring_base(0, 0).unwrap();
let config = VringConfigData {
queue_max_size: 256,
queue_size: 128,
flags: VhostUserVringAddrFlags::VHOST_VRING_F_LOG.bits(),
desc_table_addr: 0x1000,
used_ring_addr: 0x2000,
avail_ring_addr: 0x3000,
log_addr: Some(0x4000),
};
master.set_vring_addr(0, &config).unwrap();
master.set_vring_call(0, &event).unwrap();
master.set_vring_kick(0, &event).unwrap();
master.set_vring_err(0, &event).unwrap();
let max_mem_slots = master.get_max_mem_slots().unwrap();
assert_eq!(max_mem_slots, 32);
let region_file = tempfile().unwrap();
let region = VhostUserMemoryRegionInfo {
guest_phys_addr: 0x10_0000,
memory_size: 0x10_0000,
userspace_addr: 0,
mmap_offset: 0,
mmap_handle: region_file.as_raw_descriptor(),
};
master.add_mem_region(®ion).unwrap();
master.remove_mem_region(®ion).unwrap();
mbar.wait();
}
#[test]
fn test_error_display() {
assert_eq!(format!("{}", Error::InvalidParam), "invalid parameters");
assert_eq!(format!("{}", Error::InvalidOperation), "invalid operation");
}
#[test]
fn test_error_from_base_error() {
let e: Error = base::Error::new(libc::EAGAIN).into();
if let Error::SocketRetry(e1) = e {
assert_eq!(e1.raw_os_error().unwrap(), libc::EAGAIN);
} else |
}
}
| {
panic!("invalid error code conversion!");
} | conditional_block |
lib.rs | // Copyright (C) 2019 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0 or BSD-3-Clause
//! Virtio Vhost Backend Drivers
//!
//! Virtio devices use virtqueues to transport data efficiently. The first generation of virtqueue
//! is a set of three different single-producer, single-consumer ring structures designed to store
//! generic scatter-gather I/O. The virtio specification 1.1 introduces an alternative compact
//! virtqueue layout named "Packed Virtqueue", which is more friendly to memory cache system and
//! hardware implemented virtio devices. The packed virtqueue uses read-write memory, that means
//! the memory will be both read and written by both host and guest. The new Packed Virtqueue is
//! preferred for performance.
//!
//! Vhost is a mechanism to improve performance of Virtio devices by delegate data plane operations
//! to dedicated IO service processes. Only the configuration, I/O submission notification, and I/O
//! completion interruption are piped through the hypervisor.
//! It uses the same virtqueue layout as Virtio to allow Vhost devices to be mapped directly to
//! Virtio devices. This allows a Vhost device to be accessed directly by a guest OS inside a
//! hypervisor process with an existing Virtio (PCI) driver.
//!
//! The initial vhost implementation is a part of the Linux kernel and uses ioctl interface to
//! communicate with userspace applications. Dedicated kernel worker threads are created to handle
//! IO requests from the guest.
//!
//! Later Vhost-user protocol is introduced to complement the ioctl interface used to control the | //! virtqueues sharing with a user space process on the same host. It uses communication over a
//! Unix domain socket to share file descriptors in the ancillary data of the message.
//! The protocol defines 2 sides of the communication, master and slave. Master is the application
//! that shares its virtqueues. Slave is the consumer of the virtqueues. Master and slave can be
//! either a client (i.e. connecting) or server (listening) in the socket communication.
#![deny(missing_docs)]
use std::fs::File;
use std::io::Error as IOError;
use remain::sorted;
use thiserror::Error as ThisError;
mod backend;
pub use backend::*;
pub mod message;
pub mod connection;
mod sys;
pub use sys::SystemStream;
pub use sys::*;
cfg_if::cfg_if! {
if #[cfg(feature = "vmm")] {
pub(crate) mod master;
pub use self::master::{Master, VhostUserMaster};
mod master_req_handler;
pub use self::master_req_handler::{VhostUserMasterReqHandler,
VhostUserMasterReqHandlerMut};
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "device")] {
mod slave_req_handler;
mod slave_proxy;
pub use self::slave_req_handler::{
Protocol, SlaveReqHandler, SlaveReqHelper, VhostUserSlaveReqHandler,
VhostUserSlaveReqHandlerMut,
};
pub use self::slave_proxy::Slave;
}
}
cfg_if::cfg_if! {
if #[cfg(all(feature = "device", unix))] {
mod slave;
pub use self::slave::SlaveListener;
}
}
cfg_if::cfg_if! {
if #[cfg(feature = "vmm")] {
pub use self::master_req_handler::MasterReqHandler;
}
}
/// Errors for vhost-user operations
#[sorted]
#[derive(Debug, ThisError)]
pub enum Error {
/// client exited properly.
#[error("client exited properly")]
ClientExit,
/// client disconnected.
/// If connection is closed properly, use `ClientExit` instead.
#[error("client closed the connection")]
Disconnect,
/// Virtio/protocol features mismatch.
#[error("virtio features mismatch")]
FeatureMismatch,
/// Fd array in question is too big or too small
#[error("wrong number of attached fds")]
IncorrectFds,
/// Invalid message format, flag or content.
#[error("invalid message")]
InvalidMessage,
    /// Unsupported operation because the required protocol feature hasn't been negotiated.
#[error("invalid operation")]
InvalidOperation,
/// Invalid parameters.
#[error("invalid parameters")]
InvalidParam,
/// Failure from the master side.
    #[error("master internal error")]
MasterInternalError,
/// Message is too large
#[error("oversized message")]
OversizedMsg,
    /// Only part of a message has been sent or received successfully
#[error("partial message")]
PartialMessage,
/// Provided recv buffer was too small, and data was dropped.
#[error("buffer for recv was too small, data was dropped: got size {got}, needed {want}")]
RecvBufferTooSmall {
/// The size of the buffer received.
got: usize,
/// The expected size of the buffer.
want: usize,
},
/// Error from request handler
#[error("handler failed to handle request: {0}")]
ReqHandlerError(IOError),
/// Failure from the slave side.
#[error("slave internal error")]
SlaveInternalError,
/// The socket is broken or has been closed.
#[error("socket is broken: {0}")]
SocketBroken(std::io::Error),
/// Can't connect to peer.
#[error("can't connect to peer: {0}")]
SocketConnect(std::io::Error),
/// Generic socket errors.
#[error("socket error: {0}")]
SocketError(std::io::Error),
/// Should retry the socket operation again.
#[error("temporary socket error: {0}")]
SocketRetry(std::io::Error),
/// Error from tx/rx on a Tube.
#[error("failed to read/write on Tube: {0}")]
TubeError(base::TubeError),
/// Error from VFIO device.
#[error("error occurred in VFIO device: {0}")]
VfioDeviceError(anyhow::Error),
}
impl From<base::TubeError> for Error {
fn from(err: base::TubeError) -> Self {
Error::TubeError(err)
}
}
impl From<std::io::Error> for Error {
fn from(err: std::io::Error) -> Self {
Error::SocketError(err)
}
}
impl From<base::Error> for Error {
/// Convert raw socket errors into meaningful vhost-user errors.
///
    /// The base::Error is a simple wrapper over the raw errno, which doesn't mean
/// much to the vhost-user connection manager. So convert it into meaningful errors to simplify
/// the connection manager logic.
///
/// # Return:
/// * - Error::SocketRetry: temporary error caused by signals or short of resources.
    /// * - Error::SocketBroken: the underlying socket is broken.
/// * - Error::SocketError: other socket related errors.
    #[allow(unreachable_patterns)] // EWOULDBLOCK equals EAGAIN on Linux
fn from(err: base::Error) -> Self {
match err.errno() {
// Retry:
// * EAGAIN, EWOULDBLOCK: The socket is marked nonblocking and the requested operation
// would block.
// * EINTR: A signal occurred before any data was transmitted
// * ENOBUFS: The output queue for a network interface was full. This generally
// indicates that the interface has stopped sending, but may be caused by transient
// congestion.
// * ENOMEM: No memory available.
libc::EAGAIN | libc::EWOULDBLOCK | libc::EINTR | libc::ENOBUFS | libc::ENOMEM => {
Error::SocketRetry(err.into())
}
// Broken:
// * ECONNRESET: Connection reset by peer.
// * EPIPE: The local end has been shut down on a connection oriented socket. In this
// case the process will also receive a SIGPIPE unless MSG_NOSIGNAL is set.
libc::ECONNRESET | libc::EPIPE => Error::SocketBroken(err.into()),
// Write permission is denied on the destination socket file, or search permission is
// denied for one of the directories the path prefix.
libc::EACCES => Error::SocketConnect(IOError::from_raw_os_error(libc::EACCES)),
// Catch all other errors
e => Error::SocketError(IOError::from_raw_os_error(e)),
}
}
}
/// Result of vhost-user operations
pub type Result<T> = std::result::Result<T, Error>;
/// Result of request handler.
pub type HandlerResult<T> = std::result::Result<T, IOError>;
/// Utility function to take the first element from option of a vector of files.
/// Returns `None` if the vector contains no file or more than one file.
pub(crate) fn take_single_file(files: Option<Vec<File>>) -> Option<File> {
let mut files = files?;
    if files.len() != 1 {
return None;
}
Some(files.swap_remove(0))
}
#[cfg(all(test, feature = "device"))]
mod dummy_slave;
#[cfg(all(test, feature = "vmm", feature = "device"))]
mod tests {
use std::sync::Arc;
use std::sync::Barrier;
use std::sync::Mutex;
use std::thread;
use base::AsRawDescriptor;
use tempfile::tempfile;
use super::*;
use crate::backend::VhostBackend;
use crate::connection::tests::*;
use crate::dummy_slave::DummySlaveReqHandler;
use crate::dummy_slave::VIRTIO_FEATURES;
use crate::message::*;
use crate::VhostUserMemoryRegionInfo;
use crate::VringConfigData;
#[test]
fn create_dummy_slave() {
let slave = Mutex::new(DummySlaveReqHandler::new());
slave.set_owner().unwrap();
assert!(slave.set_owner().is_err());
}
#[test]
fn test_set_owner() {
let slave_be = Mutex::new(DummySlaveReqHandler::new());
let (master, mut slave) = create_master_slave_pair(slave_be);
assert!(!slave.as_ref().lock().unwrap().owned);
master.set_owner().unwrap();
slave.handle_request().unwrap();
assert!(slave.as_ref().lock().unwrap().owned);
master.set_owner().unwrap();
assert!(slave.handle_request().is_err());
assert!(slave.as_ref().lock().unwrap().owned);
}
#[test]
fn test_set_features() {
let mbar = Arc::new(Barrier::new(2));
let sbar = mbar.clone();
let slave_be = Mutex::new(DummySlaveReqHandler::new());
let (mut master, mut slave) = create_master_slave_pair(slave_be);
thread::spawn(move || {
slave.handle_request().unwrap();
assert!(slave.as_ref().lock().unwrap().owned);
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_features,
                VIRTIO_FEATURES & !0x1
);
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_protocol_features,
VhostUserProtocolFeatures::all().bits()
);
sbar.wait();
});
master.set_owner().unwrap();
// set virtio features
let features = master.get_features().unwrap();
assert_eq!(features, VIRTIO_FEATURES);
        master.set_features(VIRTIO_FEATURES & !0x1).unwrap();
// set vhost protocol features
let features = master.get_protocol_features().unwrap();
assert_eq!(features.bits(), VhostUserProtocolFeatures::all().bits());
master.set_protocol_features(features).unwrap();
mbar.wait();
}
#[test]
fn test_master_slave_process() {
let mbar = Arc::new(Barrier::new(2));
let sbar = mbar.clone();
let slave_be = Mutex::new(DummySlaveReqHandler::new());
let (mut master, mut slave) = create_master_slave_pair(slave_be);
thread::spawn(move || {
// set_own()
slave.handle_request().unwrap();
assert!(slave.as_ref().lock().unwrap().owned);
// get/set_features()
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_features,
                VIRTIO_FEATURES & !0x1
);
slave.handle_request().unwrap();
slave.handle_request().unwrap();
assert_eq!(
slave.as_ref().lock().unwrap().acked_protocol_features,
VhostUserProtocolFeatures::all().bits()
);
// get_inflight_fd()
slave.handle_request().unwrap();
// set_inflight_fd()
slave.handle_request().unwrap();
// get_queue_num()
slave.handle_request().unwrap();
// set_mem_table()
slave.handle_request().unwrap();
// get/set_config()
slave.handle_request().unwrap();
slave.handle_request().unwrap();
// set_slave_request_fd
slave.handle_request().unwrap();
// set_vring_enable
slave.handle_request().unwrap();
// set_log_base,set_log_fd()
slave.handle_request().unwrap_err();
slave.handle_request().unwrap_err();
// set_vring_xxx
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
slave.handle_request().unwrap();
// get_max_mem_slots()
slave.handle_request().unwrap();
// add_mem_region()
slave.handle_request().unwrap();
// remove_mem_region()
slave.handle_request().unwrap();
sbar.wait();
});
master.set_owner().unwrap();
// set virtio features
let features = master.get_features().unwrap();
assert_eq!(features, VIRTIO_FEATURES);
        master.set_features(VIRTIO_FEATURES & !0x1).unwrap();
// set vhost protocol features
let features = master.get_protocol_features().unwrap();
assert_eq!(features.bits(), VhostUserProtocolFeatures::all().bits());
master.set_protocol_features(features).unwrap();
// Retrieve inflight I/O tracking information
let (inflight_info, inflight_file) = master
.get_inflight_fd(&VhostUserInflight {
num_queues: 2,
queue_size: 256,
..Default::default()
})
.unwrap();
// Set the buffer back to the backend
master
.set_inflight_fd(&inflight_info, inflight_file.as_raw_descriptor())
.unwrap();
let num = master.get_queue_num().unwrap();
assert_eq!(num, 2);
let event = base::Event::new().unwrap();
let mem = [VhostUserMemoryRegionInfo {
guest_phys_addr: 0,
memory_size: 0x10_0000,
userspace_addr: 0,
mmap_offset: 0,
mmap_handle: event.as_raw_descriptor(),
}];
master.set_mem_table(&mem).unwrap();
master
.set_config(0x100, VhostUserConfigFlags::WRITABLE, &[0xa5u8])
.unwrap();
let buf = [0x0u8; 4];
let (reply_body, reply_payload) = master
.get_config(0x100, 4, VhostUserConfigFlags::empty(), &buf)
.unwrap();
let offset = reply_body.offset;
assert_eq!(offset, 0x100);
assert_eq!(reply_payload[0], 0xa5);
#[cfg(windows)]
let tubes = base::Tube::pair().unwrap();
#[cfg(windows)]
// Safe because we will be importing the Tube in the other thread.
let descriptor =
unsafe { tube_transporter::packed_tube::pack(tubes.0, std::process::id()).unwrap() };
#[cfg(unix)]
let descriptor = base::Event::new().unwrap();
master.set_slave_request_fd(&descriptor).unwrap();
master.set_vring_enable(0, true).unwrap();
// unimplemented yet
master
.set_log_base(0, Some(event.as_raw_descriptor()))
.unwrap();
master.set_log_fd(event.as_raw_descriptor()).unwrap();
master.set_vring_num(0, 256).unwrap();
master.set_vring_base(0, 0).unwrap();
let config = VringConfigData {
queue_max_size: 256,
queue_size: 128,
flags: VhostUserVringAddrFlags::VHOST_VRING_F_LOG.bits(),
desc_table_addr: 0x1000,
used_ring_addr: 0x2000,
avail_ring_addr: 0x3000,
log_addr: Some(0x4000),
};
master.set_vring_addr(0, &config).unwrap();
master.set_vring_call(0, &event).unwrap();
master.set_vring_kick(0, &event).unwrap();
master.set_vring_err(0, &event).unwrap();
let max_mem_slots = master.get_max_mem_slots().unwrap();
assert_eq!(max_mem_slots, 32);
let region_file = tempfile().unwrap();
let region = VhostUserMemoryRegionInfo {
guest_phys_addr: 0x10_0000,
memory_size: 0x10_0000,
userspace_addr: 0,
mmap_offset: 0,
mmap_handle: region_file.as_raw_descriptor(),
};
        master.add_mem_region(&region).unwrap();
        master.remove_mem_region(&region).unwrap();
mbar.wait();
}
#[test]
fn test_error_display() {
assert_eq!(format!("{}", Error::InvalidParam), "invalid parameters");
assert_eq!(format!("{}", Error::InvalidOperation), "invalid operation");
}
#[test]
fn test_error_from_base_error() {
let e: Error = base::Error::new(libc::EAGAIN).into();
if let Error::SocketRetry(e1) = e {
assert_eq!(e1.raw_os_error().unwrap(), libc::EAGAIN);
} else {
panic!("invalid error code conversion!");
}
}
}
main.rs | //! CLI tool to use the functions provided by the [wasmstandalone](../wasmstandalone/index.html)
//! crate.
//!
//! Reads Wasm binary files (one Wasm module per file) and translates the functions' code to Cretonne
//! IL. Can also execute the `start` function of the module by laying out the memories, globals
//! and tables, then emitting the translated code with hardcoded addresses to memory.
extern crate cton_wasm;
extern crate cton_native;
extern crate wasmstandalone_runtime;
extern crate wasmstandalone_execute;
extern crate wasmparser;
extern crate cretonne;
extern crate wasmtext;
extern crate docopt;
#[macro_use]
extern crate serde_derive;
extern crate term;
extern crate tempdir;
use cton_wasm::{translate_module, TranslationResult};
use wasmstandalone_execute::{compile_module, execute};
use std::path::PathBuf;
use wasmparser::{Parser, ParserState, WasmDecoder, SectionCode};
use wasmtext::Writer;
use cretonne::loop_analysis::LoopAnalysis;
use cretonne::flowgraph::ControlFlowGraph;
use cretonne::dominator_tree::DominatorTree;
use cretonne::Context;
use cretonne::result::CtonError;
use cretonne::ir;
use cretonne::ir::entities::AnyEntity;
use cretonne::isa::TargetIsa;
use cretonne::verifier;
use cretonne::settings;
use std::fs::File;
use std::error::Error;
use std::io;
use std::io::stdout;
use std::io::prelude::*;
use docopt::Docopt;
use std::path::Path;
use std::process::{exit, Command};
use tempdir::TempDir;
use cretonne::settings::Configurable;
macro_rules! vprintln {
($x: expr, $($tts:tt)*) => {
if $x {
println!($($tts)*);
}
}
}
macro_rules! vprint {
($x: expr, $($tts:tt)*) => {
if $x {
print!($($tts)*);
}
}
}
const USAGE: &str = "
Wasm to Cretonne IL translation utility.
Takes a binary WebAssembly module and returns its functions in Cretonne IL format.
The translation is dependent on the runtime chosen.
The default is a dummy runtime that produces placeholder values.
Usage:
wasmstandalone [-vcop] <file>...
wasmstandalone -e [-mvcop] <file>...
wasmstandalone --help | --version
Options:
-v, --verbose displays info on the different steps
-p, --print displays the module and translated functions
    -c, --check checks the correctness of the translated functions
-o, --optimize runs optimization passes on the translated functions
    -e, --execute enables the standalone runtime and executes the start function of the module
-m, --memory interactive memory inspector after execution
-h, --help print this help message
--version print the Cretonne version
";
#[derive(Deserialize, Debug, Clone)]
struct Args {
arg_file: Vec<String>,
flag_verbose: bool,
flag_execute: bool,
flag_memory: bool,
flag_check: bool,
flag_optimize: bool,
flag_print: bool,
}
fn read_to_end(path: PathBuf) -> Result<Vec<u8>, io::Error> {
let mut buf: Vec<u8> = Vec::new();
let mut file = File::open(path)?;
file.read_to_end(&mut buf)?;
Ok(buf)
}
fn main() {
let args: Args = Docopt::new(USAGE)
.and_then(|d| {
d.help(true)
.version(Some(String::from("0.0.0")))
.deserialize()
})
.unwrap_or_else(|e| e.exit());
let mut terminal = term::stdout().unwrap();
let (mut flag_builder, isa_builder) = cton_native::builders().unwrap_or_else(|_| {
panic!("host machine is not a supported target");
});
// Enable verifier passes in debug mode.
if cfg!(debug_assertions) {
flag_builder.enable("enable_verifier").unwrap();
}
let isa = isa_builder.finish(settings::Flags::new(&flag_builder));
for filename in &args.arg_file {
let path = Path::new(&filename);
let name = path.as_os_str().to_string_lossy();
match handle_module(&args, path.to_path_buf(), &name, &*isa) {
Ok(()) => {}
Err(message) => {
terminal.fg(term::color::RED).unwrap();
println!("error");
terminal.reset().unwrap();
println!("{}", message);
exit(1);
}
}
}
}
fn handle_module(args: &Args, path: PathBuf, name: &str, isa: &TargetIsa) -> Result<(), String> {
let mut terminal = term::stdout().unwrap();
terminal.fg(term::color::YELLOW).unwrap();
vprint!(args.flag_verbose, "Handling: ");
terminal.reset().unwrap();
vprintln!(args.flag_verbose, "\"{}\"", name);
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Translating...");
terminal.reset().unwrap();
let mut data = read_to_end(path.clone()).map_err(|err| {
String::from(err.description())
})?;
    if !data.starts_with(&[b'\0', b'a', b's', b'm']) {
let tmp_dir = TempDir::new("cretonne-wasm").unwrap();
let file_path = tmp_dir.path().join("module.wasm");
File::create(file_path.clone()).unwrap();
Command::new("wat2wasm")
.arg(path.clone())
.arg("-o")
.arg(file_path.to_str().unwrap())
.output()
.or_else(|e| if let io::ErrorKind::NotFound = e.kind() {
return Err(String::from("wat2wasm not found"));
} else {
return Err(String::from(e.description()));
})?;
data = read_to_end(file_path).map_err(
|err| String::from(err.description()),
)?;
}
let mut runtime = wasmstandalone_runtime::Runtime::with_flags(isa.flags().clone());
let translation = {
match translate_module(&data, &mut runtime) {
Ok(x) => x,
Err(string) => {
return Err(string);
}
}
};
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, " ok");
terminal.reset().unwrap();
if args.flag_check {
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Checking... ");
terminal.reset().unwrap();
for func in &translation.functions {
verifier::verify_function(func, isa).map_err(|err| {
pretty_verifier_error(func, Some(isa), &err)
})?;
}
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, " ok");
terminal.reset().unwrap();
}
if args.flag_print {
let mut writer1 = stdout();
let mut writer2 = stdout();
match pretty_print_translation(name, &data, &translation, &mut writer1, &mut writer2, isa) {
Err(error) => return Err(String::from(error.description())),
Ok(()) => (),
}
}
if args.flag_optimize {
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Optimizing... ");
terminal.reset().unwrap();
for func in &translation.functions {
let mut loop_analysis = LoopAnalysis::new();
let mut cfg = ControlFlowGraph::new();
cfg.compute(func);
let mut domtree = DominatorTree::new();
domtree.compute(func, &cfg);
loop_analysis.compute(func, &cfg, &domtree);
let mut context = Context::new();
context.func = func.clone(); // TODO: Avoid this clone.
context.cfg = cfg;
context.domtree = domtree;
context.loop_analysis = loop_analysis;
match verifier::verify_context(&context.func, &context.cfg, &context.domtree, isa) {
Ok(()) => (),
Err(ref err) => {
return Err(pretty_verifier_error(&context.func, Some(isa), err));
}
};
match context.licm(isa) {
                Ok(()) => (),
Err(error) => {
match error {
CtonError::Verifier(ref err) => {
return Err(pretty_verifier_error(&context.func, Some(isa), err));
}
CtonError::InvalidInput |
CtonError::ImplLimitExceeded |
CtonError::CodeTooLarge => return Err(String::from(error.description())),
}
}
};
match verifier::verify_context(&context.func, &context.cfg, &context.domtree, isa) {
Ok(()) => (),
Err(ref err) => return Err(pretty_verifier_error(&context.func, Some(isa), err)),
}
}
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, " ok");
terminal.reset().unwrap();
}
if args.flag_execute {
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Compiling... ");
terminal.reset().unwrap();
match compile_module(&translation, isa, &runtime) {
Ok(ref exec) => {
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, "ok");
terminal.reset().unwrap();
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Executing... ");
terminal.reset().unwrap();
match execute(exec) {
Ok(()) => {
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, "ok");
terminal.reset().unwrap();
}
Err(s) => {
return Err(s);
}
}
}
Err(s) => {
return Err(s);
}
};
if args.flag_memory {
let mut input = String::new();
terminal.fg(term::color::YELLOW).unwrap();
println!("Inspecting memory");
terminal.fg(term::color::MAGENTA).unwrap();
println!("Type 'quit' to exit.");
terminal.reset().unwrap();
loop {
input.clear();
terminal.fg(term::color::YELLOW).unwrap();
print!("Memory index, offset, length (e.g. 0,0,4): ");
terminal.reset().unwrap();
let _ = stdout().flush();
match io::stdin().read_line(&mut input) {
Ok(_) => {
input.pop();
if input == "quit" {
break;
}
let split: Vec<&str> = input.split(',').collect();
                        if split.len() != 3 {
break;
}
let memory = runtime.inspect_memory(
str::parse(split[0]).unwrap(),
str::parse(split[1]).unwrap(),
str::parse(split[2]).unwrap(),
);
let mut s = memory.iter().fold(String::from("#"), |mut acc, byte| {
acc.push_str(format!("{:02x}_", byte).as_str());
acc
});
s.pop();
println!("{}", s);
}
Err(error) => return Err(String::from(error.description())),
}
}
}
}
Ok(())
}
// Prints out a Wasm module, and for each function the corresponding translation in Cretonne IL.
fn pretty_print_translation(
filename: &str,
data: &[u8],
translation: &TranslationResult,
writer_wat: &mut Write,
writer_cretonne: &mut Write,
isa: &TargetIsa,
) -> Result<(), io::Error> {
let mut terminal = term::stdout().unwrap();
let mut parser = Parser::new(data);
let mut parser_writer = Writer::new(writer_wat);
match parser.read() {
        s @ &ParserState::BeginWasm { .. } => parser_writer.write(s)?,
_ => panic!("modules should begin properly"),
}
loop {
match parser.read() {
            s @ &ParserState::BeginSection { code: SectionCode::Code, .. } => {
// The code section begins
parser_writer.write(s)?;
break;
}
&ParserState::EndWasm => return Ok(()),
s => parser_writer.write(s)?,
}
}
let mut function_index = 0;
loop {
match parser.read() {
            s @ &ParserState::BeginFunctionBody { .. } => {
terminal.fg(term::color::BLUE).unwrap();
write!(
writer_cretonne,
"====== Function No. {} of module \"{}\" ======\n",
function_index,
filename
)?;
terminal.fg(term::color::CYAN).unwrap();
write!(writer_cretonne, "Wast ---------->\n")?;
terminal.reset().unwrap();
parser_writer.write(s)?;
}
s @ &ParserState::EndSection => {
parser_writer.write(s)?;
break;
}
_ => panic!("wrong content in code section"),
}
loop {
match parser.read() {
s @ &ParserState::EndFunctionBody => {
parser_writer.write(s)?;
break;
}
s => {
parser_writer.write(s)?;
}
};
}
let mut function_string =
format!(" {}", translation.functions[function_index].display(isa));
function_string.pop();
let function_str = str::replace(function_string.as_str(), "\n", "\n ");
terminal.fg(term::color::CYAN).unwrap();
write!(writer_cretonne, "Cretonne IL --->\n")?;
terminal.reset().unwrap();
write!(writer_cretonne, "{}\n", function_str)?;
function_index += 1;
}
loop {
match parser.read() {
&ParserState::EndWasm => return Ok(()),
s => parser_writer.write(s)?,
}
}
}
/// Pretty-print a verifier error.
pub fn pretty_verifier_error(
func: &ir::Function,
isa: Option<&TargetIsa>,
err: &verifier::Error,
) -> String {
let msg = err.to_string();
let str1 = match err.location {
AnyEntity::Inst(inst) => {
format!(
"{}\n{}: {}\n\n",
msg,
inst,
func.dfg.display_inst(inst, isa)
)
}
        _ => format!("{}\n", msg),
};
format!("{}{}", str1, func.display(isa))
}
| handle_module | identifier_name |
main.rs | //! CLI tool to use the functions provided by the [wasmstandalone](../wasmstandalone/index.html)
//! crate.
//!
//! Reads Wasm binary files (one Wasm module per file), translates the functions' code to Cretonne
//! IL. Can also executes the `start` function of the module by laying out the memories, globals
//! and tables, then emitting the translated code with hardcoded addresses to memory.
extern crate cton_wasm;
extern crate cton_native;
extern crate wasmstandalone_runtime;
extern crate wasmstandalone_execute;
extern crate wasmparser;
extern crate cretonne;
extern crate wasmtext;
extern crate docopt;
#[macro_use]
extern crate serde_derive;
extern crate term;
extern crate tempdir;
use cton_wasm::{translate_module, TranslationResult};
use wasmstandalone_execute::{compile_module, execute};
use std::path::PathBuf;
use wasmparser::{Parser, ParserState, WasmDecoder, SectionCode};
use wasmtext::Writer;
use cretonne::loop_analysis::LoopAnalysis;
use cretonne::flowgraph::ControlFlowGraph;
use cretonne::dominator_tree::DominatorTree;
use cretonne::Context;
use cretonne::result::CtonError;
use cretonne::ir;
use cretonne::ir::entities::AnyEntity;
use cretonne::isa::TargetIsa;
use cretonne::verifier;
use cretonne::settings;
use std::fs::File;
use std::error::Error;
use std::io;
use std::io::stdout;
use std::io::prelude::*;
use docopt::Docopt;
use std::path::Path;
use std::process::{exit, Command};
use tempdir::TempDir;
use cretonne::settings::Configurable;
macro_rules! vprintln {
($x: expr, $($tts:tt)*) => {
if $x {
println!($($tts)*);
}
}
}
macro_rules! vprint {
($x: expr, $($tts:tt)*) => {
if $x {
print!($($tts)*);
}
}
}
const USAGE: &str = "
Wasm to Cretonne IL translation utility.
Takes a binary WebAssembly module and returns its functions in Cretonne IL format.
The translation is dependent on the runtime chosen.
The default is a dummy runtime that produces placeholder values.
Usage:
wasmstandalone [-vcop] <file>...
wasmstandalone -e [-mvcop] <file>...
wasmstandalone --help | --version
Options:
-v, --verbose displays info on the different steps
-p, --print displays the module and translated functions
-c, --check checks the corectness of the translated functions
-o, --optimize runs optimization passes on the translated functions
-e, --execute enable the standalone runtime and executes the start function of the module
-m, --memory interactive memory inspector after execution
-h, --help print this help message
--version print the Cretonne version
";
#[derive(Deserialize, Debug, Clone)]
struct Args {
arg_file: Vec<String>,
flag_verbose: bool,
flag_execute: bool,
flag_memory: bool,
flag_check: bool,
flag_optimize: bool,
flag_print: bool,
}
fn read_to_end(path: PathBuf) -> Result<Vec<u8>, io::Error> {
let mut buf: Vec<u8> = Vec::new();
let mut file = File::open(path)?;
file.read_to_end(&mut buf)?;
Ok(buf)
}
fn main() {
let args: Args = Docopt::new(USAGE)
.and_then(|d| {
d.help(true)
.version(Some(String::from("0.0.0")))
.deserialize()
})
.unwrap_or_else(|e| e.exit());
let mut terminal = term::stdout().unwrap();
let (mut flag_builder, isa_builder) = cton_native::builders().unwrap_or_else(|_| {
panic!("host machine is not a supported target");
});
// Enable verifier passes in debug mode.
if cfg!(debug_assertions) {
flag_builder.enable("enable_verifier").unwrap();
}
let isa = isa_builder.finish(settings::Flags::new(&flag_builder));
for filename in &args.arg_file {
let path = Path::new(&filename);
let name = path.as_os_str().to_string_lossy();
match handle_module(&args, path.to_path_buf(), &name, &*isa) {
Ok(()) => {}
Err(message) => {
terminal.fg(term::color::RED).unwrap();
println!("error");
terminal.reset().unwrap();
println!("{}", message);
exit(1);
}
}
}
}
fn handle_module(args: &Args, path: PathBuf, name: &str, isa: &TargetIsa) -> Result<(), String> {
let mut terminal = term::stdout().unwrap();
terminal.fg(term::color::YELLOW).unwrap();
vprint!(args.flag_verbose, "Handling: ");
terminal.reset().unwrap();
vprintln!(args.flag_verbose, "\"{}\"", name);
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Translating...");
terminal.reset().unwrap();
let mut data = read_to_end(path.clone()).map_err(|err| {
String::from(err.description())
})?;
if!data.starts_with(&[b'\0', b'a', b's', b'm']) {
let tmp_dir = TempDir::new("cretonne-wasm").unwrap();
let file_path = tmp_dir.path().join("module.wasm");
File::create(file_path.clone()).unwrap();
Command::new("wat2wasm")
.arg(path.clone())
.arg("-o")
.arg(file_path.to_str().unwrap())
.output()
.or_else(|e| if let io::ErrorKind::NotFound = e.kind() {
return Err(String::from("wat2wasm not found"));
} else {
return Err(String::from(e.description()));
})?;
data = read_to_end(file_path).map_err(
|err| String::from(err.description()),
)?;
}
let mut runtime = wasmstandalone_runtime::Runtime::with_flags(isa.flags().clone());
let translation = {
match translate_module(&data, &mut runtime) {
Ok(x) => x,
Err(string) => {
return Err(string);
}
}
};
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, " ok");
terminal.reset().unwrap();
if args.flag_check {
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Checking... ");
terminal.reset().unwrap();
for func in &translation.functions {
verifier::verify_function(func, isa).map_err(|err| {
pretty_verifier_error(func, Some(isa), &err)
})?;
}
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, " ok");
terminal.reset().unwrap();
}
if args.flag_print {
let mut writer1 = stdout();
let mut writer2 = stdout();
match pretty_print_translation(name, &data, &translation, &mut writer1, &mut writer2, isa) {
Err(error) => return Err(String::from(error.description())),
Ok(()) => (),
}
}
if args.flag_optimize {
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Optimizing... ");
terminal.reset().unwrap();
for func in &translation.functions {
let mut loop_analysis = LoopAnalysis::new();
let mut cfg = ControlFlowGraph::new();
cfg.compute(func);
let mut domtree = DominatorTree::new();
domtree.compute(func, &cfg);
loop_analysis.compute(func, &cfg, &domtree);
let mut context = Context::new();
context.func = func.clone(); // TODO: Avoid this clone.
context.cfg = cfg;
context.domtree = domtree;
context.loop_analysis = loop_analysis;
match verifier::verify_context(&context.func, &context.cfg, &context.domtree, isa) {
Ok(()) => (),
Err(ref err) => {
return Err(pretty_verifier_error(&context.func, Some(isa), err));
}
};
match context.licm(isa) {
Ok(())=> (),
Err(error) => {
match error {
CtonError::Verifier(ref err) => {
return Err(pretty_verifier_error(&context.func, Some(isa), err));
}
CtonError::InvalidInput |
CtonError::ImplLimitExceeded |
CtonError::CodeTooLarge => return Err(String::from(error.description())),
}
}
};
match verifier::verify_context(&context.func, &context.cfg, &context.domtree, isa) {
Ok(()) => (),
Err(ref err) => return Err(pretty_verifier_error(&context.func, Some(isa), err)),
}
}
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, " ok");
terminal.reset().unwrap();
}
if args.flag_execute {
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Compiling... ");
terminal.reset().unwrap();
match compile_module(&translation, isa, &runtime) {
Ok(ref exec) => {
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, "ok");
terminal.reset().unwrap();
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Executing... ");
terminal.reset().unwrap();
match execute(exec) {
Ok(()) => {
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, "ok");
terminal.reset().unwrap();
}
Err(s) => {
return Err(s);
}
}
}
Err(s) => {
return Err(s);
}
};
if args.flag_memory {
let mut input = String::new();
terminal.fg(term::color::YELLOW).unwrap();
println!("Inspecting memory");
terminal.fg(term::color::MAGENTA).unwrap();
println!("Type 'quit' to exit.");
terminal.reset().unwrap();
loop {
input.clear();
terminal.fg(term::color::YELLOW).unwrap();
print!("Memory index, offset, length (e.g. 0,0,4): ");
terminal.reset().unwrap();
let _ = stdout().flush();
match io::stdin().read_line(&mut input) {
Ok(_) => {
input.pop();
if input == "quit" {
break;
}
let split: Vec<&str> = input.split(',').collect();
if split.len()!= 3 {
break;
}
let memory = runtime.inspect_memory(
str::parse(split[0]).unwrap(),
str::parse(split[1]).unwrap(),
str::parse(split[2]).unwrap(),
);
let mut s = memory.iter().fold(String::from("#"), |mut acc, byte| {
acc.push_str(format!("{:02x}_", byte).as_str());
acc
});
s.pop();
println!("{}", s);
}
Err(error) => return Err(String::from(error.description())),
}
}
}
}
Ok(())
}
// Prints out a Wasm module, and for each function the corresponding translation in Cretonne IL.
fn pretty_print_translation(
filename: &str,
data: &[u8],
translation: &TranslationResult,
writer_wat: &mut Write,
writer_cretonne: &mut Write,
isa: &TargetIsa,
) -> Result<(), io::Error> {
let mut terminal = term::stdout().unwrap();
let mut parser = Parser::new(data);
let mut parser_writer = Writer::new(writer_wat);
match parser.read() {
s @ &ParserState::BeginWasm { .. } => parser_writer.write(s)?,
_ => panic!("modules should begin properly"),
}
loop {
match parser.read() {
s @ &ParserState::BeginSection { code: SectionCode::Code, .. } => {
// The code section begins
parser_writer.write(s)?;
break;
}
&ParserState::EndWasm => return Ok(()),
s => parser_writer.write(s)?,
}
}
let mut function_index = 0;
loop {
match parser.read() {
s @ &ParserState::BeginFunctionBody { .. } => {
terminal.fg(term::color::BLUE).unwrap();
write!(
writer_cretonne,
"====== Function No. {} of module \"{}\" ======\n",
function_index,
filename
)?;
terminal.fg(term::color::CYAN).unwrap();
write!(writer_cretonne, "Wast ---------->\n")?;
terminal.reset().unwrap();
parser_writer.write(s)?;
}
s @ &ParserState::EndSection => {
parser_writer.write(s)?;
break;
}
_ => panic!("wrong content in code section"),
}
loop {
match parser.read() {
s @ &ParserState::EndFunctionBody => {
parser_writer.write(s)?;
break;
}
s => {
parser_writer.write(s)?;
}
};
}
let mut function_string =
format!(" {}", translation.functions[function_index].display(isa));
function_string.pop();
let function_str = str::replace(function_string.as_str(), "\n", "\n ");
terminal.fg(term::color::CYAN).unwrap();
write!(writer_cretonne, "Cretonne IL --->\n")?;
terminal.reset().unwrap();
write!(writer_cretonne, "{}\n", function_str)?;
function_index += 1;
}
loop {
match parser.read() {
&ParserState::EndWasm => return Ok(()),
s => parser_writer.write(s)?,
}
}
}
/// Pretty-print a verifier error.
pub fn pretty_verifier_error(
func: &ir::Function,
isa: Option<&TargetIsa>,
err: &verifier::Error,
) -> String | {
let msg = err.to_string();
let str1 = match err.location {
AnyEntity::Inst(inst) => {
format!(
"{}\n{}: {}\n\n",
msg,
inst,
func.dfg.display_inst(inst, isa)
)
}
_ => format!("{}\n", msg),
};
format!("{}{}", str1, func.display(isa))
} | identifier_body |
|
main.rs | //! CLI tool to use the functions provided by the [wasmstandalone](../wasmstandalone/index.html)
//! crate.
//!
//! Reads Wasm binary files (one Wasm module per file), translates the functions' code to Cretonne
//! IL. Can also execute the `start` function of the module by laying out the memories, globals
//! and tables, then emitting the translated code with hardcoded addresses to memory.
extern crate cton_wasm;
extern crate cton_native;
extern crate wasmstandalone_runtime;
extern crate wasmstandalone_execute;
extern crate wasmparser;
extern crate cretonne;
extern crate wasmtext;
extern crate docopt;
#[macro_use]
extern crate serde_derive;
extern crate term;
extern crate tempdir;
use cton_wasm::{translate_module, TranslationResult};
use wasmstandalone_execute::{compile_module, execute};
use std::path::PathBuf;
use wasmparser::{Parser, ParserState, WasmDecoder, SectionCode};
use wasmtext::Writer;
use cretonne::loop_analysis::LoopAnalysis; | use cretonne::ir;
use cretonne::ir::entities::AnyEntity;
use cretonne::isa::TargetIsa;
use cretonne::verifier;
use cretonne::settings;
use std::fs::File;
use std::error::Error;
use std::io;
use std::io::stdout;
use std::io::prelude::*;
use docopt::Docopt;
use std::path::Path;
use std::process::{exit, Command};
use tempdir::TempDir;
use cretonne::settings::Configurable;
macro_rules! vprintln {
($x: expr, $($tts:tt)*) => {
if $x {
println!($($tts)*);
}
}
}
macro_rules! vprint {
($x: expr, $($tts:tt)*) => {
if $x {
print!($($tts)*);
}
}
}
const USAGE: &str = "
Wasm to Cretonne IL translation utility.
Takes a binary WebAssembly module and returns its functions in Cretonne IL format.
The translation is dependent on the runtime chosen.
The default is a dummy runtime that produces placeholder values.
Usage:
wasmstandalone [-vcop] <file>...
wasmstandalone -e [-mvcop] <file>...
wasmstandalone --help | --version
Options:
-v, --verbose displays info on the different steps
-p, --print displays the module and translated functions
-c, --check checks the correctness of the translated functions
-o, --optimize runs optimization passes on the translated functions
-e, --execute enable the standalone runtime and executes the start function of the module
-m, --memory interactive memory inspector after execution
-h, --help print this help message
--version print the Cretonne version
";
#[derive(Deserialize, Debug, Clone)]
struct Args {
arg_file: Vec<String>,
flag_verbose: bool,
flag_execute: bool,
flag_memory: bool,
flag_check: bool,
flag_optimize: bool,
flag_print: bool,
}
fn read_to_end(path: PathBuf) -> Result<Vec<u8>, io::Error> {
let mut buf: Vec<u8> = Vec::new();
let mut file = File::open(path)?;
file.read_to_end(&mut buf)?;
Ok(buf)
}
fn main() {
let args: Args = Docopt::new(USAGE)
.and_then(|d| {
d.help(true)
.version(Some(String::from("0.0.0")))
.deserialize()
})
.unwrap_or_else(|e| e.exit());
let mut terminal = term::stdout().unwrap();
let (mut flag_builder, isa_builder) = cton_native::builders().unwrap_or_else(|_| {
panic!("host machine is not a supported target");
});
// Enable verifier passes in debug mode.
if cfg!(debug_assertions) {
flag_builder.enable("enable_verifier").unwrap();
}
let isa = isa_builder.finish(settings::Flags::new(&flag_builder));
for filename in &args.arg_file {
let path = Path::new(&filename);
let name = path.as_os_str().to_string_lossy();
match handle_module(&args, path.to_path_buf(), &name, &*isa) {
Ok(()) => {}
Err(message) => {
terminal.fg(term::color::RED).unwrap();
println!("error");
terminal.reset().unwrap();
println!("{}", message);
exit(1);
}
}
}
}
fn handle_module(args: &Args, path: PathBuf, name: &str, isa: &TargetIsa) -> Result<(), String> {
let mut terminal = term::stdout().unwrap();
terminal.fg(term::color::YELLOW).unwrap();
vprint!(args.flag_verbose, "Handling: ");
terminal.reset().unwrap();
vprintln!(args.flag_verbose, "\"{}\"", name);
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Translating...");
terminal.reset().unwrap();
let mut data = read_to_end(path.clone()).map_err(|err| {
String::from(err.description())
})?;
if !data.starts_with(&[b'\0', b'a', b's', b'm']) {
let tmp_dir = TempDir::new("cretonne-wasm").unwrap();
let file_path = tmp_dir.path().join("module.wasm");
File::create(file_path.clone()).unwrap();
Command::new("wat2wasm")
.arg(path.clone())
.arg("-o")
.arg(file_path.to_str().unwrap())
.output()
.or_else(|e| if let io::ErrorKind::NotFound = e.kind() {
return Err(String::from("wat2wasm not found"));
} else {
return Err(String::from(e.description()));
})?;
data = read_to_end(file_path).map_err(
|err| String::from(err.description()),
)?;
}
let mut runtime = wasmstandalone_runtime::Runtime::with_flags(isa.flags().clone());
let translation = {
match translate_module(&data, &mut runtime) {
Ok(x) => x,
Err(string) => {
return Err(string);
}
}
};
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, " ok");
terminal.reset().unwrap();
if args.flag_check {
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Checking... ");
terminal.reset().unwrap();
for func in &translation.functions {
verifier::verify_function(func, isa).map_err(|err| {
pretty_verifier_error(func, Some(isa), &err)
})?;
}
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, " ok");
terminal.reset().unwrap();
}
if args.flag_print {
let mut writer1 = stdout();
let mut writer2 = stdout();
match pretty_print_translation(name, &data, &translation, &mut writer1, &mut writer2, isa) {
Err(error) => return Err(String::from(error.description())),
Ok(()) => (),
}
}
if args.flag_optimize {
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Optimizing... ");
terminal.reset().unwrap();
for func in &translation.functions {
let mut loop_analysis = LoopAnalysis::new();
let mut cfg = ControlFlowGraph::new();
cfg.compute(func);
let mut domtree = DominatorTree::new();
domtree.compute(func, &cfg);
loop_analysis.compute(func, &cfg, &domtree);
let mut context = Context::new();
context.func = func.clone(); // TODO: Avoid this clone.
context.cfg = cfg;
context.domtree = domtree;
context.loop_analysis = loop_analysis;
match verifier::verify_context(&context.func, &context.cfg, &context.domtree, isa) {
Ok(()) => (),
Err(ref err) => {
return Err(pretty_verifier_error(&context.func, Some(isa), err));
}
};
match context.licm(isa) {
Ok(()) => (),
Err(error) => {
match error {
CtonError::Verifier(ref err) => {
return Err(pretty_verifier_error(&context.func, Some(isa), err));
}
CtonError::InvalidInput |
CtonError::ImplLimitExceeded |
CtonError::CodeTooLarge => return Err(String::from(error.description())),
}
}
};
match verifier::verify_context(&context.func, &context.cfg, &context.domtree, isa) {
Ok(()) => (),
Err(ref err) => return Err(pretty_verifier_error(&context.func, Some(isa), err)),
}
}
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, " ok");
terminal.reset().unwrap();
}
if args.flag_execute {
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Compiling... ");
terminal.reset().unwrap();
match compile_module(&translation, isa, &runtime) {
Ok(ref exec) => {
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, "ok");
terminal.reset().unwrap();
terminal.fg(term::color::MAGENTA).unwrap();
vprint!(args.flag_verbose, "Executing... ");
terminal.reset().unwrap();
match execute(exec) {
Ok(()) => {
terminal.fg(term::color::GREEN).unwrap();
vprintln!(args.flag_verbose, "ok");
terminal.reset().unwrap();
}
Err(s) => {
return Err(s);
}
}
}
Err(s) => {
return Err(s);
}
};
if args.flag_memory {
let mut input = String::new();
terminal.fg(term::color::YELLOW).unwrap();
println!("Inspecting memory");
terminal.fg(term::color::MAGENTA).unwrap();
println!("Type 'quit' to exit.");
terminal.reset().unwrap();
loop {
input.clear();
terminal.fg(term::color::YELLOW).unwrap();
print!("Memory index, offset, length (e.g. 0,0,4): ");
terminal.reset().unwrap();
let _ = stdout().flush();
match io::stdin().read_line(&mut input) {
Ok(_) => {
input.pop();
if input == "quit" {
break;
}
let split: Vec<&str> = input.split(',').collect();
if split.len() != 3 {
break;
}
let memory = runtime.inspect_memory(
str::parse(split[0]).unwrap(),
str::parse(split[1]).unwrap(),
str::parse(split[2]).unwrap(),
);
let mut s = memory.iter().fold(String::from("#"), |mut acc, byte| {
acc.push_str(format!("{:02x}_", byte).as_str());
acc
});
s.pop();
println!("{}", s);
}
Err(error) => return Err(String::from(error.description())),
}
}
}
}
Ok(())
}
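The memory inspector above folds the requested bytes into a `#`-prefixed, `_`-separated hex string. As a self-contained sketch of just that formatting step (the `hex_dump` helper name is illustrative, not part of this module):

```rust
// Illustrative helper mirroring the inspector's hex formatting:
// [0x00, 0x01, 0xff] becomes "#00_01_ff".
fn hex_dump(bytes: &[u8]) -> String {
    let mut s = bytes.iter().fold(String::from("#"), |mut acc, byte| {
        acc.push_str(&format!("{:02x}_", byte));
        acc
    });
    s.pop(); // remove the trailing '_' (or the leading '#' when input is empty)
    s
}
```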
// Prints out a Wasm module, and for each function the corresponding translation in Cretonne IL.
fn pretty_print_translation(
filename: &str,
data: &[u8],
translation: &TranslationResult,
writer_wat: &mut Write,
writer_cretonne: &mut Write,
isa: &TargetIsa,
) -> Result<(), io::Error> {
let mut terminal = term::stdout().unwrap();
let mut parser = Parser::new(data);
let mut parser_writer = Writer::new(writer_wat);
match parser.read() {
s @ &ParserState::BeginWasm { .. } => parser_writer.write(s)?,
_ => panic!("modules should begin properly"),
}
loop {
match parser.read() {
s @ &ParserState::BeginSection { code: SectionCode::Code, .. } => {
// The code section begins
parser_writer.write(s)?;
break;
}
&ParserState::EndWasm => return Ok(()),
s => parser_writer.write(s)?,
}
}
let mut function_index = 0;
loop {
match parser.read() {
s @ &ParserState::BeginFunctionBody { .. } => {
terminal.fg(term::color::BLUE).unwrap();
write!(
writer_cretonne,
"====== Function No. {} of module \"{}\" ======\n",
function_index,
filename
)?;
terminal.fg(term::color::CYAN).unwrap();
write!(writer_cretonne, "Wast ---------->\n")?;
terminal.reset().unwrap();
parser_writer.write(s)?;
}
s @ &ParserState::EndSection => {
parser_writer.write(s)?;
break;
}
_ => panic!("wrong content in code section"),
}
loop {
match parser.read() {
s @ &ParserState::EndFunctionBody => {
parser_writer.write(s)?;
break;
}
s => {
parser_writer.write(s)?;
}
};
}
let mut function_string =
format!(" {}", translation.functions[function_index].display(isa));
function_string.pop();
let function_str = str::replace(function_string.as_str(), "\n", "\n ");
terminal.fg(term::color::CYAN).unwrap();
write!(writer_cretonne, "Cretonne IL --->\n")?;
terminal.reset().unwrap();
write!(writer_cretonne, "{}\n", function_str)?;
function_index += 1;
}
loop {
match parser.read() {
&ParserState::EndWasm => return Ok(()),
s => parser_writer.write(s)?,
}
}
}
/// Pretty-print a verifier error.
pub fn pretty_verifier_error(
func: &ir::Function,
isa: Option<&TargetIsa>,
err: &verifier::Error,
) -> String {
let msg = err.to_string();
let str1 = match err.location {
AnyEntity::Inst(inst) => {
format!(
"{}\n{}: {}\n\n",
msg,
inst,
func.dfg.display_inst(inst, isa)
)
}
_ => format!("{}\n", msg),
};
format!("{}{}", str1, func.display(isa))
} | use cretonne::flowgraph::ControlFlowGraph;
use cretonne::dominator_tree::DominatorTree;
use cretonne::Context;
use cretonne::result::CtonError; | random_line_split |
lib.rs | //! Interface for interacting with the Wayland protocol, server-side.
//!
//! ## General concepts
//!
//! This crate is structured around four main objects: the [`Display`] and [`DisplayHandle`] structs,
//! resources (objects implementing the [`Resource`] trait), and the [`Dispatch`] trait.
//!
//! The [`Display`] is the heart of this crate: it represents the protocol state of your Wayland server, and
//! takes care of processing messages from clients. You'll need to integrate it in your event loop (see its
//! documentation for details). From it you can retrieve the [`DisplayHandle`], which is a clonable handle to
//! the Wayland state and is the type used to actually interact with the protocol.
//!
//! Each of the Wayland objects you can manipulate is represented by a struct implementing the [`Resource`]
//! trait. Those structs are automatically generated from the wayland XML protocol specification. This crate
//! provides the types generated from the core protocol in the [`protocol`] module. For other standard
//! protocols, see the `wayland-protocols` crate.
//!
//! ## Request dispatching and the [`Dispatch`] trait
//!
//! The request dispatching logic provided by this crate is built around the [`Dispatch`] trait. During the
//! dispatching process (in [`Display::dispatch_clients()`]), all requests sent by clients are read from
//! their respective sockets and delivered to your processing logic by invoking methods on the various
//! [`Dispatch`] implementations of your `State` struct. In this paradigm, your `State` needs to implement
//! `Dispatch<O, _>` for every Wayland object `O` it needs to process events for.
//!
//! However, implementing all those traits on your own is a lot of (often uninteresting) work. To make this
//! easier a composition mechanism is provided using the [`delegate_dispatch!`] macro. This way, another
//! library (such as Smithay) can provide generic [`Dispatch`] implementations that you can reuse on your
//! own app by delegating those objects to that provided implementation. See the documentation of those
//! traits and macro for details.
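//!
//! As an illustration, a `Dispatch` implementation has this general shape (a non-compiling
//! sketch: the `State` type and the `()` user-data type are hypothetical, and the exact
//! signature may differ between versions of this crate):
//!
//! ```ignore
//! impl Dispatch<protocol::wl_compositor::WlCompositor, ()> for State {
//!     fn request(
//!         state: &mut Self,
//!         client: &Client,
//!         resource: &protocol::wl_compositor::WlCompositor,
//!         request: protocol::wl_compositor::Request,
//!         data: &(),
//!         dhandle: &DisplayHandle,
//!         data_init: &mut DataInit<'_, Self>,
//!     ) {
//!         // Inspect `request` and update `state` accordingly.
//!     }
//! }
//! ```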
//!
//! ## Globals
//!
//! The entry point of the protocol for clients goes through the protocol globals. Each global represents a
//! capability of your compositor, a peripheral it has access to, or a protocol extension it supports.
//! Globals are created by you using [`DisplayHandle::create_global()`], and require your `State` to
//! implement the [`GlobalDispatch`] trait for the interface associated with that global.
//!
//! ## Logging
//!
//! This crate can generate some runtime error messages (notably when a protocol error occurs). By default
//! those messages are printed to stderr. If you activate the `log` cargo feature, they will instead be
//! piped through the `log` crate.
//!
//! ## Advanced use
//!
//! ### Bypassing [`Dispatch`]
//!
//! It may be that for some of your objects, handling them via the [`Dispatch`] trait is impractical. In
//! those contexts, this crate also provides some escape-hatches to directly interface with the low-level
//! APIs from `wayland-backend`, allowing you to register callbacks for those objects by directly providing
//! implementations of the backend [`ObjectData`](crate::backend::ObjectData) trait.
//! See [`Client::create_resource_from_objdata()`] and [`DataInit::custom_init()`].
//!
//! ### Interaction with FFI
//!
//! It can happen that you'll need to interact with Wayland states across FFI, such as for example when
//! interfacing with the graphics stack for enabling hardware acceleration for clients.
//!
//! In this case, you'll need to do it in two steps, by explicitly working with `wayland-backend`, adding
//! it to your dependencies and enabling its `server_system` feature.
//!
//! Then, you'll generally need:
//!
//! - The `*mut wl_display` pointer, that you can retrieve by first retrieving the
//! [`Backend`](crate::backend::Backend) using [`Display::backend()`], and then invoke
//! `Backend::display_ptr()`.
//! - The `*mut wl_resource` pointers for the objects you need to share, by first getting the
//! [`ObjectId`](crate::backend::ObjectId) using the [`Resource::id()`] method, and then
//! the `ObjectId::as_ptr()` method.
//!
//! If you need to receive pointers from FFI, you can make [`ObjectId`]s from the `*mut wl_resource` pointers
//! using `ObjectId::from_ptr()`, and then make the resources using [`Resource::from_id`].
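//!
//! Put together, the two directions look roughly like this (a non-compiling sketch using only
//! the names quoted above; argument lists are abbreviated and may differ between versions):
//!
//! ```ignore
//! // Handing pointers to FFI code:
//! let wl_display_ptr = display.backend().display_ptr();
//! let wl_resource_ptr = my_resource.id().as_ptr();
//!
//! // Receiving a pointer from FFI code:
//! let id = ObjectId::from_ptr(/* interface, *mut wl_resource */)?;
//! let resource = MyInterface::from_id(&display_handle, id)?;
//! ```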
#![forbid(improper_ctypes, unsafe_op_in_unsafe_fn)]
// Doc feature labels can be tested locally by running RUSTDOCFLAGS="--cfg=docsrs" cargo +nightly doc -p <crate>
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
use std::{
fmt,
hash::{Hash, Hasher},
};
use wayland_backend::{
io_lifetimes::OwnedFd,
protocol::{Interface, Message},
server::{InvalidId, ObjectId, WeakHandle},
};
mod client;
mod dispatch;
mod display;
mod global;
mod socket;
pub use client::Client;
pub use dispatch::{DataInit, Dispatch, New, ResourceData};
pub use display::{Display, DisplayHandle};
pub use global::GlobalDispatch;
pub use socket::{BindError, ListeningSocket};
/// Backend reexports
pub mod backend {
pub use wayland_backend::io_lifetimes;
pub use wayland_backend::protocol;
pub use wayland_backend::server::{
Backend, ClientData, ClientId, Credentials, DisconnectReason, GlobalHandler, GlobalId,
Handle, InitError, InvalidId, ObjectData, ObjectId, WeakHandle,
};
pub use wayland_backend::smallvec;
}
pub use wayland_backend::protocol::WEnum;
/// Generated protocol definitions
///
/// This module is automatically generated from the `wayland.xml` protocol specification, and contains the
/// interface definitions for the core Wayland protocol.
#[allow(missing_docs)]
pub mod protocol {
use self::__interfaces::*;
use crate as wayland_server;
pub mod __interfaces {
wayland_scanner::generate_interfaces!("wayland.xml");
}
wayland_scanner::generate_server_code!("wayland.xml");
}
// internal imports for dispatching logging depending on the `log` feature
#[cfg(feature = "log")]
#[allow(unused_imports)]
use log::{debug as log_debug, error as log_error, info as log_info, warn as log_warn};
#[cfg(not(feature = "log"))]
#[allow(unused_imports)]
use std::{
eprintln as log_error, eprintln as log_warn, eprintln as log_info, eprintln as log_debug,
};
/// Trait representing a Wayland interface
pub trait Resource: Clone + std::fmt::Debug + Sized {
/// The event enum for this interface
type Event;
/// The request enum for this interface
type Request;
/// The interface description
fn interface() -> &'static Interface;
/// The ID of this object
fn id(&self) -> ObjectId;
/// The client owning this object
///
/// Returns [`None`] if the object is no longer alive.
fn client(&self) -> Option<Client> {
let handle = self.handle().upgrade()?;
let client_id = handle.get_client(self.id()).ok()?;
let dh = DisplayHandle::from(handle);
Client::from_id(&dh, client_id).ok()
}
/// The version of this object
fn version(&self) -> u32;
/// Checks if the Wayland object associated with this proxy is still alive
fn is_alive(&self) -> bool {
if let Some(handle) = self.handle().upgrade() {
handle.object_info(self.id()).is_ok()
} else {
false
}
}
/// Access the user-data associated with this object
fn data<U: 'static>(&self) -> Option<&U>;
/// Access the raw data associated with this object.
///
/// It is given to you as a `dyn Any`, and you are responsible for downcasting it.
///
/// For objects created using the scanner-generated methods, this will be an instance of the
/// [`ResourceData`] type.
fn object_data(&self) -> Option<&std::sync::Arc<dyn std::any::Any + Send + Sync>>;
/// Access the backend handle associated with this object
fn handle(&self) -> &backend::WeakHandle;
/// Create an object resource from its ID
///
/// Returns an error if the provided object ID does not correspond to the `Self` interface.
///
/// **Note:** This method is mostly meant as an implementation detail to be used by code generated by
/// wayland-scanner.
fn from_id(dh: &DisplayHandle, id: ObjectId) -> Result<Self, InvalidId>;
/// Send an event to this object
fn send_event(&self, evt: Self::Event) -> Result<(), InvalidId>;
/// Trigger a protocol error on this object
///
/// The `code` is intended to be from the `Error` enum declared alongside that object interface.
///
/// A protocol error is fatal to the Wayland connection, and the client will be disconnected.
#[inline]
fn post_error(&self, code: impl Into<u32>, error: impl Into<String>) {
if let Some(dh) = self.handle().upgrade().map(DisplayHandle::from) {
dh.post_error(self, code.into(), error.into());
}
}
/// Parse a request for this object
///
/// **Note:** This method is mostly meant as an implementation detail to be used by code generated by
/// wayland-scanner.
fn parse_request(
dh: &DisplayHandle,
msg: Message<ObjectId, OwnedFd>, | /// wayland-scanner.
fn write_event(
&self,
dh: &DisplayHandle,
req: Self::Event,
) -> Result<Message<ObjectId, std::os::unix::io::RawFd>, InvalidId>;
/// Creates a weak handle to this object
///
/// This weak handle will not keep the user-data associated with the object alive,
/// and can be converted back to a full resource using [`Weak::upgrade()`].
///
/// This can be of use if you need to store resources in the used data of other objects and want
/// to be sure to avoid reference cycles that would cause memory leaks.
fn downgrade(&self) -> Weak<Self> {
Weak { handle: self.handle().clone(), id: self.id(), _iface: std::marker::PhantomData }
}
#[doc(hidden)]
fn __set_object_data(
&mut self,
odata: std::sync::Arc<dyn std::any::Any + Send + Sync + 'static>,
);
}
/// An error generated if an illegal request was received from a client
#[derive(Debug)]
pub enum DispatchError {
/// The received message does not match the specification for the object's interface.
BadMessage {
/// The id of the target object
sender_id: ObjectId,
/// The interface of the target object
interface: &'static str,
/// The opcode number
opcode: u16,
},
}
impl std::error::Error for DispatchError {}
impl fmt::Display for DispatchError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
DispatchError::BadMessage { sender_id, interface, opcode } => {
write!(f, "Bad message for object {interface}@{sender_id} on opcode {opcode}",)
}
}
}
}
/// A weak handle to a Wayland object
///
/// This handle does not keep the underlying user data alive, and can be converted back to a full resource
/// using [`Weak::upgrade()`].
#[derive(Debug, Clone)]
pub struct Weak<I> {
handle: WeakHandle,
id: ObjectId,
_iface: std::marker::PhantomData<I>,
}
impl<I: Resource> Weak<I> {
/// Try to upgrade this weak handle back into a full resource.
///
/// This will fail if either:
/// - the object represented by this handle has already been destroyed at the protocol level
/// - the Wayland connection has already been closed
pub fn upgrade(&self) -> Result<I, InvalidId> {
let handle = self.handle.upgrade().ok_or(InvalidId)?;
// Check if the object has been destroyed
handle.object_info(self.id.clone())?;
let d_handle = DisplayHandle::from(handle);
I::from_id(&d_handle, self.id.clone())
}
/// The underlying [`ObjectId`]
pub fn id(&self) -> ObjectId {
self.id.clone()
}
}
impl<I> PartialEq for Weak<I> {
fn eq(&self, other: &Self) -> bool {
self.id == other.id
}
}
impl<I> Eq for Weak<I> {}
impl<I> Hash for Weak<I> {
fn hash<H: Hasher>(&self, state: &mut H) {
self.id.hash(state);
}
}
impl<I: Resource> PartialEq<I> for Weak<I> {
fn eq(&self, other: &I) -> bool {
self.id == other.id()
}
} | ) -> Result<(Self, Self::Request), DispatchError>;
/// Serialize an event for this object
///
/// **Note:** This method is mostly meant as an implementation detail to be used by code generated by | random_line_split |
lib.rs | //! Interface for interacting with the Wayland protocol, server-side.
//!
//! ## General concepts
//!
//! This crate is structured around four main objects: the [`Display`] and [`DisplayHandle`] structs,
//! resources (objects implementing the [`Resource`] trait), and the [`Dispatch`] trait.
//!
//! The [`Display`] is the heart of this crate: it represents the protocol state of your Wayland server, and
//! takes care of processing messages from clients. You'll need to integrate it in your event loop (see its
//! documentation for details). From it you can retrieve the [`DisplayHandle`], which is a clonable handle to
//! the Wayland state and is the type used to actually interact with the protocol.
//!
//! Each of the Wayland objects you can manipulate is represented by a struct implementing the [`Resource`]
//! trait. Those structs are automatically generated from the wayland XML protocol specification. This crate
//! provides the types generated from the core protocol in the [`protocol`] module. For other standard
//! protocols, see the `wayland-protocols` crate.
//!
//! ## Request dispatching and the [`Dispatch`] trait
//!
//! The request dispatching logic provided by this crate is built around the [`Dispatch`] trait. During the
//! dispatching process (in [`Display::dispatch_clients()`]), all requests sent by clients are read from
//! their respective sockets and delivered to your processing logic by invoking methods on the various
//! [`Dispatch`] implementations of your `State` struct. In this paradigm, your `State` needs to implement
//! `Dispatch<O, _>` for every Wayland object `O` it needs to process events for.
//!
//! However, implementing all those traits on your own is a lot of (often uninteresting) work. To make this
//! easier a composition mechanism is provided using the [`delegate_dispatch!`] macro. This way, another
//! library (such as Smithay) can provide generic [`Dispatch`] implementations that you can reuse on your
//! own app by delegating those objects to that provided implementation. See the documentation of those
//! traits and macro for details.
//!
//! ## Globals
//!
//! The entry point of the protocol for clients goes through the protocol globals. Each global represents a
//! capability of your compositor, a peripheral it has access to, or a protocol extension it supports.
//! Globals are created by you using [`DisplayHandle::create_global()`], and require your `State` to
//! implement the [`GlobalDispatch`] trait for the interface associated with that global.
//!
//! ## Logging
//!
//! This crate can generate some runtime error messages (notably when a protocol error occurs). By default
//! those messages are printed to stderr. If you activate the `log` cargo feature, they will instead be
//! piped through the `log` crate.
//!
//! ## Advanced use
//!
//! ### Bypassing [`Dispatch`]
//!
//! It may be that for some of your objects, handling them via the [`Dispatch`] trait is impractical. In
//! those contexts, this crate also provides some escape-hatches to directly interface with the low-level
//! APIs from `wayland-backend`, allowing you to register callbacks for those objects by directly providing
//! implementations of the backend [`ObjectData`](crate::backend::ObjectData) trait.
//! See [`Client::create_resource_from_objdata()`] and [`DataInit::custom_init()`].
//!
//! ### Interaction with FFI
//!
//! It can happen that you'll need to interact with Wayland states across FFI, such as for example when
//! interfacing with the graphics stack for enabling hardware acceleration for clients.
//!
//! In this case, you'll need to do it in two steps, by explicitly working with `wayland-backend`, adding
//! it to your dependencies and enabling its `server_system` feature.
//!
//! Then, you'll generally need:
//!
//! - The `*mut wl_display` pointer, that you can retrieve by first retrieving the
//! [`Backend`](crate::backend::Backend) using [`Display::backend()`], and then invoke
//! `Backend::display_ptr()`.
//! - The `*mut wl_resource` pointers for the objects you need to share, by first getting the
//! [`ObjectId`](crate::backend::ObjectId) using the [`Resource::id()`] method, and then
//! the `ObjectId::as_ptr()` method.
//!
//! If you need to receive pointers from FFI, you can make [`ObjectId`]s from the `*mut wl_resource` pointers
//! using `ObjectId::from_ptr()`, and then make the resources using [`Resource::from_id`].
#![forbid(improper_ctypes, unsafe_op_in_unsafe_fn)]
// Doc feature labels can be tested locally by running RUSTDOCFLAGS="--cfg=docsrs" cargo +nightly doc -p <crate>
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
use std::{
fmt,
hash::{Hash, Hasher},
};
use wayland_backend::{
io_lifetimes::OwnedFd,
protocol::{Interface, Message},
server::{InvalidId, ObjectId, WeakHandle},
};
mod client;
mod dispatch;
mod display;
mod global;
mod socket;
pub use client::Client;
pub use dispatch::{DataInit, Dispatch, New, ResourceData};
pub use display::{Display, DisplayHandle};
pub use global::GlobalDispatch;
pub use socket::{BindError, ListeningSocket};
/// Backend reexports
pub mod backend {
pub use wayland_backend::io_lifetimes;
pub use wayland_backend::protocol;
pub use wayland_backend::server::{
Backend, ClientData, ClientId, Credentials, DisconnectReason, GlobalHandler, GlobalId,
Handle, InitError, InvalidId, ObjectData, ObjectId, WeakHandle,
};
pub use wayland_backend::smallvec;
}
pub use wayland_backend::protocol::WEnum;
/// Generated protocol definitions
///
/// This module is automatically generated from the `wayland.xml` protocol specification, and contains the
/// interface definitions for the core Wayland protocol.
#[allow(missing_docs)]
pub mod protocol {
use self::__interfaces::*;
use crate as wayland_server;
pub mod __interfaces {
wayland_scanner::generate_interfaces!("wayland.xml");
}
wayland_scanner::generate_server_code!("wayland.xml");
}
// internal imports for dispatching logging depending on the `log` feature
#[cfg(feature = "log")]
#[allow(unused_imports)]
use log::{debug as log_debug, error as log_error, info as log_info, warn as log_warn};
#[cfg(not(feature = "log"))]
#[allow(unused_imports)]
use std::{
eprintln as log_error, eprintln as log_warn, eprintln as log_info, eprintln as log_debug,
};
/// Trait representing a Wayland interface
pub trait Resource: Clone + std::fmt::Debug + Sized {
/// The event enum for this interface
type Event;
/// The request enum for this interface
type Request;
/// The interface description
fn interface() -> &'static Interface;
/// The ID of this object
fn id(&self) -> ObjectId;
/// The client owning this object
///
/// Returns [`None`] if the object is no longer alive.
fn client(&self) -> Option<Client> {
let handle = self.handle().upgrade()?;
let client_id = handle.get_client(self.id()).ok()?;
let dh = DisplayHandle::from(handle);
Client::from_id(&dh, client_id).ok()
}
/// The version of this object
fn version(&self) -> u32;
/// Checks if the Wayland object associated with this proxy is still alive
fn is_alive(&self) -> bool {
if let Some(handle) = self.handle().upgrade() {
handle.object_info(self.id()).is_ok()
} else {
false
}
}
/// Access the user-data associated with this object
    fn data<U: 'static>(&self) -> Option<&U>;
/// Access the raw data associated with this object.
///
/// It is given to you as a `dyn Any`, and you are responsible for downcasting it.
///
/// For objects created using the scanner-generated methods, this will be an instance of the
/// [`ResourceData`] type.
fn object_data(&self) -> Option<&std::sync::Arc<dyn std::any::Any + Send + Sync>>;
/// Access the backend handle associated with this object
fn handle(&self) -> &backend::WeakHandle;
/// Create an object resource from its ID
///
    /// Returns an error if the provided object ID does not correspond to the `Self` interface.
///
/// **Note:** This method is mostly meant as an implementation detail to be used by code generated by
/// wayland-scanner.
fn from_id(dh: &DisplayHandle, id: ObjectId) -> Result<Self, InvalidId>;
/// Send an event to this object
fn send_event(&self, evt: Self::Event) -> Result<(), InvalidId>;
/// Trigger a protocol error on this object
///
/// The `code` is intended to be from the `Error` enum declared alongside that object interface.
///
/// A protocol error is fatal to the Wayland connection, and the client will be disconnected.
#[inline]
fn post_error(&self, code: impl Into<u32>, error: impl Into<String>) {
if let Some(dh) = self.handle().upgrade().map(DisplayHandle::from) {
dh.post_error(self, code.into(), error.into());
}
}
    /// Parse a request for this object
///
/// **Note:** This method is mostly meant as an implementation detail to be used by code generated by
/// wayland-scanner.
fn parse_request(
dh: &DisplayHandle,
msg: Message<ObjectId, OwnedFd>,
) -> Result<(Self, Self::Request), DispatchError>;
/// Serialize an event for this object
///
/// **Note:** This method is mostly meant as an implementation detail to be used by code generated by
/// wayland-scanner.
fn write_event(
&self,
dh: &DisplayHandle,
req: Self::Event,
) -> Result<Message<ObjectId, std::os::unix::io::RawFd>, InvalidId>;
/// Creates a weak handle to this object
///
/// This weak handle will not keep the user-data associated with the object alive,
/// and can be converted back to a full resource using [`Weak::upgrade()`].
///
    /// This can be of use if you need to store resources in the user data of other objects and want
/// to be sure to avoid reference cycles that would cause memory leaks.
fn downgrade(&self) -> Weak<Self> {
Weak { handle: self.handle().clone(), id: self.id(), _iface: std::marker::PhantomData }
}
#[doc(hidden)]
fn __set_object_data(
&mut self,
        odata: std::sync::Arc<dyn std::any::Any + Send + Sync + 'static>,
);
}
/// An error generated if an illegal request was received from a client
#[derive(Debug)]
pub enum DispatchError {
/// The received message does not match the specification for the object's interface.
BadMessage {
/// The id of the target object
sender_id: ObjectId,
/// The interface of the target object
interface: &'static str,
/// The opcode number
opcode: u16,
},
}
impl std::error::Error for DispatchError {}
impl fmt::Display for DispatchError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
            DispatchError::BadMessage { sender_id, interface, opcode } => {
                write!(f, "Bad message for object {interface}@{sender_id} on opcode {opcode}")
            }
}
}
}
/// A weak handle to a Wayland object
///
/// This handle does not keep the underlying user data alive, and can be converted back to a full resource
/// using [`Weak::upgrade()`].
#[derive(Debug, Clone)]
pub struct Weak<I> {
handle: WeakHandle,
id: ObjectId,
_iface: std::marker::PhantomData<I>,
}
impl<I: Resource> Weak<I> {
    /// Try to upgrade this weak handle back into a full resource.
///
/// This will fail if either:
/// - the object represented by this handle has already been destroyed at the protocol level
/// - the Wayland connection has already been closed
pub fn upgrade(&self) -> Result<I, InvalidId> {
let handle = self.handle.upgrade().ok_or(InvalidId)?;
// Check if the object has been destroyed
handle.object_info(self.id.clone())?;
let d_handle = DisplayHandle::from(handle);
I::from_id(&d_handle, self.id.clone())
}
/// The underlying [`ObjectId`]
pub fn id(&self) -> ObjectId {
self.id.clone()
}
}
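The `Weak<I>`/`upgrade()` shape above mirrors `std::sync::Weak`: downgrading does not keep the value alive, and upgrading validates liveness first. A minimal std-only sketch of why this breaks reference cycles (the object name in the string is just a placeholder):

```rust
use std::sync::{Arc, Weak};

fn main() {
    // A strong handle keeps the value alive...
    let strong: Arc<String> = Arc::new("wl_surface@42".to_string()); // placeholder name
    // ...while a downgraded handle does not, mirroring Resource::downgrade().
    let weak: Weak<String> = Arc::downgrade(&strong);
    assert_eq!(weak.upgrade().unwrap().as_str(), "wl_surface@42");
    // Once the last strong handle is dropped, upgrade() fails instead of
    // keeping the user data alive forever through a cycle.
    drop(strong);
    assert!(weak.upgrade().is_none());
}
```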
impl<I> PartialEq for Weak<I> {
fn eq(&self, other: &Self) -> bool {
self.id == other.id
}
}
impl<I> Eq for Weak<I> {}
impl<I> Hash for Weak<I> {
fn hash<H: Hasher>(&self, state: &mut H) {
self.id.hash(state);
}
}
impl<I: Resource> PartialEq<I> for Weak<I> {
fn eq(&self, other: &I) -> bool {
self.id == other.id()
}
}
lib.rs | //! Interface for interacting with the Wayland protocol, server-side.
//!
//! ## General concepts
//!
//! This crate is structured around four main objects: the [`Display`] and [`DisplayHandle`] structs,
//! resources (objects implementing the [`Resource`] trait), and the [`Dispatch`] trait.
//!
//! The [`Display`] is the heart of this crate, it represents the protocol state of your Wayland server, and
//! takes care of processing messages from clients. You'll need to integrate it in your event loop (see its
//! documentation for details). From it you can retrieve the [`DisplayHandle`], which is a clonable handle to
//! the Wayland state and is the type used to actually interact with the protocol.
//!
//! Each of the Wayland object you can manipulate is represented by a struct implementing the [`Resource`]
//! trait. Those structs are automatically generated from the wayland XML protocol specification. This crate
//! provides the types generated from the core protocol in the [`protocol`] module. For other standard
//! protocols, see the `wayland-protocols` crate.
//!
//! ## Request dispatching and the [`Dispatch`] trait
//!
//! The request dispatching logic provided by this crate is built around the [`Dispatch`] trait. During the
//! dispatching process (in [`Display::dispatch_clients()`]), all requests sent by clients are read from
//! their respective client connections and delivered to your processing logic, by invoking methods on the various
//! [`Dispatch`] implementations of your `State` struct. In this paradigm, your `State` needs to implement
//! `Dispatch<O, _>` for every Wayland object `O` it needs to process events for.
//!
//! However, implementing all those traits on your own is a lot of (often uninteresting) work. To make this
//! easier a composition mechanism is provided using the [`delegate_dispatch!`] macro. This way, another
//! library (such as Smithay) can provide generic [`Dispatch`] implementations that you can reuse on your
//! own app by delegating those objects to that provided implementation. See the documentation of those
//! traits and macro for details.
//!
//! ## Globals
//!
//! The entry point of the protocol for clients goes through the protocol globals. Each global represents a
//! capability of your compositor, a peripheral it has access to, or a protocol extension it supports.
//! Globals are created by you using [`DisplayHandle::create_global()`], and require your `State` to
//! implement the [`GlobalDispatch`] trait for the interface associated with that global.
//!
//! ## Logging
//!
//! This crate can generate some runtime error message (notably when a protocol error occurs). By default
//! those messages are printed to stderr. If you activate the `log` cargo feature, they will instead be
//! piped through the `log` crate.
//!
//! ## Advanced use
//!
//! ### Bypassing [`Dispatch`]
//!
//! It may be that for some of your objects, handling them via the [`Dispatch`] trait is impractical. In
//! those contexts, this crate also provides some escape-hatches to directly interface with the low-level
//! APIs from `wayland-backend`, allowing you to register callbacks for those objects by directly providing
//! implementations of the backend [`ObjectData`](crate::backend::ObjectData) trait.
//! See [`Client::create_resource_from_objdata()`] and [`DataInit::custom_init()`].
//!
//! ### Interaction with FFI
//!
//! It can happen that you'll need to interact with Wayland states across FFI, such as for example when
//! interfacing with the graphics stack for enabling hardware acceleration for clients.
//!
//! In this case, you'll need to do it in two steps, by explicitly working with `wayland-backend`, adding
//! it to your dependencies and enabling its `server_system` feature.
//!
//! Then, you'll generally need:
//!
//! - The `*mut wl_display` pointer, that you can retrieve by first retrieving the
//!   [`Backend`](crate::backend::Backend) using [`Display::backend()`], and then invoking
//!   `Backend::display_ptr()`.
//! - The `*mut wl_resource` pointers for the objects you need to share, by first getting the
//! [`ObjectId`](crate::backend::ObjectId) using the [`Resource::id()`] method, and then
//! the `ObjectId::as_ptr()` method.
//!
//! If you need to receive pointers from FFI, you can make [`ObjectId`]s from the `*mut wl_resource` pointers
//! using `ObjectId::from_ptr()`, and then make the resources using [`Resource::from_id`].
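The per-object-type `Dispatch` pattern described in the module docs can be sketched with a self-contained toy. The names below (`Handler`, `Output`, `Seat`, `State`) are invented for illustration and are not the real wayland-server API; they only show the shape of "one handler impl per object type on a single state struct":

```rust
// Toy analogy of the Dispatch pattern: the application state implements a
// handler trait once per object type it wants to process requests for.
trait Handler<O> {
    fn request(&mut self, object: &O, request: &str);
}

struct Output; // stand-in for a generated protocol type
struct Seat;   // stand-in for another generated protocol type

struct State {
    log: Vec<String>,
}

impl Handler<Output> for State {
    fn request(&mut self, _object: &Output, request: &str) {
        self.log.push(format!("output: {request}"));
    }
}

impl Handler<Seat> for State {
    fn request(&mut self, _object: &Seat, request: &str) {
        self.log.push(format!("seat: {request}"));
    }
}

fn main() {
    let mut state = State { log: Vec::new() };
    // A real dispatch loop would route incoming messages here by object type.
    Handler::<Output>::request(&mut state, &Output, "mode");
    Handler::<Seat>::request(&mut state, &Seat, "get_pointer");
    assert_eq!(state.log, ["output: mode", "seat: get_pointer"]);
}
```

Delegation (as done by `delegate_dispatch!`) then amounts to forwarding one of these impls to a reusable helper type.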
#![forbid(improper_ctypes, unsafe_op_in_unsafe_fn)]
// Doc feature labels can be tested locally by running RUSTDOCFLAGS="--cfg=docsrs" cargo +nightly doc -p <crate>
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
use std::{
fmt,
hash::{Hash, Hasher},
};
use wayland_backend::{
io_lifetimes::OwnedFd,
protocol::{Interface, Message},
server::{InvalidId, ObjectId, WeakHandle},
};
mod client;
mod dispatch;
mod display;
mod global;
mod socket;
pub use client::Client;
pub use dispatch::{DataInit, Dispatch, New, ResourceData};
pub use display::{Display, DisplayHandle};
pub use global::GlobalDispatch;
pub use socket::{BindError, ListeningSocket};
/// Backend reexports
pub mod backend {
pub use wayland_backend::io_lifetimes;
pub use wayland_backend::protocol;
pub use wayland_backend::server::{
Backend, ClientData, ClientId, Credentials, DisconnectReason, GlobalHandler, GlobalId,
Handle, InitError, InvalidId, ObjectData, ObjectId, WeakHandle,
};
pub use wayland_backend::smallvec;
}
pub use wayland_backend::protocol::WEnum;
/// Generated protocol definitions
///
/// This module is automatically generated from the `wayland.xml` protocol specification, and contains the
/// interface definitions for the core Wayland protocol.
#[allow(missing_docs)]
pub mod protocol {
use self::__interfaces::*;
use crate as wayland_server;
pub mod __interfaces {
wayland_scanner::generate_interfaces!("wayland.xml");
}
wayland_scanner::generate_server_code!("wayland.xml");
}
// internal imports for dispatching logging depending on the `log` feature
#[cfg(feature = "log")]
#[allow(unused_imports)]
use log::{debug as log_debug, error as log_error, info as log_info, warn as log_warn};
#[cfg(not(feature = "log"))]
#[allow(unused_imports)]
use std::{
eprintln as log_error, eprintln as log_warn, eprintln as log_info, eprintln as log_debug,
};
/// Trait representing a Wayland interface
pub trait Resource: Clone + std::fmt::Debug + Sized {
/// The event enum for this interface
type Event;
/// The request enum for this interface
type Request;
/// The interface description
fn interface() -> &'static Interface;
/// The ID of this object
fn id(&self) -> ObjectId;
/// The client owning this object
///
/// Returns [`None`] if the object is no longer alive.
fn client(&self) -> Option<Client> {
let handle = self.handle().upgrade()?;
let client_id = handle.get_client(self.id()).ok()?;
let dh = DisplayHandle::from(handle);
Client::from_id(&dh, client_id).ok()
}
/// The version of this object
fn version(&self) -> u32;
/// Checks if the Wayland object associated with this proxy is still alive
    fn is_alive(&self) -> bool {
if let Some(handle) = self.handle().upgrade() {
handle.object_info(self.id()).is_ok()
} else {
false
}
}
/// Access the user-data associated with this object
    fn data<U: 'static>(&self) -> Option<&U>;
/// Access the raw data associated with this object.
///
/// It is given to you as a `dyn Any`, and you are responsible for downcasting it.
///
/// For objects created using the scanner-generated methods, this will be an instance of the
/// [`ResourceData`] type.
fn object_data(&self) -> Option<&std::sync::Arc<dyn std::any::Any + Send + Sync>>;
/// Access the backend handle associated with this object
fn handle(&self) -> &backend::WeakHandle;
/// Create an object resource from its ID
///
    /// Returns an error if the provided object ID does not correspond to the `Self` interface.
///
/// **Note:** This method is mostly meant as an implementation detail to be used by code generated by
/// wayland-scanner.
fn from_id(dh: &DisplayHandle, id: ObjectId) -> Result<Self, InvalidId>;
/// Send an event to this object
fn send_event(&self, evt: Self::Event) -> Result<(), InvalidId>;
/// Trigger a protocol error on this object
///
/// The `code` is intended to be from the `Error` enum declared alongside that object interface.
///
/// A protocol error is fatal to the Wayland connection, and the client will be disconnected.
#[inline]
fn post_error(&self, code: impl Into<u32>, error: impl Into<String>) {
if let Some(dh) = self.handle().upgrade().map(DisplayHandle::from) {
dh.post_error(self, code.into(), error.into());
}
}
    /// Parse a request for this object
///
/// **Note:** This method is mostly meant as an implementation detail to be used by code generated by
/// wayland-scanner.
fn parse_request(
dh: &DisplayHandle,
msg: Message<ObjectId, OwnedFd>,
) -> Result<(Self, Self::Request), DispatchError>;
/// Serialize an event for this object
///
/// **Note:** This method is mostly meant as an implementation detail to be used by code generated by
/// wayland-scanner.
fn write_event(
&self,
dh: &DisplayHandle,
req: Self::Event,
) -> Result<Message<ObjectId, std::os::unix::io::RawFd>, InvalidId>;
/// Creates a weak handle to this object
///
/// This weak handle will not keep the user-data associated with the object alive,
/// and can be converted back to a full resource using [`Weak::upgrade()`].
///
    /// This can be of use if you need to store resources in the user data of other objects and want
/// to be sure to avoid reference cycles that would cause memory leaks.
fn downgrade(&self) -> Weak<Self> {
Weak { handle: self.handle().clone(), id: self.id(), _iface: std::marker::PhantomData }
}
#[doc(hidden)]
fn __set_object_data(
&mut self,
        odata: std::sync::Arc<dyn std::any::Any + Send + Sync + 'static>,
);
}
/// An error generated if an illegal request was received from a client
#[derive(Debug)]
pub enum DispatchError {
/// The received message does not match the specification for the object's interface.
BadMessage {
/// The id of the target object
sender_id: ObjectId,
/// The interface of the target object
interface: &'static str,
/// The opcode number
opcode: u16,
},
}
impl std::error::Error for DispatchError {}
impl fmt::Display for DispatchError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
DispatchError::BadMessage { sender_id, interface, opcode } => {
                write!(f, "Bad message for object {interface}@{sender_id} on opcode {opcode}")
}
}
}
}
/// A weak handle to a Wayland object
///
/// This handle does not keep the underlying user data alive, and can be converted back to a full resource
/// using [`Weak::upgrade()`].
#[derive(Debug, Clone)]
pub struct Weak<I> {
handle: WeakHandle,
id: ObjectId,
_iface: std::marker::PhantomData<I>,
}
impl<I: Resource> Weak<I> {
    /// Try to upgrade this weak handle back into a full resource.
///
/// This will fail if either:
/// - the object represented by this handle has already been destroyed at the protocol level
/// - the Wayland connection has already been closed
pub fn upgrade(&self) -> Result<I, InvalidId> {
let handle = self.handle.upgrade().ok_or(InvalidId)?;
// Check if the object has been destroyed
handle.object_info(self.id.clone())?;
let d_handle = DisplayHandle::from(handle);
I::from_id(&d_handle, self.id.clone())
}
/// The underlying [`ObjectId`]
pub fn id(&self) -> ObjectId {
self.id.clone()
}
}
impl<I> PartialEq for Weak<I> {
fn eq(&self, other: &Self) -> bool {
self.id == other.id
}
}
impl<I> Eq for Weak<I> {}
impl<I> Hash for Weak<I> {
fn hash<H: Hasher>(&self, state: &mut H) {
self.id.hash(state);
}
}
impl<I: Resource> PartialEq<I> for Weak<I> {
fn eq(&self, other: &I) -> bool {
self.id == other.id()
}
}
player_vlc.rs | use crate::player::{Player, Result, SlideshowConfig};
use elementtree::Element;
use failure::{format_err, Fail};
use libc;
use log::{debug, info, warn};
use reqwest;
use std::path::PathBuf;
use std::process::Child;
use std::process::Command;
use std::time::Duration;
use std::time::Instant;
use url::Url;
const VLC_VOLUME_MAX: u32 = 512;
const VLC_HTTP_PASSWORD: &str = "cherry";
const VLC_HTTP_HOST: &str = "localhost";
const VLC_STARTUP_TIMEOUT: Duration = Duration::from_secs(10);
const VLC_STARTUP_CHECK_BACKOFF: Duration = Duration::from_millis(500);
const VLC_DEFAULT_BIN: &str = "vlc";
const VLC_DEFAULT_HTTP_PORT: u32 = 9843;
const VLC_REQUEST_TIMEOUT: u64 = 30;
#[derive(Debug, Fail)]
pub enum VlcError {
#[fail(display = "Player not started")]
NotStarted,
#[fail(display = "Timed out in waiting player to start")]
StartTimeout,
#[fail(display = "Failed to send request to player: {}", _0)]
BadResponse(#[fail(cause)] failure::Error),
}
impl From<reqwest::Error> for VlcError {
fn from(e: reqwest::Error) -> Self {
VlcError::BadResponse(e.into())
}
}
impl From<elementtree::Error> for VlcError {
fn from(e: elementtree::Error) -> Self {
VlcError::BadResponse(e.into())
}
}
pub struct VlcConfig {
pub vlc_bin: Option<String>,
pub http_port: Option<u32>,
}
impl Default for VlcConfig {
fn default() -> Self {
VlcConfig {
vlc_bin: None,
http_port: None,
}
}
}
pub trait HttpClient {
fn send_get(
&self,
port: u32,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError>;
}
pub struct ReqwestClient(reqwest::Client);
impl HttpClient for ReqwestClient {
fn send_get(
&self,
port: u32,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
let url = Url::parse_with_params(
&format!("http://{}:{}/{}", VLC_HTTP_HOST, port, path),
params,
)
.expect("parse vlc url");
debug!("Sending GET to {}", url);
let mut resp = self
.0
.get(url.as_ref())
.basic_auth("", Some(VLC_HTTP_PASSWORD))
.send()?;
        if !resp.status().is_success() {
return Err(VlcError::BadResponse(format_err!(
"Bad HTTP status {} : {}",
resp.status(),
resp.text().unwrap_or_else(|_| "N/A".to_string())
)));
}
Ok(resp.text()?)
}
}
pub struct VlcPlayer<C: HttpClient = ReqwestClient> {
vlc_config: VlcConfig,
config: Option<SlideshowConfig>,
process: Option<Child>,
client: C,
pausing: bool,
sleeping: bool,
muting: bool,
}
impl VlcPlayer {
pub fn new(config: VlcConfig) -> Self {
Self::new_with_client(
config,
ReqwestClient(
reqwest::Client::builder()
.timeout(Some(Duration::from_secs(VLC_REQUEST_TIMEOUT)))
.build()
.expect("reqwest client"),
),
)
}
}
impl<C: HttpClient> VlcPlayer<C> {
fn new_with_client(config: VlcConfig, client: C) -> Self {
Self {
vlc_config: config,
config: None,
process: None,
client,
pausing: false,
sleeping: false,
muting: false,
}
}
fn config(&self) -> std::result::Result<&SlideshowConfig, VlcError> {
self.config.as_ref().ok_or(VlcError::NotStarted)
}
/// Convert `audio_volume` set in config into the value
/// range used in VLC player
fn audio_volume(&self) -> std::result::Result<u32, VlcError> {
Ok((VLC_VOLUME_MAX as f32 * self.config()?.audio_volume).round() as u32)
}
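The scaling in `audio_volume` simply rounds a configured fraction in 0.0 to 1.0 onto VLC's integer volume range, where 512 is the maximum this module uses:

```rust
// Same arithmetic as VlcPlayer::audio_volume(): scale a 0.0..=1.0 fraction
// onto VLC's integer volume scale (512 = maximum used here).
const VLC_VOLUME_MAX: u32 = 512;

fn to_vlc_volume(fraction: f32) -> u32 {
    (VLC_VOLUME_MAX as f32 * fraction).round() as u32
}

fn main() {
    assert_eq!(to_vlc_volume(0.0), 0);
    assert_eq!(to_vlc_volume(0.5), 256);
    assert_eq!(to_vlc_volume(1.0), 512);
}
```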
fn http_port(&self) -> u32 {
self.vlc_config.http_port.unwrap_or(VLC_DEFAULT_HTTP_PORT)
}
fn send_get(
&self,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
self.client.send_get(self.http_port(), path, params)
}
fn send_status_cmd(
&self,
cmd: &str,
args: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
let mut params = Vec::with_capacity(args.len() + 1);
        if !cmd.is_empty() {
params.push(("command", cmd));
params.extend(args);
}
        self.send_get("requests/status.xml", &params)
}
fn get_playlist(&self) -> std::result::Result<Element, VlcError> {
let xml = self.send_get("requests/playlist.xml", &[])?;
debug!("Playlist XML from VLC: {}", xml);
let element = Element::from_reader(xml.into_bytes().as_slice())?;
Ok(element)
}
fn wait_on_http_interface(&self) -> std::result::Result<(), VlcError> {
let start_time = Instant::now();
while Instant::now() - start_time < VLC_STARTUP_TIMEOUT {
if self.is_ok() {
return Ok(());
}
std::thread::sleep(VLC_STARTUP_CHECK_BACKOFF);
}
Err(VlcError::StartTimeout)
}
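`wait_on_http_interface` above is a plain poll-until-deadline loop. The same shape, with the probe abstracted into a closure (the function and variable names here are invented):

```rust
use std::time::{Duration, Instant};

// Poll `probe` until it succeeds or `timeout` elapses, sleeping `backoff`
// between attempts: the same loop shape as wait_on_http_interface().
fn wait_until(mut probe: impl FnMut() -> bool, timeout: Duration, backoff: Duration) -> bool {
    let start = Instant::now();
    while Instant::now() - start < timeout {
        if probe() {
            return true;
        }
        std::thread::sleep(backoff);
    }
    false
}

fn main() {
    let mut calls = 0;
    let ok = wait_until(
        || {
            calls += 1;
            calls >= 3 // succeeds on the third attempt
        },
        Duration::from_secs(1),
        Duration::from_millis(1),
    );
    assert!(ok);
    assert_eq!(calls, 3);
}
```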
fn set_volume(&self, volume: u32) -> std::result::Result<(), VlcError> {
info!("Setting audio volume to {}", volume);
self.send_status_cmd("volume", &[("val", &volume.to_string())])?;
Ok(())
}
fn playlist_ids(element: Element) -> std::result::Result<Vec<u64>, VlcError> {
for node in element.find_all("node") {
if node
.get_attr("name")
.map(|name| name == "Playlist")
.unwrap_or(false)
{
let mut ids = Vec::new();
for leaf in node.find_all("leaf") {
let id_s = leaf.get_attr("id").ok_or_else(|| {
VlcError::BadResponse(format_err!("missing id attribute"))
})?;
let id: u64 = id_s.parse().map_err(|_| {
VlcError::BadResponse(format_err!("cannot parse id: {}", id_s))
})?;
ids.push(id);
}
return Ok(ids);
}
}
Err(VlcError::BadResponse(format_err!(
"no playlist found in XML"
)))
}
fn maybe_restore_pause(&self) -> std::result::Result<(), VlcError> {
// Moving resets the pausing state
if self.locked() {
// Pausing before play starts causes blackscreen
std::thread::sleep(Duration::from_secs(1));
self.send_status_cmd("pl_pause", &[])?;
}
Ok(())
}
fn maybe_pause(&self) -> std::result::Result<(), VlcError> {
        if !self.pausing && !self.sleeping {
self.send_status_cmd("pl_pause", &[])?;
}
Ok(())
}
fn maybe_resume(&mut self, resume: bool) -> std::result::Result<(), VlcError> {
        if (self.pausing && resume) || (self.sleeping && !self.pausing) {
self.send_status_cmd("pl_play", &[])?;
self.pausing = false;
self.sleeping = false;
}
Ok(())
}
}
impl<C: HttpClient> Player for VlcPlayer<C> {
fn start(&mut self, config: SlideshowConfig) -> Result<()> {
let vlc_bin = self
.vlc_config
.vlc_bin
.as_ref()
.map(|s| s.as_ref())
.unwrap_or(VLC_DEFAULT_BIN);
let mut cmd = Command::new(vlc_bin);
cmd.arg("--loop")
.arg("--no-video-title-show")
// Don't show popup for asking whether to fetch media metadata through network
.arg("--no-qt-privacy-ask")
.arg("--no-qt-video-autoresize")
// https://wiki.videolan.org/index.php/VLC_command-line_help
.args(&[
"--image-duration",
&config.show_duration.as_secs().to_string(),
])
.args(&["--extraintf", "http"])
.args(&["--http-password", VLC_HTTP_PASSWORD])
.args(&["--http-host", VLC_HTTP_HOST])
.args(&["--http-port", &self.http_port().to_string()]);
if config.fullscreen {
cmd.arg("--fullscreen");
}
self.process = Some(cmd.spawn()?);
self.wait_on_http_interface()?;
self.config = Some(config);
self.set_volume(self.audio_volume()?)?;
Ok(())
}
fn play_next(&mut self) -> Result<()> {
self.send_status_cmd("pl_next", &[])?;
self.maybe_restore_pause()?;
Ok(())
}
fn play_back(&mut self) -> Result<()> {
self.send_status_cmd("pl_previous", &[])?;
self.maybe_restore_pause()?;
Ok(())
}
fn sleep(&mut self) -> Result<()> {
self.maybe_pause()?;
self.sleeping = true;
Ok(())
}
fn wakeup(&mut self) -> Result<()> {
self.maybe_resume(false)?;
Ok(())
}
fn pause(&mut self) -> Result<()> {
self.maybe_pause()?;
self.pausing = true;
Ok(())
}
fn resume(&mut self) -> Result<()> {
self.maybe_resume(true)?;
Ok(())
}
fn mute(&mut self) -> Result<()> {
        if !self.muting {
self.set_volume(0)?;
}
self.muting = true;
Ok(())
}
fn unmute(&mut self) -> Result<()> {
if self.muting {
self.set_volume(self.audio_volume()?)?;
}
self.muting = false;
Ok(())
}
fn update_playlist(&mut self, playlist: Vec<PathBuf>) -> Result<()> {
debug!("Start updating playlist");
// 1. get current playlist
let old_ids = Self::playlist_ids(self.get_playlist()?)?;
// 2. enqueue all new items
for path in playlist {
debug!("Adding new item to playlist: {}", path.display());
self.send_status_cmd("in_enqueue", &[("input", path.to_str().unwrap())])?;
}
// 3. move to the head of new items
let cur_ids = Self::playlist_ids(self.get_playlist()?)?;
let head_id = cur_ids[old_ids.len()];
debug!("Jumping to playlist ID: {}", head_id);
self.send_status_cmd("pl_play", &[("id", &head_id.to_string())])?;
std::thread::sleep(Duration::from_secs(1));
// 4. Remove old items from playlist (assuming current media won't come up so soon)
for id in old_ids {
debug!("Removing old item from playlist: {}", id);
self.send_status_cmd("pl_delete", &[("id", &id.to_string())])?;
}
debug!("Update playlist complete");
Ok(())
}
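The sequence in `update_playlist` (remember the old IDs, enqueue the new items, jump to the first new one, delete the old ones) can be isolated from the VLC HTTP calls and sketched on a plain `Vec` (this is a sketch of the list manipulation only, assuming at least one new item):

```rust
// Sketch of update_playlist()'s append/jump/prune sequence on a plain Vec:
// 1. remember how many old items exist, 2. append the new items (in_enqueue),
// 3. jump to the first new item (pl_play on its id), 4. delete the old items
// (pl_delete). Panics if `new_items` is empty, as the real code would too.
fn swap_playlist(playlist: &mut Vec<u64>, new_items: &[u64]) -> u64 {
    let old_len = playlist.len();
    playlist.extend_from_slice(new_items); // step 2
    let head_id = playlist[old_len];       // step 3
    playlist.drain(..old_len);             // step 4
    head_id
}

fn main() {
    let mut playlist = vec![10, 11, 12];
    let head = swap_playlist(&mut playlist, &[20, 21]);
    assert_eq!(head, 20);
    assert_eq!(playlist, [20, 21]);
}
```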
fn locked(&self) -> bool {
self.pausing || self.sleeping
}
    fn is_ok(&self) -> bool {
match self.send_status_cmd("", &[]) {
Ok(_) => true,
Err(e) => {
debug!("Got error response while checking health of VLC: {}", e);
false
}
}
}
}
impl<C: HttpClient> Drop for VlcPlayer<C> {
fn drop(&mut self) {
if let Some(mut proc) = self.process.take() {
// Rust's Command doesn't support other than SIGKILL in portable interface
unsafe {
libc::kill(proc.id() as i32, libc::SIGTERM);
}
match proc.wait() {
Ok(status) => debug!("VLC process exit with {}", status.code().unwrap_or(-1)),
Err(e) => warn!("Failed to stop VLC process gracefully: {}", e),
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::cell::{Cell, RefCell};
use std::collections::HashMap;
use std::fs;
use std::io::Write;
use std::os::unix::fs::PermissionsExt;
use tempfile;
impl<F: Fn(&str, &HashMap<&str, &str>) -> std::result::Result<String, VlcError>> HttpClient for F {
fn send_get(
&self,
_port: u32,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
            self(path, &params.into_iter().map(|v| *v).collect())
}
}
fn dummy_bin_player<
C: Fn(&str, &HashMap<&str, &str>) -> std::result::Result<String, VlcError>,
>(
client: C,
) -> (tempfile::NamedTempFile, VlcPlayer<C>) {
let mut dummy_bin = tempfile::NamedTempFile::new().unwrap();
let file = dummy_bin.as_file_mut();
writeln!(file, "#!/bin/sh").unwrap();
writeln!(file, "sleep 60").unwrap();
file.flush().unwrap();
let mut perm = file.metadata().expect("metadata").permissions();
perm.set_mode(0o775);
fs::set_permissions(&dummy_bin, perm).unwrap();
let player = VlcPlayer::new_with_client(
VlcConfig {
vlc_bin: Some(dummy_bin.path().to_str().unwrap().to_string()),
..VlcConfig::default()
},
client,
);
(dummy_bin, player)
}
#[test]
fn test_is_ok() {
let shutdown = Cell::new(false);
let (_dummy_bin, mut player) = dummy_bin_player(|_, _| {
if shutdown.get() {
Err(VlcError::BadResponse(format_err!("")))
} else {
Ok("".to_string())
}
});
player.start(SlideshowConfig::default()).unwrap();
// Player health's good while it's running
assert!(player.is_ok());
// Now process exits and health should not be okay
shutdown.set(true);
assert!(!player.is_ok());
}
#[test]
fn test_pause() {
let req = RefCell::new(None);
let (_dummy_bin, mut player) = dummy_bin_player(|_, p| {
req.borrow_mut()
.replace(p.get("command").unwrap_or(&"").to_string());
Ok("".to_string())
});
player.start(SlideshowConfig::default()).unwrap();
player.pause().unwrap();
assert_eq!(Some("pl_pause".to_string()), req.borrow_mut().take());
// Calling pause twice should be no-op
player.pause().unwrap();
assert_eq!(None, req.borrow_mut().take());
player.resume().unwrap();
assert_eq!(Some("pl_play".to_string()), req.borrow_mut().take());
player.resume().unwrap();
assert_eq!(None, req.borrow_mut().take());
        // Do not send pause again if it's already sleeping
player.sleep().unwrap();
req.borrow_mut().take();
player.pause().unwrap();
assert_eq!(None, req.borrow_mut().take());
// Resume can ignore sleep
player.resume().unwrap();
assert_eq!(Some("pl_play".to_string()), req.borrow_mut().take());
// Resume should reset sleep flag
player.sleep().unwrap();
assert_eq!(Some("pl_pause".to_string()), req.borrow_mut().take());
}
#[test]
fn test_sleep() {
let req = RefCell::new(None);
let (_dummy_bin, mut player) = dummy_bin_player(|_, p| {
req.borrow_mut()
.replace(p.get("command").unwrap_or(&"").to_string());
Ok("".to_string())
});
player.start(SlideshowConfig::default()).unwrap();
player.sleep().unwrap();
assert_eq!(Some("pl_pause".to_string()), req.borrow_mut().take());
// Calling sleep twice should be no-op
player.sleep().unwrap();
assert_eq!(None, req.borrow_mut().take());
player.wakeup().unwrap();
assert_eq!(Some("pl_play".to_string()), req.borrow_mut().take());
// Calling wakeup twice should be no-op
player.wakeup().unwrap();
assert_eq!(None, req.borrow_mut().take());
        // Do not send pause again if it's already pausing
player.pause().unwrap();
req.borrow_mut().take();
player.sleep().unwrap();
assert_eq!(None, req.borrow_mut().take());
// Wakeup should not resume if it's pausing
player.wakeup().unwrap();
assert_eq!(None, req.borrow_mut().take());
}
}
player_vlc.rs | use crate::player::{Player, Result, SlideshowConfig};
use elementtree::Element;
use failure::{format_err, Fail};
use libc;
use log::{debug, info, warn};
use reqwest;
use std::path::PathBuf;
use std::process::Child;
use std::process::Command;
use std::time::Duration;
use std::time::Instant;
use url::Url;
const VLC_VOLUME_MAX: u32 = 512;
const VLC_HTTP_PASSWORD: &str = "cherry";
const VLC_HTTP_HOST: &str = "localhost";
const VLC_STARTUP_TIMEOUT: Duration = Duration::from_secs(10);
const VLC_STARTUP_CHECK_BACKOFF: Duration = Duration::from_millis(500);
const VLC_DEFAULT_BIN: &str = "vlc";
const VLC_DEFAULT_HTTP_PORT: u32 = 9843;
const VLC_REQUEST_TIMEOUT: u64 = 30;
#[derive(Debug, Fail)]
pub enum VlcError {
#[fail(display = "Player not started")]
NotStarted,
#[fail(display = "Timed out in waiting player to start")]
StartTimeout,
#[fail(display = "Failed to send request to player: {}", _0)]
BadResponse(#[fail(cause)] failure::Error),
}
impl From<reqwest::Error> for VlcError {
fn from(e: reqwest::Error) -> Self {
VlcError::BadResponse(e.into())
}
}
impl From<elementtree::Error> for VlcError {
fn from(e: elementtree::Error) -> Self {
VlcError::BadResponse(e.into())
}
}
pub struct VlcConfig {
pub vlc_bin: Option<String>,
pub http_port: Option<u32>,
}
impl Default for VlcConfig {
fn default() -> Self {
VlcConfig {
vlc_bin: None,
http_port: None,
}
}
}
pub trait HttpClient {
fn send_get(
&self,
port: u32,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError>;
}
pub struct ReqwestClient(reqwest::Client);
impl HttpClient for ReqwestClient {
fn send_get(
&self,
port: u32,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
let url = Url::parse_with_params(
&format!("http://{}:{}/{}", VLC_HTTP_HOST, port, path),
params,
)
.expect("parse vlc url");
debug!("Sending GET to {}", url);
let mut resp = self
.0
.get(url.as_ref())
.basic_auth("", Some(VLC_HTTP_PASSWORD))
.send()?;
        if !resp.status().is_success() {
return Err(VlcError::BadResponse(format_err!(
"Bad HTTP status {} : {}",
resp.status(),
resp.text().unwrap_or_else(|_| "N/A".to_string())
)));
}
Ok(resp.text()?)
}
}
pub struct VlcPlayer<C: HttpClient = ReqwestClient> {
vlc_config: VlcConfig,
config: Option<SlideshowConfig>,
process: Option<Child>,
client: C,
pausing: bool,
sleeping: bool,
muting: bool,
}
impl VlcPlayer {
pub fn new(config: VlcConfig) -> Self {
Self::new_with_client(
config,
ReqwestClient(
reqwest::Client::builder()
.timeout(Some(Duration::from_secs(VLC_REQUEST_TIMEOUT)))
.build()
.expect("reqwest client"),
),
)
}
}
impl<C: HttpClient> VlcPlayer<C> {
fn new_with_client(config: VlcConfig, client: C) -> Self {
Self {
vlc_config: config,
config: None,
process: None,
client,
pausing: false,
sleeping: false,
muting: false,
}
}
fn config(&self) -> std::result::Result<&SlideshowConfig, VlcError> {
self.config.as_ref().ok_or(VlcError::NotStarted)
}
/// Convert `audio_volume` set in config into the value
/// range used in VLC player
fn audio_volume(&self) -> std::result::Result<u32, VlcError> {
Ok((VLC_VOLUME_MAX as f32 * self.config()?.audio_volume).round() as u32)
}
fn http_port(&self) -> u32 {
self.vlc_config.http_port.unwrap_or(VLC_DEFAULT_HTTP_PORT)
}
fn send_get(
&self,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
self.client.send_get(self.http_port(), path, params)
}
fn send_status_cmd(
&self,
cmd: &str,
args: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
let mut params = Vec::with_capacity(args.len() + 1);
        if !cmd.is_empty() {
params.push(("command", cmd));
params.extend(args);
}
        self.send_get("requests/status.xml", &params)
}
fn get_playlist(&self) -> std::result::Result<Element, VlcError> {
let xml = self.send_get("requests/playlist.xml", &[])?;
debug!("Playlist XML from VLC: {}", xml);
let element = Element::from_reader(xml.into_bytes().as_slice())?;
Ok(element)
}
fn wait_on_http_interface(&self) -> std::result::Result<(), VlcError> {
let start_time = Instant::now();
while Instant::now() - start_time < VLC_STARTUP_TIMEOUT {
if self.is_ok() {
return Ok(());
}
std::thread::sleep(VLC_STARTUP_CHECK_BACKOFF);
}
Err(VlcError::StartTimeout)
}
fn set_volume(&self, volume: u32) -> std::result::Result<(), VlcError> {
info!("Setting audio volume to {}", volume);
self.send_status_cmd("volume", &[("val", &volume.to_string())])?;
Ok(())
}
fn playlist_ids(element: Element) -> std::result::Result<Vec<u64>, VlcError> {
for node in element.find_all("node") {
if node
.get_attr("name")
.map(|name| name == "Playlist")
.unwrap_or(false)
{
let mut ids = Vec::new();
for leaf in node.find_all("leaf") {
let id_s = leaf.get_attr("id").ok_or_else(|| {
VlcError::BadResponse(format_err!("missing id attribute"))
})?;
let id: u64 = id_s.parse().map_err(|_| {
VlcError::BadResponse(format_err!("cannot parse id: {}", id_s))
})?;
ids.push(id);
}
return Ok(ids);
}
}
Err(VlcError::BadResponse(format_err!(
"no playlist found in XML"
)))
}
fn maybe_restore_pause(&self) -> std::result::Result<(), VlcError> {
// Moving resets the pausing state
if self.locked() {
            // Pausing before play starts causes a black screen
std::thread::sleep(Duration::from_secs(1));
self.send_status_cmd("pl_pause", &[])?;
}
Ok(())
}
fn maybe_pause(&self) -> std::result::Result<(), VlcError> {
        if !self.pausing && !self.sleeping {
self.send_status_cmd("pl_pause", &[])?;
}
Ok(())
}
fn maybe_resume(&mut self, resume: bool) -> std::result::Result<(), VlcError> {
        if (self.pausing && resume) || (self.sleeping && !self.pausing) {
self.send_status_cmd("pl_play", &[])?;
self.pausing = false;
self.sleeping = false;
}
Ok(())
}
}
impl<C: HttpClient> Player for VlcPlayer<C> {
fn start(&mut self, config: SlideshowConfig) -> Result<()> {
let vlc_bin = self
.vlc_config
.vlc_bin
.as_ref()
.map(|s| s.as_ref())
.unwrap_or(VLC_DEFAULT_BIN);
let mut cmd = Command::new(vlc_bin);
cmd.arg("--loop")
.arg("--no-video-title-show")
            // Don't show the popup asking whether to fetch media metadata over the network
.arg("--no-qt-privacy-ask")
.arg("--no-qt-video-autoresize")
// https://wiki.videolan.org/index.php/VLC_command-line_help
.args(&[
"--image-duration",
&config.show_duration.as_secs().to_string(),
])
.args(&["--extraintf", "http"])
.args(&["--http-password", VLC_HTTP_PASSWORD])
.args(&["--http-host", VLC_HTTP_HOST])
.args(&["--http-port", &self.http_port().to_string()]);
if config.fullscreen {
cmd.arg("--fullscreen");
}
self.process = Some(cmd.spawn()?);
self.wait_on_http_interface()?;
self.config = Some(config);
self.set_volume(self.audio_volume()?)?;
Ok(())
}
fn play_next(&mut self) -> Result<()> {
self.send_status_cmd("pl_next", &[])?;
self.maybe_restore_pause()?;
Ok(())
}
fn play_back(&mut self) -> Result<()> {
self.send_status_cmd("pl_previous", &[])?;
self.maybe_restore_pause()?;
Ok(())
}
fn sleep(&mut self) -> Result<()> {
self.maybe_pause()?;
self.sleeping = true;
Ok(())
}
fn wakeup(&mut self) -> Result<()> {
self.maybe_resume(false)?;
Ok(())
}
fn pause(&mut self) -> Result<()> {
self.maybe_pause()?;
self.pausing = true;
Ok(())
}
fn resume(&mut self) -> Result<()> {
self.maybe_resume(true)?;
Ok(())
}
fn mute(&mut self) -> Result<()> {
        if !self.muting {
self.set_volume(0)?;
}
self.muting = true;
Ok(())
}
fn unmute(&mut self) -> Result<()> {
if self.muting {
self.set_volume(self.audio_volume()?)?;
}
self.muting = false;
Ok(())
}
fn update_playlist(&mut self, playlist: Vec<PathBuf>) -> Result<()> {
debug!("Start updating playlist");
// 1. get current playlist
let old_ids = Self::playlist_ids(self.get_playlist()?)?;
// 2. enqueue all new items
for path in playlist {
debug!("Adding new item to playlist: {}", path.display());
self.send_status_cmd("in_enqueue", &[("input", path.to_str().unwrap())])?;
}
// 3. move to the head of new items
let cur_ids = Self::playlist_ids(self.get_playlist()?)?;
let head_id = cur_ids[old_ids.len()];
debug!("Jumping to playlist ID: {}", head_id);
self.send_status_cmd("pl_play", &[("id", &head_id.to_string())])?;
std::thread::sleep(Duration::from_secs(1));
// 4. Remove old items from playlist (assuming current media won't come up so soon)
for id in old_ids {
debug!("Removing old item from playlist: {}", id);
self.send_status_cmd("pl_delete", &[("id", &id.to_string())])?;
}
debug!("Update playlist complete");
Ok(())
}
fn locked(&self) -> bool {
self.pausing || self.sleeping
}
fn is_ok(&self) -> bool |
}
impl<C: HttpClient> Drop for VlcPlayer<C> {
fn drop(&mut self) {
if let Some(mut proc) = self.process.take() {
            // Rust's Command doesn't support sending signals other than SIGKILL in its portable interface
unsafe {
libc::kill(proc.id() as i32, libc::SIGTERM);
}
match proc.wait() {
                Ok(status) => debug!("VLC process exited with {}", status.code().unwrap_or(-1)),
Err(e) => warn!("Failed to stop VLC process gracefully: {}", e),
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::cell::{Cell, RefCell};
use std::collections::HashMap;
use std::fs;
use std::io::Write;
use std::os::unix::fs::PermissionsExt;
use tempfile;
impl<F: Fn(&str, &HashMap<&str, &str>) -> std::result::Result<String, VlcError>> HttpClient for F {
fn send_get(
&self,
_port: u32,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
            self(path, &params.into_iter().map(|v| *v).collect())
}
}
fn dummy_bin_player<
C: Fn(&str, &HashMap<&str, &str>) -> std::result::Result<String, VlcError>,
>(
client: C,
) -> (tempfile::NamedTempFile, VlcPlayer<C>) {
let mut dummy_bin = tempfile::NamedTempFile::new().unwrap();
let file = dummy_bin.as_file_mut();
writeln!(file, "#!/bin/sh").unwrap();
writeln!(file, "sleep 60").unwrap();
file.flush().unwrap();
let mut perm = file.metadata().expect("metadata").permissions();
perm.set_mode(0o775);
fs::set_permissions(&dummy_bin, perm).unwrap();
let player = VlcPlayer::new_with_client(
VlcConfig {
vlc_bin: Some(dummy_bin.path().to_str().unwrap().to_string()),
..VlcConfig::default()
},
client,
);
(dummy_bin, player)
}
#[test]
fn test_is_ok() {
let shutdown = Cell::new(false);
let (_dummy_bin, mut player) = dummy_bin_player(|_, _| {
if shutdown.get() {
Err(VlcError::BadResponse(format_err!("")))
} else {
Ok("".to_string())
}
});
player.start(SlideshowConfig::default()).unwrap();
// Player health's good while it's running
assert!(player.is_ok());
// Now process exits and health should not be okay
shutdown.set(true);
assert!(!player.is_ok());
}
#[test]
fn test_pause() {
let req = RefCell::new(None);
let (_dummy_bin, mut player) = dummy_bin_player(|_, p| {
req.borrow_mut()
.replace(p.get("command").unwrap_or(&"").to_string());
Ok("".to_string())
});
player.start(SlideshowConfig::default()).unwrap();
player.pause().unwrap();
assert_eq!(Some("pl_pause".to_string()), req.borrow_mut().take());
// Calling pause twice should be no-op
player.pause().unwrap();
assert_eq!(None, req.borrow_mut().take());
player.resume().unwrap();
assert_eq!(Some("pl_play".to_string()), req.borrow_mut().take());
player.resume().unwrap();
assert_eq!(None, req.borrow_mut().take());
        // Do not send pause again if it's already sleeping
player.sleep().unwrap();
req.borrow_mut().take();
player.pause().unwrap();
assert_eq!(None, req.borrow_mut().take());
// Resume can ignore sleep
player.resume().unwrap();
assert_eq!(Some("pl_play".to_string()), req.borrow_mut().take());
// Resume should reset sleep flag
player.sleep().unwrap();
assert_eq!(Some("pl_pause".to_string()), req.borrow_mut().take());
}
#[test]
fn test_sleep() {
let req = RefCell::new(None);
let (_dummy_bin, mut player) = dummy_bin_player(|_, p| {
req.borrow_mut()
.replace(p.get("command").unwrap_or(&"").to_string());
Ok("".to_string())
});
player.start(SlideshowConfig::default()).unwrap();
player.sleep().unwrap();
assert_eq!(Some("pl_pause".to_string()), req.borrow_mut().take());
// Calling sleep twice should be no-op
player.sleep().unwrap();
assert_eq!(None, req.borrow_mut().take());
player.wakeup().unwrap();
assert_eq!(Some("pl_play".to_string()), req.borrow_mut().take());
// Calling wakeup twice should be no-op
player.wakeup().unwrap();
assert_eq!(None, req.borrow_mut().take());
        // Do not send pause again if it's already sleeping
player.pause().unwrap();
req.borrow_mut().take();
player.sleep().unwrap();
assert_eq!(None, req.borrow_mut().take());
// Wakeup should not resume if it's pausing
player.wakeup().unwrap();
assert_eq!(None, req.borrow_mut().take());
}
}
| {
match self.send_status_cmd("", &[]) {
Ok(_) => true,
Err(e) => {
debug!("Got error response while checking health of VLC: {}", e);
false
}
}
} | identifier_body |
player_vlc.rs | use crate::player::{Player, Result, SlideshowConfig};
use elementtree::Element;
use failure::{format_err, Fail};
use libc;
use log::{debug, info, warn};
use reqwest;
use std::path::PathBuf;
use std::process::Child;
use std::process::Command;
use std::time::Duration;
use std::time::Instant;
use url::Url;
const VLC_VOLUME_MAX: u32 = 512;
const VLC_HTTP_PASSWORD: &str = "cherry";
const VLC_HTTP_HOST: &str = "localhost";
const VLC_STARTUP_TIMEOUT: Duration = Duration::from_secs(10);
const VLC_STARTUP_CHECK_BACKOFF: Duration = Duration::from_millis(500);
const VLC_DEFAULT_BIN: &str = "vlc";
const VLC_DEFAULT_HTTP_PORT: u32 = 9843;
const VLC_REQUEST_TIMEOUT: u64 = 30;
#[derive(Debug, Fail)]
pub enum VlcError {
#[fail(display = "Player not started")]
NotStarted,
    #[fail(display = "Timed out waiting for the player to start")]
StartTimeout,
#[fail(display = "Failed to send request to player: {}", _0)]
BadResponse(#[fail(cause)] failure::Error),
}
impl From<reqwest::Error> for VlcError {
fn from(e: reqwest::Error) -> Self {
VlcError::BadResponse(e.into())
}
}
impl From<elementtree::Error> for VlcError {
fn from(e: elementtree::Error) -> Self {
VlcError::BadResponse(e.into())
}
}
pub struct VlcConfig {
pub vlc_bin: Option<String>,
pub http_port: Option<u32>,
}
impl Default for VlcConfig {
fn default() -> Self {
VlcConfig {
vlc_bin: None,
http_port: None,
}
}
}
pub trait HttpClient {
fn send_get(
&self,
port: u32,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError>;
}
pub struct ReqwestClient(reqwest::Client);
impl HttpClient for ReqwestClient {
fn send_get(
&self,
port: u32,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
let url = Url::parse_with_params(
&format!("http://{}:{}/{}", VLC_HTTP_HOST, port, path),
params,
)
.expect("parse vlc url");
debug!("Sending GET to {}", url);
let mut resp = self
.0
.get(url.as_ref())
.basic_auth("", Some(VLC_HTTP_PASSWORD))
.send()?;
        if !resp.status().is_success() {
return Err(VlcError::BadResponse(format_err!(
"Bad HTTP status {} : {}",
resp.status(),
resp.text().unwrap_or_else(|_| "N/A".to_string())
)));
}
Ok(resp.text()?)
}
}
pub struct VlcPlayer<C: HttpClient = ReqwestClient> {
vlc_config: VlcConfig,
config: Option<SlideshowConfig>,
process: Option<Child>,
client: C,
pausing: bool,
sleeping: bool,
muting: bool,
}
impl VlcPlayer {
pub fn new(config: VlcConfig) -> Self {
Self::new_with_client(
config,
ReqwestClient(
reqwest::Client::builder()
.timeout(Some(Duration::from_secs(VLC_REQUEST_TIMEOUT)))
.build()
.expect("reqwest client"),
),
)
}
}
impl<C: HttpClient> VlcPlayer<C> {
fn new_with_client(config: VlcConfig, client: C) -> Self {
Self {
vlc_config: config,
config: None,
process: None,
client,
pausing: false,
sleeping: false,
muting: false,
}
}
fn config(&self) -> std::result::Result<&SlideshowConfig, VlcError> {
self.config.as_ref().ok_or(VlcError::NotStarted)
}
    /// Convert the `audio_volume` value from the config into the
    /// volume range used by the VLC player
fn audio_volume(&self) -> std::result::Result<u32, VlcError> {
Ok((VLC_VOLUME_MAX as f32 * self.config()?.audio_volume).round() as u32)
}
fn http_port(&self) -> u32 {
self.vlc_config.http_port.unwrap_or(VLC_DEFAULT_HTTP_PORT)
}
fn send_get(
&self,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
self.client.send_get(self.http_port(), path, params)
}
fn send_status_cmd(
&self,
cmd: &str,
args: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
let mut params = Vec::with_capacity(args.len() + 1);
        if !cmd.is_empty() {
params.push(("command", cmd));
params.extend(args);
}
        self.send_get("requests/status.xml", &params)
}
fn get_playlist(&self) -> std::result::Result<Element, VlcError> {
let xml = self.send_get("requests/playlist.xml", &[])?;
debug!("Playlist XML from VLC: {}", xml);
let element = Element::from_reader(xml.into_bytes().as_slice())?;
Ok(element)
}
fn wait_on_http_interface(&self) -> std::result::Result<(), VlcError> {
let start_time = Instant::now();
while Instant::now() - start_time < VLC_STARTUP_TIMEOUT {
if self.is_ok() {
return Ok(());
}
std::thread::sleep(VLC_STARTUP_CHECK_BACKOFF);
}
Err(VlcError::StartTimeout)
}
fn set_volume(&self, volume: u32) -> std::result::Result<(), VlcError> {
info!("Setting audio volume to {}", volume);
self.send_status_cmd("volume", &[("val", &volume.to_string())])?;
Ok(())
}
fn playlist_ids(element: Element) -> std::result::Result<Vec<u64>, VlcError> {
for node in element.find_all("node") {
if node
.get_attr("name")
.map(|name| name == "Playlist")
.unwrap_or(false)
{
let mut ids = Vec::new();
for leaf in node.find_all("leaf") {
let id_s = leaf.get_attr("id").ok_or_else(|| {
VlcError::BadResponse(format_err!("missing id attribute"))
})?;
let id: u64 = id_s.parse().map_err(|_| {
VlcError::BadResponse(format_err!("cannot parse id: {}", id_s))
})?;
ids.push(id);
}
return Ok(ids);
}
}
Err(VlcError::BadResponse(format_err!(
"no playlist found in XML"
)))
}
fn maybe_restore_pause(&self) -> std::result::Result<(), VlcError> {
// Moving resets the pausing state
if self.locked() {
            // Pausing before play starts causes a black screen
std::thread::sleep(Duration::from_secs(1));
self.send_status_cmd("pl_pause", &[])?;
}
Ok(())
}
fn maybe_pause(&self) -> std::result::Result<(), VlcError> {
        if !self.pausing && !self.sleeping {
self.send_status_cmd("pl_pause", &[])?;
}
Ok(())
}
fn maybe_resume(&mut self, resume: bool) -> std::result::Result<(), VlcError> {
        if (self.pausing && resume) || (self.sleeping && !self.pausing) {
self.send_status_cmd("pl_play", &[])?;
self.pausing = false;
self.sleeping = false;
}
Ok(())
}
}
impl<C: HttpClient> Player for VlcPlayer<C> {
fn start(&mut self, config: SlideshowConfig) -> Result<()> {
let vlc_bin = self
.vlc_config
.vlc_bin
.as_ref()
.map(|s| s.as_ref())
.unwrap_or(VLC_DEFAULT_BIN);
let mut cmd = Command::new(vlc_bin);
cmd.arg("--loop")
.arg("--no-video-title-show")
            // Don't show the popup asking whether to fetch media metadata over the network
.arg("--no-qt-privacy-ask")
.arg("--no-qt-video-autoresize")
// https://wiki.videolan.org/index.php/VLC_command-line_help
.args(&[
"--image-duration",
&config.show_duration.as_secs().to_string(),
])
.args(&["--extraintf", "http"])
.args(&["--http-password", VLC_HTTP_PASSWORD])
.args(&["--http-host", VLC_HTTP_HOST])
.args(&["--http-port", &self.http_port().to_string()]);
if config.fullscreen {
cmd.arg("--fullscreen");
}
self.process = Some(cmd.spawn()?);
self.wait_on_http_interface()?;
self.config = Some(config);
self.set_volume(self.audio_volume()?)?;
Ok(())
}
fn play_next(&mut self) -> Result<()> {
self.send_status_cmd("pl_next", &[])?;
self.maybe_restore_pause()?;
Ok(())
}
fn play_back(&mut self) -> Result<()> {
self.send_status_cmd("pl_previous", &[])?;
self.maybe_restore_pause()?;
Ok(())
}
fn sleep(&mut self) -> Result<()> {
self.maybe_pause()?;
self.sleeping = true;
Ok(())
}
fn wakeup(&mut self) -> Result<()> {
self.maybe_resume(false)?;
Ok(())
}
fn pause(&mut self) -> Result<()> {
self.maybe_pause()?;
self.pausing = true;
Ok(())
}
fn resume(&mut self) -> Result<()> {
self.maybe_resume(true)?;
Ok(())
}
fn mute(&mut self) -> Result<()> {
        if !self.muting {
self.set_volume(0)?;
}
self.muting = true;
Ok(())
}
fn unmute(&mut self) -> Result<()> {
if self.muting {
self.set_volume(self.audio_volume()?)?;
}
self.muting = false;
Ok(())
}
fn update_playlist(&mut self, playlist: Vec<PathBuf>) -> Result<()> {
debug!("Start updating playlist");
// 1. get current playlist
let old_ids = Self::playlist_ids(self.get_playlist()?)?;
// 2. enqueue all new items
for path in playlist {
debug!("Adding new item to playlist: {}", path.display());
self.send_status_cmd("in_enqueue", &[("input", path.to_str().unwrap())])?;
}
// 3. move to the head of new items
let cur_ids = Self::playlist_ids(self.get_playlist()?)?;
let head_id = cur_ids[old_ids.len()];
debug!("Jumping to playlist ID: {}", head_id);
self.send_status_cmd("pl_play", &[("id", &head_id.to_string())])?;
std::thread::sleep(Duration::from_secs(1));
// 4. Remove old items from playlist (assuming current media won't come up so soon)
for id in old_ids {
debug!("Removing old item from playlist: {}", id);
self.send_status_cmd("pl_delete", &[("id", &id.to_string())])?;
}
debug!("Update playlist complete");
Ok(())
}
fn locked(&self) -> bool {
self.pausing || self.sleeping
}
fn is_ok(&self) -> bool {
match self.send_status_cmd("", &[]) {
Ok(_) => true,
Err(e) => {
debug!("Got error response while checking health of VLC: {}", e);
false
}
}
}
}
impl<C: HttpClient> Drop for VlcPlayer<C> {
fn drop(&mut self) {
if let Some(mut proc) = self.process.take() {
            // Rust's Command doesn't support sending signals other than SIGKILL in its portable interface
unsafe {
libc::kill(proc.id() as i32, libc::SIGTERM);
}
match proc.wait() {
                Ok(status) => debug!("VLC process exited with {}", status.code().unwrap_or(-1)),
Err(e) => warn!("Failed to stop VLC process gracefully: {}", e),
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::cell::{Cell, RefCell};
use std::collections::HashMap;
use std::fs;
use std::io::Write;
use std::os::unix::fs::PermissionsExt;
use tempfile;
impl<F: Fn(&str, &HashMap<&str, &str>) -> std::result::Result<String, VlcError>> HttpClient for F {
fn send_get(
&self,
_port: u32,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
            self(path, &params.into_iter().map(|v| *v).collect())
}
}
| fn dummy_bin_player<
C: Fn(&str, &HashMap<&str, &str>) -> std::result::Result<String, VlcError>,
>(
client: C,
) -> (tempfile::NamedTempFile, VlcPlayer<C>) {
let mut dummy_bin = tempfile::NamedTempFile::new().unwrap();
let file = dummy_bin.as_file_mut();
writeln!(file, "#!/bin/sh").unwrap();
writeln!(file, "sleep 60").unwrap();
file.flush().unwrap();
let mut perm = file.metadata().expect("metadata").permissions();
perm.set_mode(0o775);
fs::set_permissions(&dummy_bin, perm).unwrap();
let player = VlcPlayer::new_with_client(
VlcConfig {
vlc_bin: Some(dummy_bin.path().to_str().unwrap().to_string()),
..VlcConfig::default()
},
client,
);
(dummy_bin, player)
}
#[test]
fn test_is_ok() {
let shutdown = Cell::new(false);
let (_dummy_bin, mut player) = dummy_bin_player(|_, _| {
if shutdown.get() {
Err(VlcError::BadResponse(format_err!("")))
} else {
Ok("".to_string())
}
});
player.start(SlideshowConfig::default()).unwrap();
// Player health's good while it's running
assert!(player.is_ok());
// Now process exits and health should not be okay
shutdown.set(true);
assert!(!player.is_ok());
}
#[test]
fn test_pause() {
let req = RefCell::new(None);
let (_dummy_bin, mut player) = dummy_bin_player(|_, p| {
req.borrow_mut()
.replace(p.get("command").unwrap_or(&"").to_string());
Ok("".to_string())
});
player.start(SlideshowConfig::default()).unwrap();
player.pause().unwrap();
assert_eq!(Some("pl_pause".to_string()), req.borrow_mut().take());
// Calling pause twice should be no-op
player.pause().unwrap();
assert_eq!(None, req.borrow_mut().take());
player.resume().unwrap();
assert_eq!(Some("pl_play".to_string()), req.borrow_mut().take());
player.resume().unwrap();
assert_eq!(None, req.borrow_mut().take());
        // Do not send pause again if it's already sleeping
player.sleep().unwrap();
req.borrow_mut().take();
player.pause().unwrap();
assert_eq!(None, req.borrow_mut().take());
// Resume can ignore sleep
player.resume().unwrap();
assert_eq!(Some("pl_play".to_string()), req.borrow_mut().take());
// Resume should reset sleep flag
player.sleep().unwrap();
assert_eq!(Some("pl_pause".to_string()), req.borrow_mut().take());
}
#[test]
fn test_sleep() {
let req = RefCell::new(None);
let (_dummy_bin, mut player) = dummy_bin_player(|_, p| {
req.borrow_mut()
.replace(p.get("command").unwrap_or(&"").to_string());
Ok("".to_string())
});
player.start(SlideshowConfig::default()).unwrap();
player.sleep().unwrap();
assert_eq!(Some("pl_pause".to_string()), req.borrow_mut().take());
// Calling sleep twice should be no-op
player.sleep().unwrap();
assert_eq!(None, req.borrow_mut().take());
player.wakeup().unwrap();
assert_eq!(Some("pl_play".to_string()), req.borrow_mut().take());
// Calling wakeup twice should be no-op
player.wakeup().unwrap();
assert_eq!(None, req.borrow_mut().take());
        // Do not send pause again if it's already sleeping
player.pause().unwrap();
req.borrow_mut().take();
player.sleep().unwrap();
assert_eq!(None, req.borrow_mut().take());
// Wakeup should not resume if it's pausing
player.wakeup().unwrap();
assert_eq!(None, req.borrow_mut().take());
}
} | random_line_split |
|
player_vlc.rs | use crate::player::{Player, Result, SlideshowConfig};
use elementtree::Element;
use failure::{format_err, Fail};
use libc;
use log::{debug, info, warn};
use reqwest;
use std::path::PathBuf;
use std::process::Child;
use std::process::Command;
use std::time::Duration;
use std::time::Instant;
use url::Url;
const VLC_VOLUME_MAX: u32 = 512;
const VLC_HTTP_PASSWORD: &str = "cherry";
const VLC_HTTP_HOST: &str = "localhost";
const VLC_STARTUP_TIMEOUT: Duration = Duration::from_secs(10);
const VLC_STARTUP_CHECK_BACKOFF: Duration = Duration::from_millis(500);
const VLC_DEFAULT_BIN: &str = "vlc";
const VLC_DEFAULT_HTTP_PORT: u32 = 9843;
const VLC_REQUEST_TIMEOUT: u64 = 30;
#[derive(Debug, Fail)]
pub enum VlcError {
#[fail(display = "Player not started")]
NotStarted,
    #[fail(display = "Timed out waiting for the player to start")]
StartTimeout,
#[fail(display = "Failed to send request to player: {}", _0)]
BadResponse(#[fail(cause)] failure::Error),
}
impl From<reqwest::Error> for VlcError {
fn from(e: reqwest::Error) -> Self {
VlcError::BadResponse(e.into())
}
}
impl From<elementtree::Error> for VlcError {
fn from(e: elementtree::Error) -> Self {
VlcError::BadResponse(e.into())
}
}
pub struct VlcConfig {
pub vlc_bin: Option<String>,
pub http_port: Option<u32>,
}
impl Default for VlcConfig {
fn default() -> Self {
VlcConfig {
vlc_bin: None,
http_port: None,
}
}
}
pub trait HttpClient {
fn send_get(
&self,
port: u32,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError>;
}
pub struct ReqwestClient(reqwest::Client);
impl HttpClient for ReqwestClient {
fn send_get(
&self,
port: u32,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
let url = Url::parse_with_params(
&format!("http://{}:{}/{}", VLC_HTTP_HOST, port, path),
params,
)
.expect("parse vlc url");
debug!("Sending GET to {}", url);
let mut resp = self
.0
.get(url.as_ref())
.basic_auth("", Some(VLC_HTTP_PASSWORD))
.send()?;
        if !resp.status().is_success() {
return Err(VlcError::BadResponse(format_err!(
"Bad HTTP status {} : {}",
resp.status(),
resp.text().unwrap_or_else(|_| "N/A".to_string())
)));
}
Ok(resp.text()?)
}
}
pub struct VlcPlayer<C: HttpClient = ReqwestClient> {
vlc_config: VlcConfig,
config: Option<SlideshowConfig>,
process: Option<Child>,
client: C,
pausing: bool,
sleeping: bool,
muting: bool,
}
impl VlcPlayer {
pub fn new(config: VlcConfig) -> Self {
Self::new_with_client(
config,
ReqwestClient(
reqwest::Client::builder()
.timeout(Some(Duration::from_secs(VLC_REQUEST_TIMEOUT)))
.build()
.expect("reqwest client"),
),
)
}
}
impl<C: HttpClient> VlcPlayer<C> {
fn new_with_client(config: VlcConfig, client: C) -> Self {
Self {
vlc_config: config,
config: None,
process: None,
client,
pausing: false,
sleeping: false,
muting: false,
}
}
fn config(&self) -> std::result::Result<&SlideshowConfig, VlcError> {
self.config.as_ref().ok_or(VlcError::NotStarted)
}
    /// Convert the `audio_volume` value from the config into the
    /// volume range used by the VLC player
fn audio_volume(&self) -> std::result::Result<u32, VlcError> {
Ok((VLC_VOLUME_MAX as f32 * self.config()?.audio_volume).round() as u32)
}
fn http_port(&self) -> u32 {
self.vlc_config.http_port.unwrap_or(VLC_DEFAULT_HTTP_PORT)
}
fn send_get(
&self,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
self.client.send_get(self.http_port(), path, params)
}
fn send_status_cmd(
&self,
cmd: &str,
args: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
let mut params = Vec::with_capacity(args.len() + 1);
        if !cmd.is_empty() {
params.push(("command", cmd));
params.extend(args);
}
        self.send_get("requests/status.xml", &params)
}
fn get_playlist(&self) -> std::result::Result<Element, VlcError> {
let xml = self.send_get("requests/playlist.xml", &[])?;
debug!("Playlist XML from VLC: {}", xml);
let element = Element::from_reader(xml.into_bytes().as_slice())?;
Ok(element)
}
fn wait_on_http_interface(&self) -> std::result::Result<(), VlcError> {
let start_time = Instant::now();
while Instant::now() - start_time < VLC_STARTUP_TIMEOUT {
if self.is_ok() {
return Ok(());
}
std::thread::sleep(VLC_STARTUP_CHECK_BACKOFF);
}
Err(VlcError::StartTimeout)
}
fn set_volume(&self, volume: u32) -> std::result::Result<(), VlcError> {
info!("Setting audio volume to {}", volume);
self.send_status_cmd("volume", &[("val", &volume.to_string())])?;
Ok(())
}
fn playlist_ids(element: Element) -> std::result::Result<Vec<u64>, VlcError> {
for node in element.find_all("node") {
if node
.get_attr("name")
.map(|name| name == "Playlist")
.unwrap_or(false)
{
let mut ids = Vec::new();
for leaf in node.find_all("leaf") {
let id_s = leaf.get_attr("id").ok_or_else(|| {
VlcError::BadResponse(format_err!("missing id attribute"))
})?;
let id: u64 = id_s.parse().map_err(|_| {
VlcError::BadResponse(format_err!("cannot parse id: {}", id_s))
})?;
ids.push(id);
}
return Ok(ids);
}
}
Err(VlcError::BadResponse(format_err!(
"no playlist found in XML"
)))
}
fn maybe_restore_pause(&self) -> std::result::Result<(), VlcError> {
// Moving resets the pausing state
if self.locked() {
            // Pausing before play starts causes a black screen
std::thread::sleep(Duration::from_secs(1));
self.send_status_cmd("pl_pause", &[])?;
}
Ok(())
}
fn maybe_pause(&self) -> std::result::Result<(), VlcError> {
        if !self.pausing && !self.sleeping {
self.send_status_cmd("pl_pause", &[])?;
}
Ok(())
}
fn maybe_resume(&mut self, resume: bool) -> std::result::Result<(), VlcError> {
        if (self.pausing && resume) || (self.sleeping && !self.pausing) {
self.send_status_cmd("pl_play", &[])?;
self.pausing = false;
self.sleeping = false;
}
Ok(())
}
}
impl<C: HttpClient> Player for VlcPlayer<C> {
fn start(&mut self, config: SlideshowConfig) -> Result<()> {
let vlc_bin = self
.vlc_config
.vlc_bin
.as_ref()
.map(|s| s.as_ref())
.unwrap_or(VLC_DEFAULT_BIN);
let mut cmd = Command::new(vlc_bin);
cmd.arg("--loop")
.arg("--no-video-title-show")
            // Don't show the popup asking whether to fetch media metadata over the network
.arg("--no-qt-privacy-ask")
.arg("--no-qt-video-autoresize")
// https://wiki.videolan.org/index.php/VLC_command-line_help
.args(&[
"--image-duration",
&config.show_duration.as_secs().to_string(),
])
.args(&["--extraintf", "http"])
.args(&["--http-password", VLC_HTTP_PASSWORD])
.args(&["--http-host", VLC_HTTP_HOST])
.args(&["--http-port", &self.http_port().to_string()]);
if config.fullscreen {
cmd.arg("--fullscreen");
}
self.process = Some(cmd.spawn()?);
self.wait_on_http_interface()?;
self.config = Some(config);
self.set_volume(self.audio_volume()?)?;
Ok(())
}
fn play_next(&mut self) -> Result<()> {
self.send_status_cmd("pl_next", &[])?;
self.maybe_restore_pause()?;
Ok(())
}
fn play_back(&mut self) -> Result<()> {
self.send_status_cmd("pl_previous", &[])?;
self.maybe_restore_pause()?;
Ok(())
}
fn sleep(&mut self) -> Result<()> {
self.maybe_pause()?;
self.sleeping = true;
Ok(())
}
fn wakeup(&mut self) -> Result<()> {
self.maybe_resume(false)?;
Ok(())
}
fn pause(&mut self) -> Result<()> {
self.maybe_pause()?;
self.pausing = true;
Ok(())
}
fn resume(&mut self) -> Result<()> {
self.maybe_resume(true)?;
Ok(())
}
fn mute(&mut self) -> Result<()> {
        if !self.muting {
self.set_volume(0)?;
}
self.muting = true;
Ok(())
}
fn unmute(&mut self) -> Result<()> {
if self.muting {
self.set_volume(self.audio_volume()?)?;
}
self.muting = false;
Ok(())
}
fn update_playlist(&mut self, playlist: Vec<PathBuf>) -> Result<()> {
debug!("Start updating playlist");
// 1. get current playlist
let old_ids = Self::playlist_ids(self.get_playlist()?)?;
// 2. enqueue all new items
for path in playlist {
debug!("Adding new item to playlist: {}", path.display());
self.send_status_cmd("in_enqueue", &[("input", path.to_str().unwrap())])?;
}
// 3. move to the head of new items
let cur_ids = Self::playlist_ids(self.get_playlist()?)?;
let head_id = cur_ids[old_ids.len()];
debug!("Jumping to playlist ID: {}", head_id);
self.send_status_cmd("pl_play", &[("id", &head_id.to_string())])?;
std::thread::sleep(Duration::from_secs(1));
// 4. Remove old items from playlist (assuming current media won't come up so soon)
for id in old_ids {
debug!("Removing old item from playlist: {}", id);
self.send_status_cmd("pl_delete", &[("id", &id.to_string())])?;
}
debug!("Update playlist complete");
Ok(())
}
fn locked(&self) -> bool {
self.pausing || self.sleeping
}
fn is_ok(&self) -> bool {
match self.send_status_cmd("", &[]) {
Ok(_) => true,
Err(e) => {
debug!("Got error response while checking health of VLC: {}", e);
false
}
}
}
}
impl<C: HttpClient> Drop for VlcPlayer<C> {
fn drop(&mut self) {
if let Some(mut proc) = self.process.take() |
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::cell::{Cell, RefCell};
use std::collections::HashMap;
use std::fs;
use std::io::Write;
use std::os::unix::fs::PermissionsExt;
use tempfile;
impl<F: Fn(&str, &HashMap<&str, &str>) -> std::result::Result<String, VlcError>> HttpClient for F {
fn send_get(
&self,
_port: u32,
path: &str,
params: &[(&str, &str)],
) -> std::result::Result<String, VlcError> {
            self(path, &params.into_iter().map(|v| *v).collect())
}
}
fn dummy_bin_player<
C: Fn(&str, &HashMap<&str, &str>) -> std::result::Result<String, VlcError>,
>(
client: C,
) -> (tempfile::NamedTempFile, VlcPlayer<C>) {
let mut dummy_bin = tempfile::NamedTempFile::new().unwrap();
let file = dummy_bin.as_file_mut();
writeln!(file, "#!/bin/sh").unwrap();
writeln!(file, "sleep 60").unwrap();
file.flush().unwrap();
let mut perm = file.metadata().expect("metadata").permissions();
perm.set_mode(0o775);
fs::set_permissions(&dummy_bin, perm).unwrap();
let player = VlcPlayer::new_with_client(
VlcConfig {
vlc_bin: Some(dummy_bin.path().to_str().unwrap().to_string()),
..VlcConfig::default()
},
client,
);
(dummy_bin, player)
}
#[test]
fn test_is_ok() {
let shutdown = Cell::new(false);
let (_dummy_bin, mut player) = dummy_bin_player(|_, _| {
if shutdown.get() {
Err(VlcError::BadResponse(format_err!("")))
} else {
Ok("".to_string())
}
});
player.start(SlideshowConfig::default()).unwrap();
// Player health's good while it's running
assert!(player.is_ok());
// Now process exits and health should not be okay
shutdown.set(true);
assert!(!player.is_ok());
}
#[test]
fn test_pause() {
let req = RefCell::new(None);
let (_dummy_bin, mut player) = dummy_bin_player(|_, p| {
req.borrow_mut()
.replace(p.get("command").unwrap_or(&"").to_string());
Ok("".to_string())
});
player.start(SlideshowConfig::default()).unwrap();
player.pause().unwrap();
assert_eq!(Some("pl_pause".to_string()), req.borrow_mut().take());
// Calling pause twice should be a no-op
player.pause().unwrap();
assert_eq!(None, req.borrow_mut().take());
player.resume().unwrap();
assert_eq!(Some("pl_play".to_string()), req.borrow_mut().take());
player.resume().unwrap();
assert_eq!(None, req.borrow_mut().take());
// Do not send pause again if it's already sleeping
player.sleep().unwrap();
req.borrow_mut().take();
player.pause().unwrap();
assert_eq!(None, req.borrow_mut().take());
// Resume can ignore sleep
player.resume().unwrap();
assert_eq!(Some("pl_play".to_string()), req.borrow_mut().take());
// Resume should reset sleep flag
player.sleep().unwrap();
assert_eq!(Some("pl_pause".to_string()), req.borrow_mut().take());
}
#[test]
fn test_sleep() {
let req = RefCell::new(None);
let (_dummy_bin, mut player) = dummy_bin_player(|_, p| {
req.borrow_mut()
.replace(p.get("command").unwrap_or(&"").to_string());
Ok("".to_string())
});
player.start(SlideshowConfig::default()).unwrap();
player.sleep().unwrap();
assert_eq!(Some("pl_pause".to_string()), req.borrow_mut().take());
// Calling sleep twice should be a no-op
player.sleep().unwrap();
assert_eq!(None, req.borrow_mut().take());
player.wakeup().unwrap();
assert_eq!(Some("pl_play".to_string()), req.borrow_mut().take());
// Calling wakeup twice should be a no-op
player.wakeup().unwrap();
assert_eq!(None, req.borrow_mut().take());
// Do not send the pause command again if it's already pausing
player.pause().unwrap();
req.borrow_mut().take();
player.sleep().unwrap();
assert_eq!(None, req.borrow_mut().take());
// Wakeup should not resume if it's pausing
player.wakeup().unwrap();
assert_eq!(None, req.borrow_mut().take());
}
}
| {
// Rust's Command supports only SIGKILL through its portable interface
unsafe {
libc::kill(proc.id() as i32, libc::SIGTERM);
}
match proc.wait() {
Ok(status) => debug!("VLC process exited with {}", status.code().unwrap_or(-1)),
Err(e) => warn!("Failed to stop VLC process gracefully: {}", e),
}
} | conditional_block |
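The `Drop` body above sends SIGTERM via `libc::kill` because Rust's portable `Child::kill` can only deliver SIGKILL. A minimal stdlib-only sketch of the same stop-and-reap pattern, using the portable SIGKILL path (it assumes a Unix `sleep` binary is on `PATH`; `stop_child` is an illustrative name, not from the crate):

```rust
use std::io;
use std::process::{Child, Command, ExitStatus};

// Stop a child process and reap it. `Child::kill` sends SIGKILL -- the only
// signal the portable interface exposes -- which is why the Drop impl above
// reaches for `libc::kill(pid, SIGTERM)` instead.
fn stop_child(mut child: Child) -> io::Result<ExitStatus> {
    child.kill()?; // SIGKILL: the process gets no chance to clean up
    child.wait()   // always reap, otherwise the child lingers as a zombie
}

fn main() -> io::Result<()> {
    let child = Command::new("sleep").arg("60").spawn()?;
    let status = stop_child(child)?;
    // A process killed by a signal reports neither success nor an exit code.
    assert!(!status.success());
    println!("child stopped, exit code: {:?}", status.code());
    Ok(())
}
```

Calling `wait` after `kill` matters even in the error path: skipping it is what turns a terminated child into a zombie, which is why the `Drop` impl above logs but otherwise tolerates a failed `wait`.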
ipc.rs | //! IPC Transport for *nix
#[cfg(unix)]
extern crate tokio_uds;
use std::collections::BTreeMap;
use std::io::{self, Read, Write};
use std::path::Path;
use std::sync::{atomic, Arc};
#[cfg(unix)]
use self::tokio_uds::UnixStream;
use crate::api::SubscriptionId;
use crate::helpers;
use crate::rpc;
use crate::transports::shared::{EventLoopHandle, Response};
use crate::transports::tokio_core::reactor;
use crate::transports::tokio_io::io::{ReadHalf, WriteHalf};
use crate::transports::tokio_io::AsyncRead;
use crate::transports::Result;
use crate::{BatchTransport, DuplexTransport, Error, RequestId, Transport};
use futures::sync::{mpsc, oneshot};
use futures::{self, Future, Stream};
use parking_lot::Mutex;
macro_rules! try_nb {
($e:expr) => {
match $e {
Ok(t) => t,
Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => return Ok(futures::Async::NotReady),
Err(e) => {
log::warn!("Unexpected IO error: {:?}", e);
return Err(());
}
}
};
}
type Pending = oneshot::Sender<Result<Vec<Result<rpc::Value>>>>;
type Subscription = mpsc::UnboundedSender<rpc::Value>;
/// A future representing a pending IPC request; resolves to a response.
pub type IpcTask<F> = Response<F, Vec<Result<rpc::Value>>>;
/// Unix Domain Sockets (IPC) transport
#[derive(Debug, Clone)]
pub struct Ipc {
id: Arc<atomic::AtomicUsize>,
pending: Arc<Mutex<BTreeMap<RequestId, Pending>>>,
subscriptions: Arc<Mutex<BTreeMap<SubscriptionId, Subscription>>>,
write_sender: mpsc::UnboundedSender<Vec<u8>>,
}
impl Ipc {
/// Create new IPC transport with separate event loop.
/// NOTE: Dropping event loop handle will stop the transport layer!
///
/// IPC is only available on Unix. On other systems, this always returns an error.
pub fn new<P>(path: P) -> Result<(EventLoopHandle, Self)>
where
P: AsRef<Path>,
{
let path = path.as_ref().to_owned();
EventLoopHandle::spawn(move |handle| Self::with_event_loop(&path, &handle).map_err(Into::into))
}
/// Create new IPC transport within existing Event Loop.
///
/// IPC is only available on Unix. On other systems, this always returns an error.
#[cfg(unix)]
pub fn with_event_loop<P>(path: P, handle: &reactor::Handle) -> Result<Self>
where
P: AsRef<Path>,
{
log::trace!("Connecting to: {:?}", path.as_ref());
let stream = UnixStream::connect(path, handle)?;
Self::with_stream(stream, handle)
}
/// Creates new IPC transport from existing `UnixStream` and `Handle`
#[cfg(unix)]
fn with_stream(stream: UnixStream, handle: &reactor::Handle) -> Result<Self> {
let (read, write) = stream.split();
let (write_sender, write_receiver) = mpsc::unbounded();
let pending: Arc<Mutex<BTreeMap<RequestId, Pending>>> = Default::default();
let subscriptions: Arc<Mutex<BTreeMap<SubscriptionId, Subscription>>> = Default::default();
let r = ReadStream {
read,
pending: pending.clone(),
subscriptions: subscriptions.clone(),
buffer: vec![],
current_pos: 0,
};
let w = WriteStream {
write,
incoming: write_receiver,
state: WriteState::WaitingForRequest,
};
handle.spawn(r);
handle.spawn(w);
Ok(Ipc {
id: Arc::new(atomic::AtomicUsize::new(1)),
write_sender,
pending,
subscriptions,
})
}
#[cfg(not(unix))]
pub fn with_event_loop<P>(_path: P, _handle: &reactor::Handle) -> Result<Self> {
return Err(Error::Transport("IPC transport is only supported on Unix".into()).into());
}
fn send_request<F, O>(&self, id: RequestId, request: rpc::Request, extract: F) -> IpcTask<F>
where
F: Fn(Vec<Result<rpc::Value>>) -> O,
{
let request = helpers::to_string(&request);
log::debug!("[{}] Calling: {}", id, request);
let (tx, rx) = futures::oneshot();
self.pending.lock().insert(id, tx);
let result = self
.write_sender
.unbounded_send(request.into_bytes())
.map_err(|_| Error::Io(io::ErrorKind::BrokenPipe.into()));
Response::new(id, result, rx, extract)
}
}
impl Transport for Ipc {
type Out = IpcTask<fn(Vec<Result<rpc::Value>>) -> Result<rpc::Value>>;
fn prepare(&self, method: &str, params: Vec<rpc::Value>) -> (RequestId, rpc::Call) {
let id = self.id.fetch_add(1, atomic::Ordering::AcqRel);
let request = helpers::build_request(id, method, params);
(id, request)
}
fn send(&self, id: RequestId, request: rpc::Call) -> Self::Out {
self.send_request(id, rpc::Request::Single(request), single_response)
}
}
fn single_response(response: Vec<Result<rpc::Value>>) -> Result<rpc::Value> {
match response.into_iter().next() {
Some(res) => res,
None => Err(Error::InvalidResponse("Expected single, got batch.".into())),
}
}
impl BatchTransport for Ipc {
type Batch = IpcTask<fn(Vec<Result<rpc::Value>>) -> Result<Vec<Result<rpc::Value>>>>;
fn | <T>(&self, requests: T) -> Self::Batch
where
T: IntoIterator<Item = (RequestId, rpc::Call)>,
{
let mut it = requests.into_iter();
let (id, first) = it.next().map(|x| (x.0, Some(x.1))).unwrap_or_else(|| (0, None));
let requests = first.into_iter().chain(it.map(|x| x.1)).collect();
self.send_request(id, rpc::Request::Batch(requests), Ok)
}
}
impl DuplexTransport for Ipc {
type NotificationStream = Box<dyn Stream<Item = rpc::Value, Error = Error> + Send + 'static>;
fn subscribe(&self, id: &SubscriptionId) -> Self::NotificationStream {
let (tx, rx) = mpsc::unbounded();
if self.subscriptions.lock().insert(id.clone(), tx).is_some() {
log::warn!("Replacing already-registered subscription with id {:?}", id)
}
Box::new(rx.map_err(|()| Error::Transport("No data available".into())))
}
fn unsubscribe(&self, id: &SubscriptionId) {
self.subscriptions.lock().remove(id);
}
}
enum WriteState {
WaitingForRequest,
Writing { buffer: Vec<u8>, current_pos: usize },
}
/// Writing part of the IPC transport
/// Awaits new requests using `mpsc::UnboundedReceiver` and writes them to the socket.
#[cfg(unix)]
struct WriteStream {
write: WriteHalf<UnixStream>,
incoming: mpsc::UnboundedReceiver<Vec<u8>>,
state: WriteState,
}
#[cfg(unix)]
impl Future for WriteStream {
type Item = ();
type Error = ();
fn poll(&mut self) -> futures::Poll<Self::Item, Self::Error> {
loop {
self.state = match self.state {
WriteState::WaitingForRequest => {
// Ask for more to write
let to_send = try_ready!(self.incoming.poll());
if let Some(to_send) = to_send {
log::trace!("Got new message to write: {:?}", String::from_utf8_lossy(&to_send));
WriteState::Writing {
buffer: to_send,
current_pos: 0,
}
} else {
return Ok(futures::Async::NotReady);
}
}
WriteState::Writing {
ref buffer,
ref mut current_pos,
} => {
// Write everything in the buffer
while *current_pos < buffer.len() {
let n = try_nb!(self.write.write(&buffer[*current_pos..]));
*current_pos += n;
if n == 0 {
log::warn!("IO Error: Zero write.");
return Err(()); // zero write?
}
}
WriteState::WaitingForRequest
}
};
}
}
}
/// Reading part of the IPC transport.
/// Reads data on the socket and tries to dispatch it to awaiting requests.
#[cfg(unix)]
struct ReadStream {
read: ReadHalf<UnixStream>,
pending: Arc<Mutex<BTreeMap<RequestId, Pending>>>,
subscriptions: Arc<Mutex<BTreeMap<SubscriptionId, Subscription>>>,
buffer: Vec<u8>,
current_pos: usize,
}
#[cfg(unix)]
impl Future for ReadStream {
type Item = ();
type Error = ();
fn poll(&mut self) -> futures::Poll<Self::Item, Self::Error> {
const DEFAULT_BUF_SIZE: usize = 4096;
let mut new_write_size = 128;
loop {
if self.current_pos == self.buffer.len() {
if new_write_size < DEFAULT_BUF_SIZE {
new_write_size *= 2;
}
self.buffer.resize(self.current_pos + new_write_size, 0);
}
let read = try_nb!(self.read.read(&mut self.buffer[self.current_pos..]));
if read == 0 {
return Ok(futures::Async::NotReady);
}
let mut min = self.current_pos;
self.current_pos += read;
while let Some((response, len)) = Self::extract_response(&self.buffer[0..self.current_pos], min) {
// Respond
self.respond(response);
// copy rest of buffer to the beginning
for i in len..self.current_pos {
self.buffer.swap(i, i - len);
}
// truncate the buffer
let new_len = self.current_pos - len;
self.buffer.truncate(new_len + new_write_size);
// Set new positions
self.current_pos = new_len;
min = 0;
}
}
}
}
enum Message {
Rpc(Vec<rpc::Output>),
Notification(rpc::Notification),
}
#[cfg(unix)]
impl ReadStream {
fn respond(&self, response: Message) {
match response {
Message::Rpc(outputs) => {
let id = match outputs.get(0) {
Some(&rpc::Output::Success(ref success)) => success.id.clone(),
Some(&rpc::Output::Failure(ref failure)) => failure.id.clone(),
None => rpc::Id::Num(0),
};
if let rpc::Id::Num(num) = id {
if let Some(request) = self.pending.lock().remove(&(num as usize)) {
log::trace!("Responding to (id: {:?}) with {:?}", num, outputs);
if let Err(err) = request.send(helpers::to_results_from_outputs(outputs)) {
log::warn!("Sending a response to a deallocated channel: {:?}", err);
}
} else {
log::warn!("Got response for unknown request (id: {:?})", num);
}
} else {
log::warn!("Got unsupported response (id: {:?})", id);
}
}
Message::Notification(notification) => {
if let rpc::Params::Map(params) = notification.params {
let id = params.get("subscription");
let result = params.get("result");
if let (Some(&rpc::Value::String(ref id)), Some(result)) = (id, result) {
let id: SubscriptionId = id.clone().into();
if let Some(stream) = self.subscriptions.lock().get(&id) {
if let Err(e) = stream.unbounded_send(result.clone()) {
log::error!("Error sending notification (id: {:?}): {:?}", id, e);
}
} else {
log::warn!("Got notification for unknown subscription (id: {:?})", id);
}
} else {
log::error!("Got unsupported notification (id: {:?})", id);
}
}
}
}
}
fn extract_response(buf: &[u8], min: usize) -> Option<(Message, usize)> {
for pos in (min..buf.len()).rev() {
// Look for end character
if buf[pos] == b']' || buf[pos] == b'}' {
// Try to deserialize
let pos = pos + 1;
match helpers::to_response_from_slice(&buf[0..pos]) {
Ok(rpc::Response::Single(output)) => return Some((Message::Rpc(vec![output]), pos)),
Ok(rpc::Response::Batch(outputs)) => return Some((Message::Rpc(outputs), pos)),
// just continue
_ => {}
}
match helpers::to_notification_from_slice(&buf[0..pos]) {
Ok(notification) => return Some((Message::Notification(notification), pos)),
_ => {}
}
}
}
None
}
}
#[cfg(all(test, unix))]
mod tests {
extern crate tokio_core;
extern crate tokio_uds;
use super::Ipc;
use crate::rpc;
use crate::Transport;
use futures::{self, Future};
use std::io::{self, Read, Write};
#[test]
fn should_send_a_request() {
// given
let mut eloop = tokio_core::reactor::Core::new().unwrap();
let handle = eloop.handle();
let (server, client) = tokio_uds::UnixStream::pair(&handle).unwrap();
let ipc = Ipc::with_stream(client, &handle).unwrap();
eloop.remote().spawn(move |_| {
struct Task {
server: tokio_uds::UnixStream,
}
impl Future for Task {
type Item = ();
type Error = ();
fn poll(&mut self) -> futures::Poll<(), ()> {
let mut data = [0; 2048];
// Read request
let read = try_nb!(self.server.read(&mut data));
let request = String::from_utf8(data[0..read].to_vec()).unwrap();
assert_eq!(
&request,
r#"{"jsonrpc":"2.0","method":"eth_accounts","params":["1"],"id":1}"#
);
// Write response
let response = r#"{"jsonrpc":"2.0","id":1,"result":"x"}"#;
self.server.write_all(response.as_bytes()).unwrap();
self.server.flush().unwrap();
Ok(futures::Async::Ready(()))
}
}
Task { server }
});
// when
let res = ipc.execute("eth_accounts", vec![rpc::Value::String("1".into())]);
// then
assert_eq!(eloop.run(res), Ok(rpc::Value::String("x".into())));
}
#[test]
fn should_handle_double_response() {
// given
let mut eloop = tokio_core::reactor::Core::new().unwrap();
let handle = eloop.handle();
let (server, client) = tokio_uds::UnixStream::pair(&handle).unwrap();
let ipc = Ipc::with_stream(client, &handle).unwrap();
eloop.remote().spawn(move |_| {
struct Task {
server: tokio_uds::UnixStream,
}
impl Future for Task {
type Item = ();
type Error = ();
fn poll(&mut self) -> futures::Poll<(), ()> {
let mut data = [0; 2048];
// Read request
let read = try_nb!(self.server.read(&mut data));
let request = String::from_utf8(data[0..read].to_vec()).unwrap();
assert_eq!(&request, r#"{"jsonrpc":"2.0","method":"eth_accounts","params":["1"],"id":1}{"jsonrpc":"2.0","method":"eth_accounts","params":["1"],"id":2}"#);
// Write response
let response = r#"{"jsonrpc":"2.0","id":1,"result":"x"}{"jsonrpc":"2.0","id":2,"result":"x"}"#;
self.server.write_all(response.as_bytes()).unwrap();
self.server.flush().unwrap();
Ok(futures::Async::Ready(()))
}
}
Task { server }
});
// when
let res1 = ipc.execute("eth_accounts", vec![rpc::Value::String("1".into())]);
let res2 = ipc.execute("eth_accounts", vec![rpc::Value::String("1".into())]);
// then
assert_eq!(
eloop.run(res1.join(res2)),
Ok((rpc::Value::String("x".into()), rpc::Value::String("x".into())))
);
}
}
| send_batch | identifier_name |
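`ReadStream::extract_response` above frames messages out of a delimiter-less byte stream: it scans for a closing `}`/`]` and tries to deserialize everything up to it, which is exactly what `should_handle_double_response` exercises with two JSON objects concatenated back to back. A stdlib-only sketch of the idea, with a naive depth counter standing in for the real JSON parser (`extract_frame` is an illustrative name; unlike `to_response_from_slice`, it ignores braces inside string literals, which a real parser must handle):

```rust
/// Return the first complete frame and its length, or None if more bytes
/// are needed. Scans forward counting brace/bracket depth; depth returning
/// to zero marks a message boundary.
fn extract_frame(buf: &[u8]) -> Option<(&[u8], usize)> {
    let mut depth = 0i32;
    for (pos, &b) in buf.iter().enumerate() {
        match b {
            b'{' | b'[' => depth += 1,
            b'}' | b']' => {
                depth -= 1;
                if depth == 0 {
                    return Some((&buf[..=pos], pos + 1));
                }
            }
            _ => {}
        }
    }
    None // incomplete message: keep buffering, as ReadStream::poll does
}

fn main() {
    let buf = br#"{"id":1,"result":"x"}{"id":2,"result":"x"}"#;
    let (frame, len) = extract_frame(buf).expect("first frame");
    assert_eq!(frame, &br#"{"id":1,"result":"x"}"#[..]);
    // The remainder starts at `len`, mirroring the buffer compaction in poll().
    let (frame2, _) = extract_frame(&buf[len..]).expect("second frame");
    assert_eq!(frame2, &br#"{"id":2,"result":"x"}"#[..]);
    assert!(extract_frame(br#"{"id":3"#).is_none());
    println!("framing ok");
}
```

The real implementation scans backward from the newest data and retries deserialization at each candidate boundary, but the invariant is the same: consume whole messages from the front of the buffer and keep the incomplete tail for the next read.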
ipc.rs | //! IPC Transport for *nix
#[cfg(unix)]
extern crate tokio_uds;
use std::collections::BTreeMap;
use std::io::{self, Read, Write};
use std::path::Path;
use std::sync::{atomic, Arc};
#[cfg(unix)]
use self::tokio_uds::UnixStream;
use crate::api::SubscriptionId;
use crate::helpers;
use crate::rpc;
use crate::transports::shared::{EventLoopHandle, Response};
use crate::transports::tokio_core::reactor;
use crate::transports::tokio_io::io::{ReadHalf, WriteHalf};
use crate::transports::tokio_io::AsyncRead;
use crate::transports::Result;
use crate::{BatchTransport, DuplexTransport, Error, RequestId, Transport};
use futures::sync::{mpsc, oneshot};
use futures::{self, Future, Stream};
use parking_lot::Mutex;
macro_rules! try_nb {
($e:expr) => {
match $e {
Ok(t) => t,
Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => return Ok(futures::Async::NotReady),
Err(e) => {
log::warn!("Unexpected IO error: {:?}", e);
return Err(());
}
}
};
}
type Pending = oneshot::Sender<Result<Vec<Result<rpc::Value>>>>;
type Subscription = mpsc::UnboundedSender<rpc::Value>;
/// A future representing a pending IPC request; resolves to a response.
pub type IpcTask<F> = Response<F, Vec<Result<rpc::Value>>>;
/// Unix Domain Sockets (IPC) transport
#[derive(Debug, Clone)]
pub struct Ipc {
id: Arc<atomic::AtomicUsize>,
pending: Arc<Mutex<BTreeMap<RequestId, Pending>>>,
subscriptions: Arc<Mutex<BTreeMap<SubscriptionId, Subscription>>>,
write_sender: mpsc::UnboundedSender<Vec<u8>>,
}
impl Ipc {
/// Create new IPC transport with separate event loop.
/// NOTE: Dropping event loop handle will stop the transport layer!
///
/// IPC is only available on Unix. On other systems, this always returns an error.
pub fn new<P>(path: P) -> Result<(EventLoopHandle, Self)>
where
P: AsRef<Path>,
{
let path = path.as_ref().to_owned();
EventLoopHandle::spawn(move |handle| Self::with_event_loop(&path, &handle).map_err(Into::into))
}
/// Create new IPC transport within existing Event Loop.
///
/// IPC is only available on Unix. On other systems, this always returns an error.
#[cfg(unix)]
pub fn with_event_loop<P>(path: P, handle: &reactor::Handle) -> Result<Self>
where
P: AsRef<Path>,
{
log::trace!("Connecting to: {:?}", path.as_ref());
let stream = UnixStream::connect(path, handle)?;
Self::with_stream(stream, handle)
}
/// Creates new IPC transport from existing `UnixStream` and `Handle`
#[cfg(unix)]
fn with_stream(stream: UnixStream, handle: &reactor::Handle) -> Result<Self> {
let (read, write) = stream.split();
let (write_sender, write_receiver) = mpsc::unbounded();
let pending: Arc<Mutex<BTreeMap<RequestId, Pending>>> = Default::default();
let subscriptions: Arc<Mutex<BTreeMap<SubscriptionId, Subscription>>> = Default::default();
let r = ReadStream {
read,
pending: pending.clone(),
subscriptions: subscriptions.clone(),
buffer: vec![],
current_pos: 0,
};
let w = WriteStream {
write,
incoming: write_receiver,
state: WriteState::WaitingForRequest,
};
handle.spawn(r);
handle.spawn(w);
Ok(Ipc {
id: Arc::new(atomic::AtomicUsize::new(1)),
write_sender,
pending,
subscriptions,
})
}
#[cfg(not(unix))]
pub fn with_event_loop<P>(_path: P, _handle: &reactor::Handle) -> Result<Self> {
return Err(Error::Transport("IPC transport is only supported on Unix".into()).into());
}
fn send_request<F, O>(&self, id: RequestId, request: rpc::Request, extract: F) -> IpcTask<F>
where
F: Fn(Vec<Result<rpc::Value>>) -> O,
{
let request = helpers::to_string(&request);
log::debug!("[{}] Calling: {}", id, request);
let (tx, rx) = futures::oneshot();
self.pending.lock().insert(id, tx);
let result = self
.write_sender
.unbounded_send(request.into_bytes())
.map_err(|_| Error::Io(io::ErrorKind::BrokenPipe.into()));
Response::new(id, result, rx, extract)
}
}
impl Transport for Ipc {
type Out = IpcTask<fn(Vec<Result<rpc::Value>>) -> Result<rpc::Value>>;
fn prepare(&self, method: &str, params: Vec<rpc::Value>) -> (RequestId, rpc::Call) {
let id = self.id.fetch_add(1, atomic::Ordering::AcqRel);
let request = helpers::build_request(id, method, params);
(id, request)
}
fn send(&self, id: RequestId, request: rpc::Call) -> Self::Out {
self.send_request(id, rpc::Request::Single(request), single_response)
}
}
fn single_response(response: Vec<Result<rpc::Value>>) -> Result<rpc::Value> {
match response.into_iter().next() {
Some(res) => res,
None => Err(Error::InvalidResponse("Expected single, got batch.".into())),
}
}
impl BatchTransport for Ipc {
type Batch = IpcTask<fn(Vec<Result<rpc::Value>>) -> Result<Vec<Result<rpc::Value>>>>;
fn send_batch<T>(&self, requests: T) -> Self::Batch
where
T: IntoIterator<Item = (RequestId, rpc::Call)>,
{
let mut it = requests.into_iter();
let (id, first) = it.next().map(|x| (x.0, Some(x.1))).unwrap_or_else(|| (0, None));
let requests = first.into_iter().chain(it.map(|x| x.1)).collect();
self.send_request(id, rpc::Request::Batch(requests), Ok)
}
}
impl DuplexTransport for Ipc {
type NotificationStream = Box<dyn Stream<Item = rpc::Value, Error = Error> + Send + 'static>;
fn subscribe(&self, id: &SubscriptionId) -> Self::NotificationStream {
let (tx, rx) = mpsc::unbounded();
if self.subscriptions.lock().insert(id.clone(), tx).is_some() {
log::warn!("Replacing already-registered subscription with id {:?}", id)
}
Box::new(rx.map_err(|()| Error::Transport("No data available".into())))
}
fn unsubscribe(&self, id: &SubscriptionId) {
self.subscriptions.lock().remove(id);
}
}
enum WriteState {
WaitingForRequest,
Writing { buffer: Vec<u8>, current_pos: usize },
}
/// Writing part of the IPC transport
/// Awaits new requests using `mpsc::UnboundedReceiver` and writes them to the socket.
#[cfg(unix)]
struct WriteStream {
write: WriteHalf<UnixStream>,
incoming: mpsc::UnboundedReceiver<Vec<u8>>,
state: WriteState,
}
#[cfg(unix)]
impl Future for WriteStream {
type Item = ();
type Error = ();
fn poll(&mut self) -> futures::Poll<Self::Item, Self::Error> {
loop {
self.state = match self.state {
WriteState::WaitingForRequest => {
// Ask for more to write
let to_send = try_ready!(self.incoming.poll());
if let Some(to_send) = to_send {
log::trace!("Got new message to write: {:?}", String::from_utf8_lossy(&to_send));
WriteState::Writing {
buffer: to_send,
current_pos: 0,
}
} else {
return Ok(futures::Async::NotReady);
}
}
WriteState::Writing {
ref buffer,
ref mut current_pos,
} => {
// Write everything in the buffer
while *current_pos < buffer.len() {
let n = try_nb!(self.write.write(&buffer[*current_pos..]));
*current_pos += n;
if n == 0 {
log::warn!("IO Error: Zero write.");
return Err(()); // zero write?
}
}
WriteState::WaitingForRequest
}
};
}
}
}
/// Reading part of the IPC transport.
/// Reads data on the socket and tries to dispatch it to awaiting requests.
#[cfg(unix)]
struct ReadStream {
read: ReadHalf<UnixStream>,
pending: Arc<Mutex<BTreeMap<RequestId, Pending>>>,
subscriptions: Arc<Mutex<BTreeMap<SubscriptionId, Subscription>>>,
buffer: Vec<u8>,
current_pos: usize,
}
#[cfg(unix)]
impl Future for ReadStream {
type Item = ();
type Error = ();
fn poll(&mut self) -> futures::Poll<Self::Item, Self::Error> {
const DEFAULT_BUF_SIZE: usize = 4096;
let mut new_write_size = 128;
loop {
if self.current_pos == self.buffer.len() {
if new_write_size < DEFAULT_BUF_SIZE {
new_write_size *= 2;
}
self.buffer.resize(self.current_pos + new_write_size, 0);
}
let read = try_nb!(self.read.read(&mut self.buffer[self.current_pos..]));
if read == 0 {
return Ok(futures::Async::NotReady);
}
let mut min = self.current_pos;
self.current_pos += read;
while let Some((response, len)) = Self::extract_response(&self.buffer[0..self.current_pos], min) {
// Respond
self.respond(response);
// copy rest of buffer to the beginning
for i in len..self.current_pos {
self.buffer.swap(i, i - len);
}
// truncate the buffer
let new_len = self.current_pos - len;
self.buffer.truncate(new_len + new_write_size);
// Set new positions
self.current_pos = new_len;
min = 0;
}
}
}
}
enum Message {
Rpc(Vec<rpc::Output>),
Notification(rpc::Notification),
}
#[cfg(unix)]
impl ReadStream {
fn respond(&self, response: Message) {
match response {
Message::Rpc(outputs) => {
let id = match outputs.get(0) {
Some(&rpc::Output::Success(ref success)) => success.id.clone(),
Some(&rpc::Output::Failure(ref failure)) => failure.id.clone(),
None => rpc::Id::Num(0),
};
if let rpc::Id::Num(num) = id | else {
log::warn!("Got unsupported response (id: {:?})", id);
}
}
Message::Notification(notification) => {
if let rpc::Params::Map(params) = notification.params {
let id = params.get("subscription");
let result = params.get("result");
if let (Some(&rpc::Value::String(ref id)), Some(result)) = (id, result) {
let id: SubscriptionId = id.clone().into();
if let Some(stream) = self.subscriptions.lock().get(&id) {
if let Err(e) = stream.unbounded_send(result.clone()) {
log::error!("Error sending notification (id: {:?}): {:?}", id, e);
}
} else {
log::warn!("Got notification for unknown subscription (id: {:?})", id);
}
} else {
log::error!("Got unsupported notification (id: {:?})", id);
}
}
}
}
}
fn extract_response(buf: &[u8], min: usize) -> Option<(Message, usize)> {
for pos in (min..buf.len()).rev() {
// Look for end character
if buf[pos] == b']' || buf[pos] == b'}' {
// Try to deserialize
let pos = pos + 1;
match helpers::to_response_from_slice(&buf[0..pos]) {
Ok(rpc::Response::Single(output)) => return Some((Message::Rpc(vec![output]), pos)),
Ok(rpc::Response::Batch(outputs)) => return Some((Message::Rpc(outputs), pos)),
// just continue
_ => {}
}
match helpers::to_notification_from_slice(&buf[0..pos]) {
Ok(notification) => return Some((Message::Notification(notification), pos)),
_ => {}
}
}
}
None
}
}
#[cfg(all(test, unix))]
mod tests {
extern crate tokio_core;
extern crate tokio_uds;
use super::Ipc;
use crate::rpc;
use crate::Transport;
use futures::{self, Future};
use std::io::{self, Read, Write};
#[test]
fn should_send_a_request() {
// given
let mut eloop = tokio_core::reactor::Core::new().unwrap();
let handle = eloop.handle();
let (server, client) = tokio_uds::UnixStream::pair(&handle).unwrap();
let ipc = Ipc::with_stream(client, &handle).unwrap();
eloop.remote().spawn(move |_| {
struct Task {
server: tokio_uds::UnixStream,
}
impl Future for Task {
type Item = ();
type Error = ();
fn poll(&mut self) -> futures::Poll<(), ()> {
let mut data = [0; 2048];
// Read request
let read = try_nb!(self.server.read(&mut data));
let request = String::from_utf8(data[0..read].to_vec()).unwrap();
assert_eq!(
&request,
r#"{"jsonrpc":"2.0","method":"eth_accounts","params":["1"],"id":1}"#
);
// Write response
let response = r#"{"jsonrpc":"2.0","id":1,"result":"x"}"#;
self.server.write_all(response.as_bytes()).unwrap();
self.server.flush().unwrap();
Ok(futures::Async::Ready(()))
}
}
Task { server }
});
// when
let res = ipc.execute("eth_accounts", vec![rpc::Value::String("1".into())]);
// then
assert_eq!(eloop.run(res), Ok(rpc::Value::String("x".into())));
}
#[test]
fn should_handle_double_response() {
// given
let mut eloop = tokio_core::reactor::Core::new().unwrap();
let handle = eloop.handle();
let (server, client) = tokio_uds::UnixStream::pair(&handle).unwrap();
let ipc = Ipc::with_stream(client, &handle).unwrap();
eloop.remote().spawn(move |_| {
struct Task {
server: tokio_uds::UnixStream,
}
impl Future for Task {
type Item = ();
type Error = ();
fn poll(&mut self) -> futures::Poll<(), ()> {
let mut data = [0; 2048];
// Read request
let read = try_nb!(self.server.read(&mut data));
let request = String::from_utf8(data[0..read].to_vec()).unwrap();
assert_eq!(&request, r#"{"jsonrpc":"2.0","method":"eth_accounts","params":["1"],"id":1}{"jsonrpc":"2.0","method":"eth_accounts","params":["1"],"id":2}"#);
// Write response
let response = r#"{"jsonrpc":"2.0","id":1,"result":"x"}{"jsonrpc":"2.0","id":2,"result":"x"}"#;
self.server.write_all(response.as_bytes()).unwrap();
self.server.flush().unwrap();
Ok(futures::Async::Ready(()))
}
}
Task { server }
});
// when
let res1 = ipc.execute("eth_accounts", vec![rpc::Value::String("1".into())]);
let res2 = ipc.execute("eth_accounts", vec![rpc::Value::String("1".into())]);
// then
assert_eq!(
eloop.run(res1.join(res2)),
Ok((rpc::Value::String("x".into()), rpc::Value::String("x".into())))
);
}
}
| {
if let Some(request) = self.pending.lock().remove(&(num as usize)) {
log::trace!("Responding to (id: {:?}) with {:?}", num, outputs);
if let Err(err) = request.send(helpers::to_results_from_outputs(outputs)) {
log::warn!("Sending a response to a deallocated channel: {:?}", err);
}
} else {
log::warn!("Got response for unknown request (id: {:?})", num);
}
} | conditional_block |
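`Ipc::prepare` above allocates request ids from a shared `AtomicUsize` via `fetch_add`, so every clone of the transport hands out unique, monotonically increasing ids without locking. A minimal stdlib sketch of the scheme (`IdSource`/`next_id` are illustrative names, not from the crate):

```rust
use std::collections::HashSet;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

#[derive(Clone)]
struct IdSource {
    next: Arc<AtomicUsize>,
}

impl IdSource {
    fn new() -> Self {
        // Start at 1, like `Ipc::with_stream` does.
        IdSource { next: Arc::new(AtomicUsize::new(1)) }
    }

    fn next_id(&self) -> usize {
        // fetch_add returns the previous value, so ids are unique even when
        // many clones race on the same counter.
        self.next.fetch_add(1, Ordering::AcqRel)
    }
}

fn main() {
    let ids = IdSource::new();
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let ids = ids.clone();
            thread::spawn(move || (0..100).map(|_| ids.next_id()).collect::<Vec<_>>())
        })
        .collect();
    let mut seen = HashSet::new();
    for h in handles {
        for id in h.join().unwrap() {
            // No id is handed out twice across threads.
            assert!(seen.insert(id));
        }
    }
    assert_eq!(seen.len(), 400);
    println!("allocated {} unique ids", seen.len());
}
```

Because the counter lives behind an `Arc`, cloning the transport clones only the handle; all clones draw from one id space, which is what lets `ReadStream` route a response back to the right pending sender by id alone.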
ipc.rs | //! IPC Transport for *nix
#[cfg(unix)]
extern crate tokio_uds;
use std::collections::BTreeMap;
use std::io::{self, Read, Write};
use std::path::Path;
use std::sync::{atomic, Arc};
#[cfg(unix)]
use self::tokio_uds::UnixStream;
use crate::api::SubscriptionId;
use crate::helpers;
use crate::rpc;
use crate::transports::shared::{EventLoopHandle, Response};
use crate::transports::tokio_core::reactor;
use crate::transports::tokio_io::io::{ReadHalf, WriteHalf};
use crate::transports::tokio_io::AsyncRead;
use crate::transports::Result;
use crate::{BatchTransport, DuplexTransport, Error, RequestId, Transport};
use futures::sync::{mpsc, oneshot};
use futures::{self, Future, Stream};
use parking_lot::Mutex;
macro_rules! try_nb {
($e:expr) => {
match $e {
Ok(t) => t,
Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => return Ok(futures::Async::NotReady),
Err(e) => {
log::warn!("Unexpected IO error: {:?}", e);
return Err(());
}
}
};
}
type Pending = oneshot::Sender<Result<Vec<Result<rpc::Value>>>>;
type Subscription = mpsc::UnboundedSender<rpc::Value>;
/// A future representing a pending IPC request; resolves to a response.
pub type IpcTask<F> = Response<F, Vec<Result<rpc::Value>>>;
/// Unix Domain Sockets (IPC) transport
#[derive(Debug, Clone)]
pub struct Ipc {
id: Arc<atomic::AtomicUsize>,
pending: Arc<Mutex<BTreeMap<RequestId, Pending>>>,
subscriptions: Arc<Mutex<BTreeMap<SubscriptionId, Subscription>>>,
write_sender: mpsc::UnboundedSender<Vec<u8>>,
}
impl Ipc {
/// Create new IPC transport with separate event loop.
/// NOTE: Dropping event loop handle will stop the transport layer!
///
/// IPC is only available on Unix. On other systems, this always returns an error.
pub fn new<P>(path: P) -> Result<(EventLoopHandle, Self)>
where
P: AsRef<Path>,
{
let path = path.as_ref().to_owned();
EventLoopHandle::spawn(move |handle| Self::with_event_loop(&path, &handle).map_err(Into::into))
}
/// Create new IPC transport within existing Event Loop.
///
/// IPC is only available on Unix. On other systems, this always returns an error.
#[cfg(unix)]
pub fn with_event_loop<P>(path: P, handle: &reactor::Handle) -> Result<Self>
where
P: AsRef<Path>,
{
log::trace!("Connecting to: {:?}", path.as_ref());
let stream = UnixStream::connect(path, handle)?;
Self::with_stream(stream, handle)
}
/// Creates new IPC transport from existing `UnixStream` and `Handle`
#[cfg(unix)]
fn with_stream(stream: UnixStream, handle: &reactor::Handle) -> Result<Self> {
let (read, write) = stream.split();
let (write_sender, write_receiver) = mpsc::unbounded();
let pending: Arc<Mutex<BTreeMap<RequestId, Pending>>> = Default::default();
let subscriptions: Arc<Mutex<BTreeMap<SubscriptionId, Subscription>>> = Default::default();
//! IPC Transport for *nix
#[cfg(unix)]
extern crate tokio_uds;
use std::collections::BTreeMap;
use std::io::{self, Read, Write};
use std::path::Path;
use std::sync::{atomic, Arc};
#[cfg(unix)]
use self::tokio_uds::UnixStream;
use crate::api::SubscriptionId;
use crate::helpers;
use crate::rpc;
use crate::transports::shared::{EventLoopHandle, Response};
use crate::transports::tokio_core::reactor;
use crate::transports::tokio_io::io::{ReadHalf, WriteHalf};
use crate::transports::tokio_io::AsyncRead;
use crate::transports::Result;
use crate::{BatchTransport, DuplexTransport, Error, RequestId, Transport};
use futures::sync::{mpsc, oneshot};
use futures::{self, Future, Stream};
use parking_lot::Mutex;
macro_rules! try_nb {
($e:expr) => {
match $e {
Ok(t) => t,
Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => return Ok(futures::Async::NotReady),
Err(e) => {
log::warn!("Unexpected IO error: {:?}", e);
return Err(());
}
}
};
}
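The `try_nb!` macro above folds `WouldBlock` into `NotReady` and treats any other I/O error as fatal. A self-contained sketch of that dispatch, with a plain enum standing in for `futures::Async` (all names here are illustrative, not from this crate):

```rust
use std::io;

// Stand-in for futures::Async plus the error channel; illustrative only.
#[derive(Debug, PartialEq)]
enum Poll<T> {
    Ready(T),
    NotReady,
    Failed,
}

// Mirrors try_nb!: Ok passes through, WouldBlock defers, anything else fails.
fn poll_io(result: io::Result<usize>) -> Poll<usize> {
    match result {
        Ok(n) => Poll::Ready(n),
        Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => Poll::NotReady,
        Err(_) => Poll::Failed,
    }
}

fn main() {
    assert_eq!(poll_io(Ok(5)), Poll::Ready(5));
    assert_eq!(poll_io(Err(io::ErrorKind::WouldBlock.into())), Poll::NotReady);
    assert_eq!(poll_io(Err(io::ErrorKind::BrokenPipe.into())), Poll::Failed);
    println!("ok");
}
```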
type Pending = oneshot::Sender<Result<Vec<Result<rpc::Value>>>>;
type Subscription = mpsc::UnboundedSender<rpc::Value>;
/// A future representing a pending IPC request; resolves to a response.
pub type IpcTask<F> = Response<F, Vec<Result<rpc::Value>>>;
/// Unix Domain Sockets (IPC) transport
#[derive(Debug, Clone)]
pub struct Ipc {
id: Arc<atomic::AtomicUsize>,
pending: Arc<Mutex<BTreeMap<RequestId, Pending>>>,
subscriptions: Arc<Mutex<BTreeMap<SubscriptionId, Subscription>>>,
write_sender: mpsc::UnboundedSender<Vec<u8>>,
}
impl Ipc {
/// Create new IPC transport with separate event loop.
/// NOTE: Dropping event loop handle will stop the transport layer!
///
/// IPC is only available on Unix. On other systems, this always returns an error.
pub fn new<P>(path: P) -> Result<(EventLoopHandle, Self)>
where
P: AsRef<Path>,
{
let path = path.as_ref().to_owned();
EventLoopHandle::spawn(move |handle| Self::with_event_loop(&path, &handle).map_err(Into::into))
}
/// Create new IPC transport within existing Event Loop.
///
/// IPC is only available on Unix. On other systems, this always returns an error.
#[cfg(unix)]
pub fn with_event_loop<P>(path: P, handle: &reactor::Handle) -> Result<Self>
where
P: AsRef<Path>,
{
log::trace!("Connecting to: {:?}", path.as_ref());
let stream = UnixStream::connect(path, handle)?;
Self::with_stream(stream, handle)
}
/// Creates new IPC transport from existing `UnixStream` and `Handle`
#[cfg(unix)]
fn with_stream(stream: UnixStream, handle: &reactor::Handle) -> Result<Self> {
let (read, write) = stream.split();
let (write_sender, write_receiver) = mpsc::unbounded();
let pending: Arc<Mutex<BTreeMap<RequestId, Pending>>> = Default::default();
let subscriptions: Arc<Mutex<BTreeMap<SubscriptionId, Subscription>>> = Default::default();
let r = ReadStream {
read,
pending: pending.clone(),
subscriptions: subscriptions.clone(),
buffer: vec![],
current_pos: 0,
};
let w = WriteStream {
write,
incoming: write_receiver,
state: WriteState::WaitingForRequest,
};
handle.spawn(r);
handle.spawn(w);
Ok(Ipc {
id: Arc::new(atomic::AtomicUsize::new(1)),
write_sender,
pending,
subscriptions,
})
}
#[cfg(not(unix))]
pub fn with_event_loop<P>(_path: P, _handle: &reactor::Handle) -> Result<Self> {
return Err(Error::Transport("IPC transport is only supported on Unix".into()).into());
}
fn send_request<F, O>(&self, id: RequestId, request: rpc::Request, extract: F) -> IpcTask<F>
where
F: Fn(Vec<Result<rpc::Value>>) -> O,
{
let request = helpers::to_string(&request);
log::debug!("[{}] Calling: {}", id, request);
let (tx, rx) = futures::oneshot();
self.pending.lock().insert(id, tx);
let result = self
.write_sender
.unbounded_send(request.into_bytes())
.map_err(|_| Error::Io(io::ErrorKind::BrokenPipe.into()));
Response::new(id, result, rx, extract)
}
}
impl Transport for Ipc {
type Out = IpcTask<fn(Vec<Result<rpc::Value>>) -> Result<rpc::Value>>;
fn prepare(&self, method: &str, params: Vec<rpc::Value>) -> (RequestId, rpc::Call) {
let id = self.id.fetch_add(1, atomic::Ordering::AcqRel);
let request = helpers::build_request(id, method, params);
(id, request)
}
fn send(&self, id: RequestId, request: rpc::Call) -> Self::Out {
self.send_request(id, rpc::Request::Single(request), single_response)
}
}
fn single_response(response: Vec<Result<rpc::Value>>) -> Result<rpc::Value> {
match response.into_iter().next() {
Some(res) => res,
None => Err(Error::InvalidResponse("Expected single, got batch.".into())),
}
}
impl BatchTransport for Ipc {
type Batch = IpcTask<fn(Vec<Result<rpc::Value>>) -> Result<Vec<Result<rpc::Value>>>>;
fn send_batch<T>(&self, requests: T) -> Self::Batch
where
T: IntoIterator<Item = (RequestId, rpc::Call)>,
{
let mut it = requests.into_iter();
let (id, first) = it.next().map(|x| (x.0, Some(x.1))).unwrap_or_else(|| (0, None));
let requests = first.into_iter().chain(it.map(|x| x.1)).collect();
self.send_request(id, rpc::Request::Batch(requests), Ok)
}
}
impl DuplexTransport for Ipc {
type NotificationStream = Box<dyn Stream<Item = rpc::Value, Error = Error> + Send + 'static>;
fn subscribe(&self, id: &SubscriptionId) -> Self::NotificationStream {
let (tx, rx) = mpsc::unbounded();
if self.subscriptions.lock().insert(id.clone(), tx).is_some() {
log::warn!("Replacing already-registered subscription with id {:?}", id)
}
Box::new(rx.map_err(|()| Error::Transport("No data available".into())))
}
fn unsubscribe(&self, id: &SubscriptionId) {
self.subscriptions.lock().remove(id);
}
}
enum WriteState {
WaitingForRequest,
Writing { buffer: Vec<u8>, current_pos: usize },
}
/// Writing part of the IPC transport
/// Awaits new requests using `mpsc::UnboundedReceiver` and writes them to the socket.
#[cfg(unix)]
struct WriteStream {
write: WriteHalf<UnixStream>,
incoming: mpsc::UnboundedReceiver<Vec<u8>>,
state: WriteState,
}
#[cfg(unix)]
impl Future for WriteStream {
type Item = ();
type Error = ();
fn poll(&mut self) -> futures::Poll<Self::Item, Self::Error> {
loop {
self.state = match self.state {
WriteState::WaitingForRequest => {
// Ask for more to write
let to_send = try_ready!(self.incoming.poll());
if let Some(to_send) = to_send {
log::trace!("Got new message to write: {:?}", String::from_utf8_lossy(&to_send));
WriteState::Writing {
buffer: to_send,
current_pos: 0,
}
} else {
return Ok(futures::Async::NotReady);
}
}
WriteState::Writing {
ref buffer,
ref mut current_pos,
} => {
// Write everything in the buffer
while *current_pos < buffer.len() {
let n = try_nb!(self.write.write(&buffer[*current_pos..]));
*current_pos += n;
if n == 0 {
log::warn!("IO Error: Zero write.");
return Err(()); // zero write?
}
}
WriteState::WaitingForRequest
}
};
}
}
}
/// Reading part of the IPC transport.
/// Reads data on the socket and tries to dispatch it to awaiting requests.
#[cfg(unix)]
struct ReadStream {
read: ReadHalf<UnixStream>,
pending: Arc<Mutex<BTreeMap<RequestId, Pending>>>,
subscriptions: Arc<Mutex<BTreeMap<SubscriptionId, Subscription>>>,
buffer: Vec<u8>,
current_pos: usize,
}
#[cfg(unix)]
impl Future for ReadStream {
type Item = ();
type Error = ();
fn poll(&mut self) -> futures::Poll<Self::Item, Self::Error> {
const DEFAULT_BUF_SIZE: usize = 4096;
let mut new_write_size = 128;
loop {
if self.current_pos == self.buffer.len() {
if new_write_size < DEFAULT_BUF_SIZE {
new_write_size *= 2;
}
self.buffer.resize(self.current_pos + new_write_size, 0);
}
let read = try_nb!(self.read.read(&mut self.buffer[self.current_pos..]));
if read == 0 {
return Ok(futures::Async::NotReady);
}
let mut min = self.current_pos;
self.current_pos += read;
while let Some((response, len)) = Self::extract_response(&self.buffer[0..self.current_pos], min) {
// Respond
self.respond(response);
// copy rest of buffer to the beginning
for i in len..self.current_pos {
self.buffer.swap(i, i - len);
}
// truncate the buffer
let new_len = self.current_pos - len;
self.buffer.truncate(new_len + new_write_size);
// Set new positions
self.current_pos = new_len;
min = 0;
}
}
}
}
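The poll loop above grows the read buffer on demand and, after each extracted response, shifts the unconsumed tail back to the start with the swap loop. A self-contained sketch of that compaction strategy, using newline-delimited frames in place of JSON responses (names are illustrative):

```rust
// Find one complete frame (newline-terminated here) and return it plus the
// number of bytes consumed, mirroring extract_response's (Message, len).
fn extract_frame(buf: &[u8]) -> Option<(Vec<u8>, usize)> {
    buf.iter()
        .position(|&b| b == b'\n')
        .map(|pos| (buf[..pos].to_vec(), pos + 1))
}

// Append a chunk, drain every complete frame, and compact the buffer.
fn drain_frames(buffer: &mut Vec<u8>, chunk: &[u8]) -> Vec<String> {
    buffer.extend_from_slice(chunk);
    let mut frames = Vec::new();
    let mut current_pos = buffer.len();
    while let Some((frame, len)) = extract_frame(&buffer[..current_pos]) {
        frames.push(String::from_utf8(frame).unwrap());
        // Copy the rest of the buffer to the beginning, as in the swap loop.
        for i in len..current_pos {
            buffer.swap(i, i - len);
        }
        current_pos -= len;
        buffer.truncate(current_pos);
    }
    frames
}

fn main() {
    let mut buffer = Vec::new();
    assert_eq!(drain_frames(&mut buffer, b"hel"), Vec::<String>::new());
    assert_eq!(drain_frames(&mut buffer, b"lo\nwor"), vec!["hello"]);
    assert_eq!(drain_frames(&mut buffer, b"ld\n"), vec!["world"]);
    println!("ok");
}
```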
enum Message {
Rpc(Vec<rpc::Output>),
Notification(rpc::Notification),
}
#[cfg(unix)]
impl ReadStream {
fn respond(&self, response: Message) {
match response {
Message::Rpc(outputs) => {
let id = match outputs.get(0) {
Some(&rpc::Output::Success(ref success)) => success.id.clone(),
Some(&rpc::Output::Failure(ref failure)) => failure.id.clone(),
None => rpc::Id::Num(0),
};
if let rpc::Id::Num(num) = id {
if let Some(request) = self.pending.lock().remove(&(num as usize)) {
log::trace!("Responding to (id: {:?}) with {:?}", num, outputs);
if let Err(err) = request.send(helpers::to_results_from_outputs(outputs)) {
log::warn!("Sending a response to deallocated channel: {:?}", err);
}
} else {
log::warn!("Got response for unknown request (id: {:?})", num);
}
} else {
log::warn!("Got unsupported response (id: {:?})", id);
}
}
Message::Notification(notification) => {
if let rpc::Params::Map(params) = notification.params {
let id = params.get("subscription");
let result = params.get("result");
if let (Some(&rpc::Value::String(ref id)), Some(result)) = (id, result) {
let id: SubscriptionId = id.clone().into();
if let Some(stream) = self.subscriptions.lock().get(&id) {
if let Err(e) = stream.unbounded_send(result.clone()) {
log::error!("Error sending notification (id: {:?}): {:?}", id, e);
}
} else {
log::warn!("Got notification for unknown subscription (id: {:?})", id);
}
} else {
log::error!("Got unsupported notification (id: {:?})", id);
}
}
}
}
}
fn extract_response(buf: &[u8], min: usize) -> Option<(Message, usize)> {
for pos in (min..buf.len()).rev() {
// Look for end character
if buf[pos] == b']' || buf[pos] == b'}' {
// Try to deserialize
let pos = pos + 1;
match helpers::to_response_from_slice(&buf[0..pos]) {
Ok(rpc::Response::Single(output)) => return Some((Message::Rpc(vec![output]), pos)),
Ok(rpc::Response::Batch(outputs)) => return Some((Message::Rpc(outputs), pos)),
// just continue
_ => {}
}
match helpers::to_notification_from_slice(&buf[0..pos]) {
Ok(notification) => return Some((Message::Notification(notification), pos)),
_ => {}
}
}
}
None
}
}
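`extract_response` deals with the fact that IPC responses arrive back-to-back with no length prefix: scan for a candidate terminator byte and try to parse the prefix, rejecting concatenations. A std-only sketch of that framing, with a brace-balance check standing in for the real serde parse (unlike serde, it ignores braces inside strings):

```rust
// True only if buf is exactly one balanced {...} object: depth returns to
// zero at the final byte and nowhere earlier, so "{..}{..}" is rejected,
// just as serde rejects concatenated JSON documents.
fn parses_as_single(buf: &[u8]) -> bool {
    let mut depth = 0i32;
    for (i, &b) in buf.iter().enumerate() {
        match b {
            b'{' => depth += 1,
            b'}' => depth -= 1,
            _ => {}
        }
        if depth <= 0 && i + 1 != buf.len() {
            return false;
        }
    }
    depth == 0 && !buf.is_empty()
}

// Scan backwards for an end byte whose prefix parses, like extract_response.
fn extract_response(buf: &[u8]) -> Option<(&[u8], usize)> {
    for pos in (0..buf.len()).rev() {
        if buf[pos] == b'}' {
            let end = pos + 1;
            if parses_as_single(&buf[..end]) {
                return Some((&buf[..end], end));
            }
        }
    }
    None
}

fn main() {
    let buf = br#"{"id":1}{"id":2}"#;
    let (msg, len) = extract_response(buf).unwrap();
    assert_eq!(msg, &br#"{"id":1}"#[..]);
    assert_eq!(len, 8);
    assert!(extract_response(br#"{"id":"#).is_none());
    println!("ok");
}
```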
#[cfg(all(test, unix))]
mod tests {
extern crate tokio_core;
extern crate tokio_uds;
use super::Ipc;
use crate::rpc;
use crate::Transport;
use futures::{self, Future};
use std::io::{self, Read, Write};
#[test]
fn should_send_a_request() {
// given
let mut eloop = tokio_core::reactor::Core::new().unwrap();
let handle = eloop.handle();
let (server, client) = tokio_uds::UnixStream::pair(&handle).unwrap();
let ipc = Ipc::with_stream(client, &handle).unwrap();
eloop.remote().spawn(move |_| {
struct Task {
server: tokio_uds::UnixStream,
}
impl Future for Task {
type Item = ();
type Error = ();
fn poll(&mut self) -> futures::Poll<(), ()> {
let mut data = [0; 2048];
// Read request
let read = try_nb!(self.server.read(&mut data));
let request = String::from_utf8(data[0..read].to_vec()).unwrap();
assert_eq!(
&request,
r#"{"jsonrpc":"2.0","method":"eth_accounts","params":["1"],"id":1}"#
);
// Write response
let response = r#"{"jsonrpc":"2.0","id":1,"result":"x"}"#;
self.server.write_all(response.as_bytes()).unwrap();
self.server.flush().unwrap();
Ok(futures::Async::Ready(()))
}
}
Task { server }
});
// when
let res = ipc.execute("eth_accounts", vec![rpc::Value::String("1".into())]);
// then
assert_eq!(eloop.run(res), Ok(rpc::Value::String("x".into())));
}
#[test]
fn should_handle_double_response() {
// given
let mut eloop = tokio_core::reactor::Core::new().unwrap();
let handle = eloop.handle();
let (server, client) = tokio_uds::UnixStream::pair(&handle).unwrap();
let ipc = Ipc::with_stream(client, &handle).unwrap();
eloop.remote().spawn(move |_| {
struct Task {
server: tokio_uds::UnixStream,
}
impl Future for Task {
type Item = ();
type Error = ();
fn poll(&mut self) -> futures::Poll<(), ()> {
let mut data = [0; 2048];
// Read request
let read = try_nb!(self.server.read(&mut data));
let request = String::from_utf8(data[0..read].to_vec()).unwrap();
assert_eq!(&request, r#"{"jsonrpc":"2.0","method":"eth_accounts","params":["1"],"id":1}{"jsonrpc":"2.0","method":"eth_accounts","params":["1"],"id":2}"#);
// Write response
let response = r#"{"jsonrpc":"2.0","id":1,"result":"x"}{"jsonrpc":"2.0","id":2,"result":"x"}"#;
self.server.write_all(response.as_bytes()).unwrap();
self.server.flush().unwrap();
Ok(futures::Async::Ready(()))
}
}
Task { server }
});
// when
let res1 = ipc.execute("eth_accounts", vec![rpc::Value::String("1".into())]);
let res2 = ipc.execute("eth_accounts", vec![rpc::Value::String("1".into())]);
// then
assert_eq!(
eloop.run(res1.join(res2)),
Ok((rpc::Value::String("x".into()), rpc::Value::String("x".into())))
);
}
}
unsafe { slice::from_raw_parts(list.values, list_len) }
.iter()
.zip(unsafe { slice::from_raw_parts(set.ds, ds_len) })
.map(|(val, source)| unsafe {
let v = match ::std::mem::transmute(source.type_) {
ValueType::Gauge => Value::Gauge(val.gauge),
ValueType::Counter => Value::Counter(val.counter),
ValueType::Derive => Value::Derive(val.derive),
ValueType::Absolute => Value::Absolute(val.absolute),
};
let name = receive_array(&source.name, plugin, "data source name")?;
Ok(ValueReport {
name,
value: v,
min: source.min,
max: source.max,
})
})
.collect();
assert!(list.time > 0);
assert!(list.interval > 0);
let plugin_instance =
receive_array(&list.plugin_instance, plugin, "plugin_instance").map(empty_to_none)?;
let type_ = receive_array(&list.type_, plugin, "type")?;
let type_instance =
receive_array(&list.type_instance, plugin, "type_instance").map(empty_to_none)?;
let host = receive_array(&list.host, plugin, "host")?;
let meta = from_meta_data(plugin, list.meta)?;
Ok(ValueList {
values: values?,
plugin_instance,
plugin,
type_,
type_instance,
host,
time: CdTime::from(list.time).into(),
interval: CdTime::from(list.interval).into(),
meta,
original_list: list,
original_set: set,
})
}
}
#[derive(Debug, PartialEq, Clone)]
struct SubmitValueList<'a> {
values: &'a [Value],
plugin_instance: Option<&'a str>,
plugin: &'a str,
type_: &'a str,
type_instance: Option<&'a str>,
host: Option<&'a str>,
time: Option<DateTime<Utc>>,
interval: Option<Duration>,
meta: HashMap<&'a str, MetaValue>,
}
/// Creates a value list to report values to collectd.
#[derive(Debug, PartialEq, Clone)]
pub struct ValueListBuilder<'a> {
list: SubmitValueList<'a>,
}
impl<'a> ValueListBuilder<'a> {
/// Primes a value list for submission. `plugin` will most likely be the name from the
/// `PluginManager` and `type_` is the datatype found in types.db
pub fn new<T: Into<&'a str>, U: Into<&'a str>>(plugin: T, type_: U) -> ValueListBuilder<'a> {
ValueListBuilder {
list: SubmitValueList {
values: &[],
plugin_instance: None,
plugin: plugin.into(),
type_: type_.into(),
type_instance: None,
host: None,
time: None,
interval: None,
meta: HashMap::new(),
},
}
}
/// A set of observed values that belong to the same plugin and type instance
pub fn values(mut self, values: &'a [Value]) -> ValueListBuilder<'a> {
self.list.values = values;
self
}
/// Distinguishes entities that yield metrics. Each core would be a different instance of the
/// same plugin, as each core reports "idle", "user", "system" metrics.
pub fn plugin_instance<T: Into<&'a str>>(mut self, plugin_instance: T) -> ValueListBuilder<'a> {
self.list.plugin_instance = Some(plugin_instance.into());
self
}
/// The type instance is used to separate values of identical type which nonetheless belong to
/// one another. For instance, even though "free", "used", and "total" all have types of
/// "Memory" they are different type instances.
pub fn type_instance<T: Into<&'a str>>(mut self, type_instance: T) -> ValueListBuilder<'a> {
self.list.type_instance = Some(type_instance.into());
self
}
/// Override the machine's hostname that the observed values will be attributed to. Best to
/// override when observing values from another machine
pub fn host<T: Into<&'a str>>(mut self, host: T) -> ValueListBuilder<'a> {
self.list.host = Some(host.into());
self
}
/// The timestamp at which the value was collected. Overrides the default time, which is when
/// collectd receives the values from `submit`. Use only if there is a significant delay in
/// metrics gathering or if submitting values from the past.
pub fn time(mut self, dt: DateTime<Utc>) -> ValueListBuilder<'a> {
self.list.time = Some(dt);
self
}
/// The interval in which new values are to be expected. This is typically handled at a global
/// or plugin level. Use at your own discretion.
pub fn interval(mut self, interval: Duration) -> ValueListBuilder<'a> {
self.list.interval = Some(interval);
self
}
/// Add a metadata entry.
///
/// Multiple entries can be added by calling this method. If the same key is used, only the last
/// entry is kept.
pub fn metadata(mut self, key: &'a str, value: MetaValue) -> ValueListBuilder<'a> {
self.list.meta.insert(key, value);
self
}
/// Submits the observed values to collectd and returns errors if encountered
pub fn submit(self) -> Result<(), SubmitError> {
let mut v: Vec<value_t> = self.list.values.iter().map(|&x| x.into()).collect();
let plugin_instance = self
.list
.plugin_instance
.map(|x| submit_array_res(x, "plugin_instance"))
.unwrap_or_else(|| Ok([0 as c_char; ARR_LENGTH]))?;
let type_instance = self
.list
.type_instance
.map(|x| submit_array_res(x, "type_instance"))
.unwrap_or_else(|| Ok([0 as c_char; ARR_LENGTH]))?;
let host = self
.list
.host
.map(|x| submit_array_res(x, "host"))
.transpose()?;
// If a custom host is not provided by the plugin, we default to the global
// hostname. In versions prior to collectd 5.7, it was required to propagate the
// global hostname (hostname_g) in the submission. In collectd 5.7, one could
// submit an empty array or hostname_g and they would equate to the same thing. In
// collectd 5.8, hostname_g had the type signature changed so it could no longer be
// submitted and would cause garbage to be read (and thus could have very much
// unintended side effects)
let host = host.unwrap_or([0 as c_char; ARR_LENGTH]);
let len = v.len() as u64;
let plugin = submit_array_res(self.list.plugin, "plugin")?;
let type_ = submit_array_res(self.list.type_, "type")?;
let meta = to_meta_data(&self.list.meta)?;
let list = value_list_t {
values: v.as_mut_ptr(),
values_len: len,
plugin_instance,
plugin,
type_,
type_instance,
host,
time: self.list.time.map(CdTime::from).unwrap_or(CdTime(0)).into(),
interval: self
.list
.interval
.map(CdTime::from)
.unwrap_or(CdTime(0))
.into(),
meta,
};
match unsafe { plugin_dispatch_values(&list) } {
0 => Ok(()),
i => Err(SubmitError::Dispatch(i)),
}
}
}
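`ValueListBuilder` is a consuming builder: each setter takes `self` by value and returns it, so calls chain, and `submit` consumes the builder so it cannot be reused after submission. A self-contained miniature of the same pattern (simplified stand-in types, not the real crate API):

```rust
// Simplified stand-ins for the crate's types; illustrative only.
#[derive(Debug, PartialEq)]
struct ValueList {
    plugin: String,
    type_: String,
    values: Vec<f64>,
    host: Option<String>,
}

struct ValueListBuilder {
    list: ValueList,
}

impl ValueListBuilder {
    fn new(plugin: &str, type_: &str) -> Self {
        ValueListBuilder {
            list: ValueList {
                plugin: plugin.to_string(),
                type_: type_.to_string(),
                values: Vec::new(),
                host: None,
            },
        }
    }

    // Each setter consumes and returns the builder, enabling chaining.
    fn values(mut self, values: &[f64]) -> Self {
        self.list.values = values.to_vec();
        self
    }

    fn host(mut self, host: &str) -> Self {
        self.list.host = Some(host.to_string());
        self
    }

    // Taking `self` by value here means a builder cannot be submitted twice.
    fn submit(self) -> ValueList {
        self.list
    }
}

fn main() {
    let vl = ValueListBuilder::new("my-plugin", "load")
        .values(&[15.0, 10.0, 12.0])
        .host("db-1")
        .submit();
    assert_eq!(vl.plugin, "my-plugin");
    assert_eq!(vl.values, vec![15.0, 10.0, 12.0]);
    assert_eq!(vl.host.as_deref(), Some("db-1"));
    println!("ok");
}
```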
fn to_meta_data<'a, 'b: 'a, T>(meta_hm: T) -> Result<*mut meta_data_t, SubmitError>
where
T: IntoIterator<Item = (&'a &'b str, &'a MetaValue)>,
{
let meta = unsafe { meta_data_create() };
let conversion_result = to_meta_data_with_meta(meta_hm, meta);
match conversion_result {
Ok(()) => Ok(meta),
Err(error) => {
unsafe {
meta_data_destroy(meta);
}
Err(error)
}
}
}
fn to_meta_data_with_meta<'a, 'b: 'a, T>(
meta_hm: T,
meta: *mut meta_data_t,
) -> Result<(), SubmitError>
where
T: IntoIterator<Item = (&'a &'b str, &'a MetaValue)>,
{
for (key, value) in meta_hm.into_iter() {
let c_key = CString::new(*key).map_err(|e| SubmitError::Field {
name: "meta key",
err: ArrayError::NullPresent(e.nul_position(), key.to_string()),
})?;
match value {
MetaValue::String(str) => {
let c_value = CString::new(str.as_str()).map_err(|e| SubmitError::Field {
name: "meta value",
err: ArrayError::NullPresent(e.nul_position(), str.to_string()),
})?;
unsafe {
meta_data_add_string(meta, c_key.as_ptr(), c_value.as_ptr());
}
}
MetaValue::SignedInt(i) => unsafe {
meta_data_add_signed_int(meta, c_key.as_ptr(), *i);
},
MetaValue::UnsignedInt(u) => unsafe {
meta_data_add_unsigned_int(meta, c_key.as_ptr(), *u);
},
MetaValue::Double(d) => unsafe {
meta_data_add_double(meta, c_key.as_ptr(), *d);
},
MetaValue::Boolean(b) => unsafe {
meta_data_add_boolean(meta, c_key.as_ptr(), *b);
},
}
}
Ok(())
}
fn from_meta_data(
plugin: &str,
meta: *mut meta_data_t,
) -> Result<HashMap<String, MetaValue>, ReceiveError> {
if meta.is_null() {
return Ok(HashMap::new());
}
let mut c_toc: *mut *mut c_char = ptr::null_mut();
let count_or_err = unsafe { meta_data_toc(meta, &mut c_toc as *mut *mut *mut c_char) };
if count_or_err < 0 {
return Err(ReceiveError::Metadata {
plugin: plugin.to_string(),
field: "toc".to_string(),
msg: "invalid parameters to meta_data_toc",
});
}
let count = count_or_err as usize;
if count == 0 {
return Ok(HashMap::new());
}
let toc = unsafe { slice::from_raw_parts(c_toc, count) };
let conversion_result = from_meta_data_with_toc(plugin, meta, toc);
for c_key_ptr in toc {
unsafe {
libc::free(*c_key_ptr as *mut c_void);
}
}
unsafe {
libc::free(c_toc as *mut c_void);
}
conversion_result
}
fn from_meta_data_with_toc(
plugin: &str,
meta: *mut meta_data_t,
toc: &[*mut c_char],
) -> Result<HashMap<String, MetaValue>, ReceiveError> {
let mut meta_hm = HashMap::with_capacity(toc.len());
for c_key_ptr in toc {
let (c_key, key, value_type) = unsafe {
let c_key: &CStr = CStr::from_ptr(*c_key_ptr);
let key: String = c_key
.to_str()
.map_err(|e| ReceiveError::Utf8 {
plugin: plugin.to_string(),
field: "metadata key",
err: e,
})?
.to_string();
let value_type: u32 = meta_data_type(meta, c_key.as_ptr()) as u32;
(c_key, key, value_type)
};
match value_type {
MD_TYPE_BOOLEAN => {
let mut c_value = false;
unsafe {
meta_data_get_boolean(meta, c_key.as_ptr(), &mut c_value as *mut bool);
}
meta_hm.insert(key, MetaValue::Boolean(c_value));
}
MD_TYPE_DOUBLE => {
let mut c_value = 0.0;
unsafe {
meta_data_get_double(meta, c_key.as_ptr(), &mut c_value as *mut f64);
}
meta_hm.insert(key, MetaValue::Double(c_value));
}
MD_TYPE_SIGNED_INT => {
let mut c_value = 0i64;
unsafe {
meta_data_get_signed_int(meta, c_key.as_ptr(), &mut c_value as *mut i64);
}
meta_hm.insert(key, MetaValue::SignedInt(c_value));
}
MD_TYPE_STRING => {
let value: String = unsafe {
let mut c_value: *mut c_char = ptr::null_mut();
meta_data_get_string(meta, c_key.as_ptr(), &mut c_value as *mut *mut c_char);
CStr::from_ptr(c_value)
.to_str()
.map_err(|e| ReceiveError::Utf8 {
plugin: plugin.to_string(),
field: "metadata value",
err: e,
})?
.to_string()
};
meta_hm.insert(key, MetaValue::String(value));
}
MD_TYPE_UNSIGNED_INT => {
let mut c_value = 0u64;
unsafe {
meta_data_get_unsigned_int(meta, c_key.as_ptr(), &mut c_value as *mut u64);
}
meta_hm.insert(key, MetaValue::UnsignedInt(c_value));
}
_ => {
return Err(ReceiveError::Metadata {
plugin: plugin.to_string(),
field: key,
msg: "unknown metadata type",
});
}
}
}
Ok(meta_hm)
}
fn submit_array_res(s: &str, name: &'static str) -> Result<[c_char; ARR_LENGTH], SubmitError> {
to_array_res(s).map_err(|e| SubmitError::Field { name, err: e })
}
/// Collectd stores textual data in fixed sized arrays, so this function will convert a string
/// slice into array compatible with collectd's text fields. Be aware that `ARR_LENGTH` is 64
/// before collectd 5.7
fn to_array_res(s: &str) -> Result<[c_char; ARR_LENGTH], ArrayError> {
// By checking if the length is greater than or *equal* to, we guarantee a trailing null
if s.len() >= ARR_LENGTH {
return Err(ArrayError::TooLong(s.len()));
}
let bytes = s.as_bytes();
// Using memchr to find a null and work around it is 10x faster than
// using a CString to get the bytes_with_nul and cut the time to submit
// values to collectd in half.
if let Some(ind) = memchr(0, bytes) {
return Err(ArrayError::NullPresent(ind, s.to_string()));
}
let mut arr = [0; ARR_LENGTH];
arr[0..bytes.len()].copy_from_slice(bytes);
Ok(unsafe { ::std::mem::transmute(arr) })
}
fn receive_array<'a>(
s: &'a [c_char; ARR_LENGTH],
plugin: &str,
field: &'static str,
) -> Result<&'a str, ReceiveError> {
from_array(s).map_err(|e| ReceiveError::Utf8 {
plugin: String::from(plugin),
field,
err: e,
})
}
/// Turns a fixed size character array into string slice, if possible
pub fn from_array(s: &[c_char; ARR_LENGTH]) -> Result<&str, Utf8Error> {
unsafe {
let a = s as *const [c_char; ARR_LENGTH] as *const c_char;
CStr::from_ptr(a).to_str()
}
}
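`to_array_res` and `from_array` convert between Rust strings and collectd's fixed-size, NUL-terminated character arrays. A std-only sketch of the same round-trip using `u8` instead of `c_char` (whose signedness is platform-dependent); the error types are simplified stand-ins:

```rust
const ARR_LENGTH: usize = 128;

// Reject strings that cannot fit with a trailing NUL, or that contain an
// interior NUL, then copy the bytes into a zeroed fixed-size array.
fn to_array(s: &str) -> Result<[u8; ARR_LENGTH], String> {
    if s.len() >= ARR_LENGTH {
        return Err(format!("too long: {} bytes", s.len()));
    }
    if let Some(ind) = s.bytes().position(|b| b == 0) {
        return Err(format!("interior NUL at byte {}", ind));
    }
    let mut arr = [0u8; ARR_LENGTH];
    arr[..s.len()].copy_from_slice(s.as_bytes());
    Ok(arr)
}

// Read back up to the first NUL (or the whole array if none is found).
fn from_array(arr: &[u8; ARR_LENGTH]) -> &str {
    let end = arr.iter().position(|&b| b == 0).unwrap_or(ARR_LENGTH);
    std::str::from_utf8(&arr[..end]).unwrap()
}

fn main() {
    let arr = to_array("cpu").unwrap();
    assert_eq!(from_array(&arr), "cpu");
    assert!(to_array("bad\0name").is_err());
    assert!(to_array(&"x".repeat(ARR_LENGTH)).is_err());
    println!("ok");
}
```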
/// Returns if the string is empty or not
pub fn empty_to_none(s: &str) -> Option<&str> {
if s.is_empty() {
None
} else {
Some(s)
}
}
pub fn length(len: u64) -> usize {
len as usize
}
pub fn get_default_interval() -> u64 {
0
}
#[cfg(test)]
mod tests {
use self::cdtime::nanos_to_collectd;
use super::*;
use crate::bindings::data_source_t;
use std::os::raw::c_char;
#[test]
fn test_empty_to_none() {
assert_eq!(None, empty_to_none(""));
let s = "hi";
assert_eq!(Some("hi"), empty_to_none(s));
}
#[test]
fn test_from_array() {
let mut name: [c_char; ARR_LENGTH] = [0; ARR_LENGTH];
name[0] = b'h' as c_char;
name[1] = b'i' as c_char;
assert_eq!(Ok("hi"), from_array(&name));
}
#[test]
fn test_to_array() {
let actual = to_array_res("Hi");
assert!(actual.is_ok());
assert_eq!(&actual.unwrap()[..2], &[b'H' as c_char, b'i' as c_char]);
}
#[test]
fn test_to_array_res_nul() {
let actual = to_array_res("hi\0");
assert!(actual.is_err());
}
#[test]
fn test_to_array_res_too_long() {
let actual = to_array_res(
"Hello check this out, I am a long string and there is no signs of stopping; well, maybe one day I will stop when I get too longggggggggggggggggggggggggggggggggggg",
);
assert!(actual.is_err());
}
#[test]
fn test_submit() {
let values = vec![Value::Gauge(15.0), Value::Gauge(10.0), Value::Gauge(12.0)];
let result = ValueListBuilder::new("my-plugin", "load")
.values(&values)
.submit();
assert_eq!(result.unwrap(), ());
}
#[test]
fn test_recv_value_list_conversion() {
let empty: [c_char; ARR_LENGTH] = [0; ARR_LENGTH];
let mut metric: [c_char; ARR_LENGTH] = [0; ARR_LENGTH];
metric[0] = b'h' as c_char;
metric[1] = b'o' as c_char;
// ===== mod.rs =====
impl Value {
    /// Returns true if the value is a NaN gauge; the integral variants
    /// (`Counter`, `Derive`, `Absolute`) can never be NaN.
    pub fn is_nan(self) -> bool {
        if let Value::Gauge(x) = self {
            x.is_nan()
        } else {
            false
        }
    }
}
impl fmt::Display for Value {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match *self {
Value::Counter(x) | Value::Absolute(x) => write!(f, "{}", x),
Value::Gauge(x) => write!(f, "{}", x),
Value::Derive(x) => write!(f, "{}", x),
}
}
}
impl From<Value> for value_t {
fn from(x: Value) -> Self {
match x {
Value::Counter(x) => value_t { counter: x },
Value::Gauge(x) => value_t { gauge: x },
Value::Derive(x) => value_t { derive: x },
Value::Absolute(x) => value_t { absolute: x },
}
}
}
/// Name and value of a reported metric
#[derive(Debug, PartialEq, Clone, Copy)]
pub struct ValueReport<'a> {
/// Name of the metric. If values has a length of 1, this is often just "value"
pub name: &'a str,
/// The value reported
pub value: Value,
/// Minimum value seen in an interval
pub min: f64,
/// Maximum value seen in an interval
pub max: f64,
}
/// Contains values and metadata that collectd has collected from plugins
#[derive(Debug, PartialEq, Clone)]
pub struct ValueList<'a> {
pub values: Vec<ValueReport<'a>>,
/// The plugin that submitted this value. This would be your `PluginManager` when submitting
/// values
pub plugin: &'a str,
/// Distinguishes entities that yield metrics. Each core would be a different instance of the
/// same plugin, as each core reports "idle", "user", "system" metrics.
pub plugin_instance: Option<&'a str>,
    /// This is the string found in types.db; it determines how many values are expected and how
    /// they should be interpreted
pub type_: &'a str,
/// The type instance is used to separate values of identical type which nonetheless belong to
/// one another. For instance, even though "free", "used", and "total" all have types of
/// "Memory" they are different type instances.
pub type_instance: Option<&'a str>,
    /// The hostname where the values were collected
pub host: &'a str,
/// The timestamp at which the value was collected
pub time: DateTime<Utc>,
/// The interval in which new values are to be expected
pub interval: Duration,
/// Metadata associated to the reported values
pub meta: HashMap<String, MetaValue>,
// Keep the original list and set around for calculating rates on demand
original_list: *const value_list_t,
original_set: *const data_set_t,
}
impl<'a> ValueList<'a> {
    /// Collectd does not automatically convert `Derive` values into a rate. This is why many
    /// write plugins have a `StoreRates` config option so that these rates are calculated on
    /// demand from collectd's internal cache. This function will return a vector that can supersede
    /// the `values` field, containing the rate of all non-gauge values. Values that are gauges
    /// remain unchanged, so one doesn't need to fall back to the `values` field, as this function
    /// returns everything prepped for submission.
pub fn rates(&self) -> Result<Cow<'_, Vec<ValueReport<'a>>>, CacheRateError> {
// As an optimization step, if we know all values are gauges there is no need to call out
// to uc_get_rate as no values will be changed
let all_gauges = self.values.iter().all(|x| match x.value {
Value::Gauge(_) => true,
_ => false,
});
if all_gauges {
return Ok(Cow::Borrowed(&self.values));
}
let ptr = unsafe { uc_get_rate(self.original_set, self.original_list) };
        if !ptr.is_null() {
let nv = unsafe { slice::from_raw_parts(ptr, self.values.len()) }
.iter()
.zip(self.values.iter())
.map(|(rate, report)| match report.value {
Value::Gauge(_) => *report,
_ => ValueReport {
value: Value::Gauge(*rate),
..*report
},
})
.collect();
Ok(Cow::Owned(nv))
} else {
Err(CacheRateError)
}
}
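`rates` itself delegates to collectd's internal cache via `uc_get_rate`; conceptually, the rate of a `Derive` value is the difference between two consecutive samples divided by the elapsed time. A minimal illustration of that arithmetic follows — this is a sketch of the idea only, not collectd's actual implementation, which also handles counter wrap-around and keeps a per-metric cache of the previous sample:

```rust
// Two (timestamp_seconds, derive_value) samples -> per-second rate.
fn derive_rate(prev: (f64, i64), curr: (f64, i64)) -> Option<f64> {
    let (t0, v0) = prev;
    let (t1, v1) = curr;
    if t1 <= t0 {
        return None; // no elapsed time: rate is undefined
    }
    Some((v1 - v0) as f64 / (t1 - t0))
}

fn main() {
    // A Derive metric grew from 5_000 to 5_500 over 10 seconds.
    let rate = derive_rate((100.0, 5_000), (110.0, 5_500)).unwrap();
    assert!((rate - 50.0).abs() < 1e-9);
    println!("rate = {} units/s", rate);
}
```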
pub fn from<'b>(
set: &'b data_set_t,
list: &'b value_list_t,
) -> Result<ValueList<'b>, ReceiveError> {
let plugin = receive_array(&list.plugin, "", "plugin name")?;
let ds_len = length(set.ds_num);
let list_len = length(list.values_len);
let values: Result<Vec<ValueReport<'_>>, ReceiveError> =
unsafe { slice::from_raw_parts(list.values, list_len) }
.iter()
.zip(unsafe { slice::from_raw_parts(set.ds, ds_len) })
.map(|(val, source)| unsafe {
let v = match ::std::mem::transmute(source.type_) {
ValueType::Gauge => Value::Gauge(val.gauge),
ValueType::Counter => Value::Counter(val.counter),
ValueType::Derive => Value::Derive(val.derive),
ValueType::Absolute => Value::Absolute(val.absolute),
};
let name = receive_array(&source.name, plugin, "data source name")?;
Ok(ValueReport {
name,
value: v,
min: source.min,
max: source.max,
})
})
.collect();
assert!(list.time > 0);
assert!(list.interval > 0);
let plugin_instance =
receive_array(&list.plugin_instance, plugin, "plugin_instance").map(empty_to_none)?;
let type_ = receive_array(&list.type_, plugin, "type")?;
let type_instance =
receive_array(&list.type_instance, plugin, "type_instance").map(empty_to_none)?;
let host = receive_array(&list.host, plugin, "host")?;
let meta = from_meta_data(plugin, list.meta)?;
Ok(ValueList {
values: values?,
plugin_instance,
plugin,
type_,
type_instance,
host,
time: CdTime::from(list.time).into(),
interval: CdTime::from(list.interval).into(),
meta,
original_list: list,
original_set: set,
})
}
}
#[derive(Debug, PartialEq, Clone)]
struct SubmitValueList<'a> {
values: &'a [Value],
plugin_instance: Option<&'a str>,
plugin: &'a str,
type_: &'a str,
type_instance: Option<&'a str>,
host: Option<&'a str>,
time: Option<DateTime<Utc>>,
interval: Option<Duration>,
meta: HashMap<&'a str, MetaValue>,
}
/// Creates a value list to report values to collectd.
#[derive(Debug, PartialEq, Clone)]
pub struct ValueListBuilder<'a> {
list: SubmitValueList<'a>,
}
impl<'a> ValueListBuilder<'a> {
/// Primes a value list for submission. `plugin` will most likely be the name from the
/// `PluginManager` and `type_` is the datatype found in types.db
pub fn new<T: Into<&'a str>, U: Into<&'a str>>(plugin: T, type_: U) -> ValueListBuilder<'a> {
ValueListBuilder {
list: SubmitValueList {
values: &[],
plugin_instance: None,
plugin: plugin.into(),
type_: type_.into(),
type_instance: None,
host: None,
time: None,
interval: None,
meta: HashMap::new(),
},
}
}
/// A set of observed values that belong to the same plugin and type instance
pub fn values(mut self, values: &'a [Value]) -> ValueListBuilder<'a> {
self.list.values = values;
self
}
/// Distinguishes entities that yield metrics. Each core would be a different instance of the
/// same plugin, as each core reports "idle", "user", "system" metrics.
pub fn plugin_instance<T: Into<&'a str>>(mut self, plugin_instance: T) -> ValueListBuilder<'a> {
self.list.plugin_instance = Some(plugin_instance.into());
self
}
/// The type instance is used to separate values of identical type which nonetheless belong to
/// one another. For instance, even though "free", "used", and "total" all have types of
/// "Memory" they are different type instances.
pub fn type_instance<T: Into<&'a str>>(mut self, type_instance: T) -> ValueListBuilder<'a> {
self.list.type_instance = Some(type_instance.into());
self
}
/// Override the machine's hostname that the observed values will be attributed to. Best to
/// override when observing values from another machine
pub fn host<T: Into<&'a str>>(mut self, host: T) -> ValueListBuilder<'a> {
self.list.host = Some(host.into());
self
}
    /// The timestamp at which the value was collected. Overrides the default time, which is when
    /// collectd receives the values from `submit`. Use only if there is a significant delay in
    /// metrics gathering or if submitting values from the past.
    pub fn time(mut self, dt: DateTime<Utc>) -> ValueListBuilder<'a> {
        self.list.time = Some(dt);
        self
    }
/// The interval in which new values are to be expected. This is typically handled at a global
/// or plugin level. Use at your own discretion.
pub fn interval(mut self, interval: Duration) -> ValueListBuilder<'a> {
self.list.interval = Some(interval);
self
}
/// Add a metadata entry.
///
/// Multiple entries can be added by calling this method. If the same key is used, only the last
/// entry is kept.
pub fn metadata(mut self, key: &'a str, value: MetaValue) -> ValueListBuilder<'a> {
self.list.meta.insert(key, value);
self
}
/// Submits the observed values to collectd and returns errors if encountered
pub fn submit(self) -> Result<(), SubmitError> {
let mut v: Vec<value_t> = self.list.values.iter().map(|&x| x.into()).collect();
let plugin_instance = self
.list
.plugin_instance
.map(|x| submit_array_res(x, "plugin_instance"))
.unwrap_or_else(|| Ok([0 as c_char; ARR_LENGTH]))?;
let type_instance = self
.list
.type_instance
.map(|x| submit_array_res(x, "type_instance"))
.unwrap_or_else(|| Ok([0 as c_char; ARR_LENGTH]))?;
let host = self
.list
.host
.map(|x| submit_array_res(x, "host"))
.transpose()?;
// If a custom host is not provided by the plugin, we default to the global
// hostname. In versions prior to collectd 5.7, it was required to propagate the
// global hostname (hostname_g) in the submission. In collectd 5.7, one could
// submit an empty array or hostname_g and they would equate to the same thing. In
        // collectd 5.8, hostname_g's type signature changed so it could no longer be
        // submitted; doing so would cause garbage to be read (and thus could have
        // unintended side effects).
let host = host.unwrap_or([0 as c_char; ARR_LENGTH]);
let len = v.len() as u64;
let plugin = submit_array_res(self.list.plugin, "plugin")?;
let type_ = submit_array_res(self.list.type_, "type")?;
let meta = to_meta_data(&self.list.meta)?;
let list = value_list_t {
values: v.as_mut_ptr(),
values_len: len,
plugin_instance,
plugin,
type_,
type_instance,
host,
time: self.list.time.map(CdTime::from).unwrap_or(CdTime(0)).into(),
interval: self
.list
.interval
.map(CdTime::from)
.unwrap_or(CdTime(0))
.into(),
meta,
};
match unsafe { plugin_dispatch_values(&list) } {
0 => Ok(()),
i => Err(SubmitError::Dispatch(i)),
}
}
}
fn to_meta_data<'a, 'b: 'a, T>(meta_hm: T) -> Result<*mut meta_data_t, SubmitError>
where
T: IntoIterator<Item = (&'a &'b str, &'a MetaValue)>,
{
let meta = unsafe { meta_data_create() };
let conversion_result = to_meta_data_with_meta(meta_hm, meta);
match conversion_result {
Ok(()) => Ok(meta),
Err(error) => {
unsafe {
meta_data_destroy(meta);
}
Err(error)
}
}
}
fn to_meta_data_with_meta<'a, 'b: 'a, T>(
meta_hm: T,
meta: *mut meta_data_t,
) -> Result<(), SubmitError>
where
T: IntoIterator<Item = (&'a &'b str, &'a MetaValue)>,
{
for (key, value) in meta_hm.into_iter() {
let c_key = CString::new(*key).map_err(|e| SubmitError::Field {
name: "meta key",
err: ArrayError::NullPresent(e.nul_position(), key.to_string()),
})?;
match value {
MetaValue::String(str) => {
let c_value = CString::new(str.as_str()).map_err(|e| SubmitError::Field {
name: "meta value",
err: ArrayError::NullPresent(e.nul_position(), str.to_string()),
})?;
unsafe {
meta_data_add_string(meta, c_key.as_ptr(), c_value.as_ptr());
}
}
MetaValue::SignedInt(i) => unsafe {
meta_data_add_signed_int(meta, c_key.as_ptr(), *i);
},
MetaValue::UnsignedInt(u) => unsafe {
meta_data_add_unsigned_int(meta, c_key.as_ptr(), *u);
},
MetaValue::Double(d) => unsafe {
meta_data_add_double(meta, c_key.as_ptr(), *d);
},
MetaValue::Boolean(b) => unsafe {
meta_data_add_boolean(meta, c_key.as_ptr(), *b);
},
}
}
Ok(())
}
fn from_meta_data(
plugin: &str,
meta: *mut meta_data_t,
) -> Result<HashMap<String, MetaValue>, ReceiveError> {
if meta.is_null() {
return Ok(HashMap::new());
}
let mut c_toc: *mut *mut c_char = ptr::null_mut();
let count_or_err = unsafe { meta_data_toc(meta, &mut c_toc as *mut *mut *mut c_char) };
if count_or_err < 0 {
return Err(ReceiveError::Metadata {
plugin: plugin.to_string(),
field: "toc".to_string(),
msg: "invalid parameters to meta_data_toc",
});
}
let count = count_or_err as usize;
if count == 0 {
return Ok(HashMap::new());
}
let toc = unsafe { slice::from_raw_parts(c_toc, count) };
let conversion_result = from_meta_data_with_toc(plugin, meta, toc);
for c_key_ptr in toc {
unsafe {
libc::free(*c_key_ptr as *mut c_void);
}
}
unsafe {
libc::free(c_toc as *mut c_void);
}
conversion_result
}
fn from_meta_data_with_toc(
plugin: &str,
meta: *mut meta_data_t,
toc: &[*mut c_char],
) -> Result<HashMap<String, MetaValue>, ReceiveError> {
let mut meta_hm = HashMap::with_capacity(toc.len());
for c_key_ptr in toc {
let (c_key, key, value_type) = unsafe {
let c_key: &CStr = CStr::from_ptr(*c_key_ptr);
let key: String = c_key
.to_str()
.map_err(|e| ReceiveError::Utf8 {
plugin: plugin.to_string(),
field: "metadata key",
err: e,
})?
.to_string();
let value_type: u32 = meta_data_type(meta, c_key.as_ptr()) as u32;
(c_key, key, value_type)
};
match value_type {
MD_TYPE_BOOLEAN => {
let mut c_value = false;
unsafe {
meta_data_get_boolean(meta, c_key.as_ptr(), &mut c_value as *mut bool);
}
meta_hm.insert(key, MetaValue::Boolean(c_value));
}
MD_TYPE_DOUBLE => {
let mut c_value = 0.0;
unsafe {
meta_data_get_double(meta, c_key.as_ptr(), &mut c_value as *mut f64);
}
meta_hm.insert(key, MetaValue::Double(c_value));
}
MD_TYPE_SIGNED_INT => {
let mut c_value = 0i64;
unsafe {
meta_data_get_signed_int(meta, c_key.as_ptr(), &mut c_value as *mut i64);
}
meta_hm.insert(key, MetaValue::SignedInt(c_value));
}
MD_TYPE_STRING => {
let value: String = unsafe {
let mut c_value: *mut c_char = ptr::null_mut();
meta_data_get_string(meta, c_key.as_ptr(), &mut c_value as *mut *mut c_char);
CStr::from_ptr(c_value)
.to_str()
.map_err(|e| ReceiveError::Utf8 {
plugin: plugin.to_string(),
field: "metadata value",
err: e,
})?
.to_string()
};
meta_hm.insert(key, MetaValue::String(value));
}
MD_TYPE_UNSIGNED_INT => {
let mut c_value = 0u64;
unsafe {
meta_data_get_unsigned_int(meta, c_key.as_ptr(), &mut c_value as *mut u64);
}
meta_hm.insert(key, MetaValue::UnsignedInt(c_value));
}
// ===== main.rs =====
main.rs | // bin2src - convert a binary file to source code in various languages
//
// Copyright (C) 2020 Alexandre Gomiero de Oliveira
//
// MIT License
//
// Copyright (c) 2020-2021 Alexandre Gomiero de Oliveira
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
#![warn(anonymous_parameters)]
#![warn(bare_trait_objects)]
#![warn(elided_lifetimes_in_paths)]
#![warn(single_use_lifetimes)]
#![warn(trivial_casts)]
#![warn(trivial_numeric_casts)]
#![warn(unused_import_braces)]
#![warn(unused_qualifications)]
mod lang;
use std::env;
const VERSION: &'static str = "0.0.57";
const AUTHOR: &'static str = "Alexandre Gomiero de Oliveira";
#[derive(Debug)]
pub enum Lang {
C,
Cshell,
Pascal,
Python,
Rust,
Undef,
}
fn main() {
let args: Vec<String> = match get_args_as_strings() {
Ok(e) => e,
Err(e) => {
println!("\n{}", e);
print_help();
return;
}
};
if args.len() < 4 {
print_help();
return;
};
let mut parse_result: generator::GeneratorInput = match parse(args) {
Ok(s) => s,
Err(e) => {
println!("\nArgument parser error: {}", e);
print_help();
return;
}
};
    if let Err(e) = parse_result.generate() {
        panic!("Generator error: {}", e);
    }
}
fn get_args_as_strings() -> Result<Vec<String>, &'static str> {
let mut ret: Vec<String> = Vec::new();
let args = env::args_os();
for cmd in args {
ret.push(match cmd.into_string() {
Ok(c) => c,
_ => return Err("Invalid unicode character found"),
});
}
Ok(ret)
}
fn parse(args: Vec<String>) -> Result<generator::GeneratorInput, String> {
let mut parse_args = args.iter().skip(1); // Skip program name
let mut inp_file: String = String::new();
let mut out_lang: Lang = Lang::Undef;
let mut out_dir: String = String::new();
let mut out_file: String = String::new();
let mut out_hex: bool = false;
while let Some(cmd) = parse_args.next() {
let cmd_name: &str;
if cmd.starts_with("--") {
cmd_name = &cmd[2..];
} else if cmd.starts_with("-") {
cmd_name = &cmd[1..];
} else {
inp_file = String::from(&cmd[..]);
break;
}
match cmd_name {
"l" | "out-language" => {
let value = match parse_args.next() {
Some(c) => c,
None => return Err(format!("Missing language")),
};
out_lang = match value.as_str() {
"c" => Lang::C,
"cshell" => Lang::Cshell,
"pascal" => Lang::Pascal,
"python" => Lang::Python,
"rust" => Lang::Rust,
l @ _ => return Err(format!("Language not implemented: {}", l)),
};
}
"d" | "out-dir" => {
let value = match parse_args.next() {
Some(c) => c,
None => return Err(format!("Invalid directory")),
};
out_dir = String::from(value);
}
"f" | "out-file" => {
let value = match parse_args.next() {
Some(c) => c,
None => return Err(format!("Invalid output file")),
};
out_file = String::from(value);
}
"h" | "hex" => {
out_hex = true;
}
c @ _ => return Err(format!("Unknown command: {}", c)),
}
}
if inp_file.is_empty() {
return Err(String::from("Invalid input file"));
};
if out_dir.is_empty() {
out_dir = String::from("./");
};
Ok(generator::GeneratorInput {
input_file: inp_file,
output_file: out_file,
output_dir: out_dir,
lang: out_lang,
hex: out_hex,
})
}
fn print_help() {
print!(
"
bin2src - version {}
Copyright (C) 2020 {}
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions; for details access LICENSE file at:
https://github.com/gomiero/bin2src/
bin2src - Converts a binary file to an array of bytes, defined in a source file of another language, so you can embed it into your program.
Usage: bin2src < -l LANG | --out-language LANG > [ OPTIONS ] < FILE >
LANG and FILE are required and FILE must be the last argument.
Options:
-l, --out-language LANG specify the language, where LANG={{c|cshell|pascal|python|rust}}
-d, --out-dir PATH specify where to output source(s) file(s);
if not specified, generate in current directory
-f, --out-file OUTFILE specify the output file(s) name (* without extension *);
if not specified, output file(s) will have the same name
of input file (without extra dots).
-h, --hex output bytes in hexadecimal (for C shellcode this flag has
different behaviors. See the GitHub site for more information)
Currently supported languages:
- C
- C for shellcode
- Pascal
- Python
- Rust
", VERSION, AUTHOR);
}
mod generator {
use super::lang::c;
use super::lang::cshell;
use super::lang::pascal;
use super::lang::python;
use super::lang::rust;
use super::Lang;
use std::error::Error;
use std::fs;
use std::io::{BufReader, BufWriter, ErrorKind, Read, Write};
use std::path::PathBuf;
#[inline]
pub fn camel(s: &String) -> String {
let mut ss = s.clone().to_lowercase();
let mut first = ss.remove(0).to_uppercase().to_string();
first.push_str(ss.as_str());
first
}
#[derive(Debug)]
pub struct GeneratorOutput {
pub ifile_name: String,
pub ifile_path: PathBuf,
pub ifile_size: u64,
pub odir_path: PathBuf,
pub ofile_name: String,
pub hex: bool,
}
impl GeneratorOutput {
pub fn open_inp_file(&mut self) -> Result<BufReader<fs::File>, &'static str> {
let inp_file: BufReader<fs::File> =
match fs::OpenOptions::new().read(true).open(&self.ifile_path) {
Ok(f) => BufReader::with_capacity(32768, f),
Err(e) => {
return match e.kind() {
ErrorKind::PermissionDenied => Err("Permission"),
ErrorKind::NotFound => Err("Not found"),
_ => Err("Can't open file"),
}
}
};
Ok(inp_file)
}
pub fn write_data(
&mut self,
f: &mut BufWriter<fs::File>,
numbytes: u64,
write_if: fn(bool, bool, &mut BufWriter<fs::File>, u8) -> Result<(), Box<dyn Error>>,
sep: String,
) -> Result<(), &'static str> {
let mut ifile = self.open_inp_file()?;
let mut doblock = || -> Result<(), Box<dyn Error>> {
let mut buf = [0u8; 4096];
let mut count = 0;
'outter: loop {
let sz = ifile.read(&mut buf[..])?;
if sz == 0 {
f.flush()?;
break;
} else if sz <= 4096 {
for b in 0..sz {
if count == self.ifile_size - 1 {
write_if(self.hex, false, f, buf[b])?;
break 'outter;
};
write_if(self.hex, true, f, buf[b])?;
count += 1;
if count % numbytes == 0 {
write!(f, "{}", sep)?;
};
}
};
}
Ok(())
};
if let Err(_err) = doblock() {
Err("Error when writing data block")
} else {
Ok(())
}
}
pub fn set_output_fname(&mut self) {
if self.ofile_name.is_empty() {
self.ofile_name = self
.ifile_path
.file_stem()
.unwrap()
.to_str()
.unwrap()
.to_string();
if let Some(pos) = self.ofile_name.find(".") {
self.ofile_name.truncate(pos);
}
};
}
}
#[derive(Debug)]
pub struct GeneratorInput {
pub input_file: String,
pub output_file: String,
pub output_dir: String,
pub lang: Lang,
pub hex: bool,
}
impl GeneratorInput {
fn input_file_test(&mut self) -> Result<(String, PathBuf, u64), &'static str> {
let ifpath: PathBuf = PathBuf::from(&self.input_file);
if !(ifpath.exists() && ifpath.is_file()) {
Err("Input file does not exist or is not a file")
} else {
let ifname: String = String::from(ifpath.file_name().unwrap().to_str().unwrap());
let ifsize = ifpath.metadata().unwrap().len();
Ok((ifname, ifpath, ifsize))
}
}
fn output_dir_test(&mut self) -> Result<PathBuf, &'static str> {
let ofpath: PathBuf = PathBuf::from(&self.output_dir);
// Test for output dir
if !(ofpath.exists() && ofpath.is_dir()) {
Err("Output folder does not exist or is inaccessible")
} else |
}
pub fn generate(&mut self) -> Result<(), &'static str> {
// Test for input file
let (ifname, ifpath, ifsize) = self.input_file_test()?;
// Test for output dir
let ofpath: PathBuf = self.output_dir_test()?;
let go = GeneratorOutput {
ifile_name: ifname,
ifile_path: ifpath,
ifile_size: ifsize,
odir_path: ofpath,
ofile_name: String::from(&self.output_file),
hex: self.hex,
};
match match &self.lang {
Lang::C => c::C::new(go).generate_files(),
Lang::Cshell => cshell::Cshell::new(go).generate_files(),
Lang::Pascal => pascal::Pascal::new(go).generate_files(),
Lang::Python => python::Python::new(go).generate_files(),
Lang::Rust => rust::Rust::new(go).generate_files(),
_ => Err("Language not implemented yet"),
} {
Ok(_) => {
println!("Source(s) created.");
Ok(())
}
Err(e) => Err(e),
}
}
}
}
| {
Ok(ofpath)
} | conditional_block |
main.rs | // bin2src - convert a binary file to source code in various languages
//
// Copyright (C) 2020 Alexandre Gomiero de Oliveira
//
// MIT License
//
// Copyright (c) 2020-2021 Alexandre Gomiero de Oliveira
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
#![warn(anonymous_parameters)]
#![warn(bare_trait_objects)]
#![warn(elided_lifetimes_in_paths)]
#![warn(single_use_lifetimes)]
#![warn(trivial_casts)]
#![warn(trivial_numeric_casts)]
#![warn(unused_import_braces)]
#![warn(unused_qualifications)]
mod lang;
use std::env;
const VERSION: &'static str = "0.0.57";
const AUTHOR: &'static str = "Alexandre Gomiero de Oliveira";
#[derive(Debug)]
pub enum Lang {
C,
Cshell,
Pascal,
Python,
Rust,
Undef,
}
fn main() {
let args: Vec<String> = match get_args_as_strings() {
Ok(e) => e,
Err(e) => {
println!("\n{}", e);
print_help();
return;
}
};
if args.len() < 4 {
print_help();
return;
};
let mut parse_result: generator::GeneratorInput = match parse(args) {
Ok(s) => s,
Err(e) => {
println!("\nArgument parser error: {}", e);
print_help();
return;
}
};
match parse_result.generate() {
Err(e) => panic!("Generator error: {}", e),
_ => (),
};
}
fn get_args_as_strings() -> Result<Vec<String>, &'static str> {
let mut ret: Vec<String> = Vec::new();
let args = env::args_os();
for cmd in args {
ret.push(match cmd.into_string() {
Ok(c) => c,
_ => return Err("Invalid unicode character found"),
});
}
Ok(ret)
}
fn parse(args: Vec<String>) -> Result<generator::GeneratorInput, String> {
let mut parse_args = args.iter().skip(1); // Skip program name
let mut inp_file: String = String::new();
let mut out_lang: Lang = Lang::Undef;
let mut out_dir: String = String::new();
let mut out_file: String = String::new();
let mut out_hex: bool = false;
while let Some(cmd) = parse_args.next() {
let cmd_name: &str;
if cmd.starts_with("--") {
cmd_name = &cmd[2..];
} else if cmd.starts_with("-") {
cmd_name = &cmd[1..];
} else {
inp_file = String::from(&cmd[..]);
break;
}
match cmd_name {
"l" | "out-language" => {
let value = match parse_args.next() {
Some(c) => c,
None => return Err(format!("Missing language")),
};
out_lang = match value.as_str() {
"c" => Lang::C,
"cshell" => Lang::Cshell,
"pascal" => Lang::Pascal,
"python" => Lang::Python,
"rust" => Lang::Rust,
l @ _ => return Err(format!("Language not implemented: {}", l)),
};
}
"d" | "out-dir" => {
let value = match parse_args.next() {
Some(c) => c,
None => return Err(format!("Invalid directory")),
};
out_dir = String::from(value);
}
"f" | "out-file" => {
let value = match parse_args.next() {
Some(c) => c,
None => return Err(format!("Invalid output file")),
};
out_file = String::from(value);
}
"h" | "hex" => {
out_hex = true;
}
c @ _ => return Err(format!("Unknown command: {}", c)),
}
}
if inp_file.is_empty() {
return Err(String::from("Invalid input file"));
};
if out_dir.is_empty() {
out_dir = String::from("./");
};
Ok(generator::GeneratorInput {
input_file: inp_file,
output_file: out_file,
output_dir: out_dir,
lang: out_lang,
hex: out_hex,
})
}
fn print_help() {
print!(
"
bin2src - version {}
Copyright (C) 2020 {}
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions; for details access LICENSE file at:
https://github.com/gomiero/bin2src/
bin2src - Converts a binary file to an array of bytes, defined in a source file of another language, so you can embed it into your program.
Usage: bin2src < -l LANG | --out-language LANG > [ OPTIONS ] < FILE >
LANG and FILE are required and FILE must be the last argument.
Options:
-l, --out-language LANG specify the language, where LANG={{c|cshell|pascal|python|rust}}
-d, --out-dir PATH specify where to output source(s) file(s);
if not specified, generate in current directory
-f, --out-file OUTFILE specify the output file(s) name (* without extension *);
if not specified, output file(s) will have the same name
of input file (without extra dots).
-h, --hex output bytes in hexadecimal (for C shellcode this flag has
different behaviors. See the GitHub site for more information)
Currently supported languages:
- C
- C for shellcode
- Pascal
- Python
- Rust
", VERSION, AUTHOR);
}
mod generator {
use super::lang::c;
use super::lang::cshell;
use super::lang::pascal;
use super::lang::python;
use super::lang::rust;
use super::Lang;
use std::error::Error;
use std::fs;
use std::io::{BufReader, BufWriter, ErrorKind, Read, Write};
use std::path::PathBuf;
#[inline]
pub fn camel(s: &String) -> String {
let mut ss = s.clone().to_lowercase();
let mut first = ss.remove(0).to_uppercase().to_string();
first.push_str(ss.as_str());
first
}
#[derive(Debug)]
pub struct GeneratorOutput {
pub ifile_name: String,
pub ifile_path: PathBuf,
pub ifile_size: u64,
pub odir_path: PathBuf,
pub ofile_name: String,
pub hex: bool,
}
impl GeneratorOutput {
pub fn open_inp_file(&mut self) -> Result<BufReader<fs::File>, &'static str> {
let inp_file: BufReader<fs::File> =
match fs::OpenOptions::new().read(true).open(&self.ifile_path) {
Ok(f) => BufReader::with_capacity(32768, f),
Err(e) => {
return match e.kind() {
ErrorKind::PermissionDenied => Err("Permission"),
ErrorKind::NotFound => Err("Not found"),
_ => Err("Can't open file"),
}
}
};
Ok(inp_file)
}
pub fn write_data(
&mut self,
f: &mut BufWriter<fs::File>,
numbytes: u64,
write_if: fn(bool, bool, &mut BufWriter<fs::File>, u8) -> Result<(), Box<dyn Error>>,
sep: String,
) -> Result<(), &'static str> {
let mut ifile = self.open_inp_file()?;
let mut doblock = || -> Result<(), Box<dyn Error>> {
let mut buf = [0u8; 4096];
let mut count = 0;
'outter: loop {
let sz = ifile.read(&mut buf[..])?;
if sz == 0 {
f.flush()?;
break;
} else if sz <= 4096 {
for b in 0..sz {
if count == self.ifile_size - 1 {
write_if(self.hex, false, f, buf[b])?;
break 'outter;
};
write_if(self.hex, true, f, buf[b])?;
count += 1;
if count % numbytes == 0 {
write!(f, "{}", sep)?;
};
}
};
}
Ok(())
};
if let Err(_err) = doblock() {
Err("Error when writing data block")
} else {
Ok(())
}
}
pub fn set_output_fname(&mut self) {
if self.ofile_name.is_empty() {
self.ofile_name = self
.ifile_path
.file_stem()
.unwrap()
.to_str()
.unwrap()
.to_string();
if let Some(pos) = self.ofile_name.find(".") {
self.ofile_name.truncate(pos);
}
};
}
}
#[derive(Debug)]
pub struct | {
pub input_file: String,
pub output_file: String,
pub output_dir: String,
pub lang: Lang,
pub hex: bool,
}
impl GeneratorInput {
fn input_file_test(&mut self) -> Result<(String, PathBuf, u64), &'static str> {
let ifpath: PathBuf = PathBuf::from(&self.input_file);
if !(ifpath.exists() && ifpath.is_file()) {
Err("Input file does not exist or is not a file")
} else {
let ifname: String = String::from(ifpath.file_name().unwrap().to_str().unwrap());
let ifsize = ifpath.metadata().unwrap().len();
Ok((ifname, ifpath, ifsize))
}
}
fn output_dir_test(&mut self) -> Result<PathBuf, &'static str> {
let ofpath: PathBuf = PathBuf::from(&self.output_dir);
// Test for output dir
if !(ofpath.exists() && ofpath.is_dir()) {
Err("Output folder does not exist or is inaccessible")
} else {
Ok(ofpath)
}
}
pub fn generate(&mut self) -> Result<(), &'static str> {
// Test for input file
let (ifname, ifpath, ifsize) = self.input_file_test()?;
// Test for output dir
let ofpath: PathBuf = self.output_dir_test()?;
let go = GeneratorOutput {
ifile_name: ifname,
ifile_path: ifpath,
ifile_size: ifsize,
odir_path: ofpath,
ofile_name: String::from(&self.output_file),
hex: self.hex,
};
match match &self.lang {
Lang::C => c::C::new(go).generate_files(),
Lang::Cshell => cshell::Cshell::new(go).generate_files(),
Lang::Pascal => pascal::Pascal::new(go).generate_files(),
Lang::Python => python::Python::new(go).generate_files(),
Lang::Rust => rust::Rust::new(go).generate_files(),
_ => Err("Language not implemented yet"),
} {
Ok(_) => {
println!("Source(s) created.");
Ok(())
}
Err(e) => Err(e),
}
}
}
}
| GeneratorInput | identifier_name |
main.rs | // bin2src - convert a binary file to source code in various languages
//
// Copyright (C) 2020 Alexandre Gomiero de Oliveira
//
// MIT License
//
// Copyright (c) 2020-2021 Alexandre Gomiero de Oliveira
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
#![warn(anonymous_parameters)]
#![warn(bare_trait_objects)]
#![warn(elided_lifetimes_in_paths)]
#![warn(single_use_lifetimes)]
#![warn(trivial_casts)]
#![warn(trivial_numeric_casts)]
#![warn(unused_import_braces)]
#![warn(unused_qualifications)]
mod lang;
use std::env;
const VERSION: &'static str = "0.0.57";
const AUTHOR: &'static str = "Alexandre Gomiero de Oliveira";
#[derive(Debug)]
pub enum Lang {
C,
Cshell,
Pascal,
Python,
Rust,
Undef,
}
fn main() {
let args: Vec<String> = match get_args_as_strings() {
Ok(e) => e,
Err(e) => {
println!("\n{}", e);
print_help();
return;
}
};
if args.len() < 4 {
print_help();
return;
};
let mut parse_result: generator::GeneratorInput = match parse(args) {
Ok(s) => s,
Err(e) => {
println!("\nArgument parser error: {}", e);
print_help();
return;
}
}; | _ => "",
};
}
fn get_args_as_strings() -> Result<Vec<String>, &'static str> {
let mut ret: Vec<String> = Vec::new();
let args = env::args_os();
for cmd in args {
ret.push(match cmd.into_string() {
Ok(c) => c,
_ => return Err("Invalid unicode character found"),
});
}
Ok(ret)
}
fn parse(args: Vec<String>) -> Result<generator::GeneratorInput, String> {
let mut parse_args = args.iter().skip(1); // Skip program name
let mut inp_file: String = String::new();
let mut out_lang: Lang = Lang::Undef;
let mut out_dir: String = String::new();
let mut out_file: String = String::new();
let mut out_hex: bool = false;
while let Some(cmd) = parse_args.next() {
let cmd_name: &str;
if cmd.starts_with("--") {
cmd_name = &cmd[2..];
} else if cmd.starts_with("-") {
cmd_name = &cmd[1..];
} else {
inp_file = String::from(&cmd[..]);
break;
}
match cmd_name {
"l" | "out-language" => {
let value = match parse_args.next() {
Some(c) => c,
None => return Err(format!("Missing language")),
};
out_lang = match value.as_str() {
"c" => Lang::C,
"cshell" => Lang::Cshell,
"pascal" => Lang::Pascal,
"python" => Lang::Python,
"rust" => Lang::Rust,
l @ _ => return Err(format!("Language not implemented: {}", l)),
};
}
"d" | "out-dir" => {
let value = match parse_args.next() {
Some(c) => c,
None => return Err(format!("Invalid directory")),
};
out_dir = String::from(value);
}
"f" | "out-file" => {
let value = match parse_args.next() {
Some(c) => c,
None => return Err(format!("Invalid output file")),
};
out_file = String::from(value);
}
"h" | "hex" => {
out_hex = true;
}
c @ _ => return Err(format!("Unknown command: {}", c)),
}
}
if inp_file.is_empty() {
return Err(String::from("Invalid input file"));
};
if out_dir.is_empty() {
out_dir = String::from("./");
};
Ok(generator::GeneratorInput {
input_file: inp_file,
output_file: out_file,
output_dir: out_dir,
lang: out_lang,
hex: out_hex,
})
}
fn print_help() {
print!(
"
bin2src - version {}
Copyright (C) 2020 {}
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions; for details access LICENSE file at:
https://github.com/gomiero/bin2src/
bin2src - Converts a binary file to an array of bytes, defined in a source file of another language, so you can embed it into your program.
Usage: bin2src < -l LANG | --out-language LANG > [ OPTIONS ] < FILE >
LANG and FILE are required and FILE must be the last argument.
Options:
-l, --out-language LANG specify the language, where LANG={{c|cshell|pascal|python|rust}}
-d, --out-dir PATH specify where to output source(s) file(s);
if not specified, generate in current directory
-f, --out-file OUTFILE specify the output file(s) name (* without extension *);
if not specified, output file(s) will have the same name
of input file (without extra dots).
-h, --hex output bytes in hexadecimal (for C shellcode this flag has
different behaviors. See the GitHub site for more information)
Currently supported languages:
- C
- C for shellcode
- Pascal
- Python
- Rust
", VERSION, AUTHOR);
}
mod generator {
use super::lang::c;
use super::lang::cshell;
use super::lang::pascal;
use super::lang::python;
use super::lang::rust;
use super::Lang;
use std::error::Error;
use std::fs;
use std::io::{BufReader, BufWriter, ErrorKind, Read, Write};
use std::path::PathBuf;
#[inline]
pub fn camel(s: &String) -> String {
let mut ss = s.clone().to_lowercase();
let mut first = ss.remove(0).to_uppercase().to_string();
first.push_str(ss.as_str());
first
}
#[derive(Debug)]
pub struct GeneratorOutput {
pub ifile_name: String,
pub ifile_path: PathBuf,
pub ifile_size: u64,
pub odir_path: PathBuf,
pub ofile_name: String,
pub hex: bool,
}
impl GeneratorOutput {
pub fn open_inp_file(&mut self) -> Result<BufReader<fs::File>, &'static str> {
let inp_file: BufReader<fs::File> =
match fs::OpenOptions::new().read(true).open(&self.ifile_path) {
Ok(f) => BufReader::with_capacity(32768, f),
Err(e) => {
return match e.kind() {
ErrorKind::PermissionDenied => Err("Permission"),
ErrorKind::NotFound => Err("Not found"),
_ => Err("Can't open file"),
}
}
};
Ok(inp_file)
}
pub fn write_data(
&mut self,
f: &mut BufWriter<fs::File>,
numbytes: u64,
write_if: fn(bool, bool, &mut BufWriter<fs::File>, u8) -> Result<(), Box<dyn Error>>,
sep: String,
) -> Result<(), &'static str> {
let mut ifile = self.open_inp_file()?;
let mut doblock = || -> Result<(), Box<dyn Error>> {
let mut buf = [0u8; 4096];
let mut count = 0;
'outter: loop {
let sz = ifile.read(&mut buf[..])?;
if sz == 0 {
f.flush()?;
break;
} else if sz <= 4096 {
for b in 0..sz {
if count == self.ifile_size - 1 {
write_if(self.hex, false, f, buf[b])?;
break 'outter;
};
write_if(self.hex, true, f, buf[b])?;
count += 1;
if count % numbytes == 0 {
write!(f, "{}", sep)?;
};
}
};
}
Ok(())
};
if let Err(_err) = doblock() {
Err("Error when writing data block")
} else {
Ok(())
}
}
pub fn set_output_fname(&mut self) {
if self.ofile_name.is_empty() {
self.ofile_name = self
.ifile_path
.file_stem()
.unwrap()
.to_str()
.unwrap()
.to_string();
if let Some(pos) = self.ofile_name.find(".") {
self.ofile_name.truncate(pos);
}
};
}
}
#[derive(Debug)]
pub struct GeneratorInput {
pub input_file: String,
pub output_file: String,
pub output_dir: String,
pub lang: Lang,
pub hex: bool,
}
impl GeneratorInput {
fn input_file_test(&mut self) -> Result<(String, PathBuf, u64), &'static str> {
let ifpath: PathBuf = PathBuf::from(&self.input_file);
if !(ifpath.exists() && ifpath.is_file()) {
Err("Input file does not exist or is not a file")
} else {
let ifname: String = String::from(ifpath.file_name().unwrap().to_str().unwrap());
let ifsize = ifpath.metadata().unwrap().len();
Ok((ifname, ifpath, ifsize))
}
}
fn output_dir_test(&mut self) -> Result<PathBuf, &'static str> {
let ofpath: PathBuf = PathBuf::from(&self.output_dir);
// Test for output dir
if !(ofpath.exists() && ofpath.is_dir()) {
Err("Output folder does not exist or is inaccessible")
} else {
Ok(ofpath)
}
}
pub fn generate(&mut self) -> Result<(), &'static str> {
// Test for input file
let (ifname, ifpath, ifsize) = self.input_file_test()?;
// Test for output dir
let ofpath: PathBuf = self.output_dir_test()?;
let go = GeneratorOutput {
ifile_name: ifname,
ifile_path: ifpath,
ifile_size: ifsize,
odir_path: ofpath,
ofile_name: String::from(&self.output_file),
hex: self.hex,
};
match match &self.lang {
Lang::C => c::C::new(go).generate_files(),
Lang::Cshell => cshell::Cshell::new(go).generate_files(),
Lang::Pascal => pascal::Pascal::new(go).generate_files(),
Lang::Python => python::Python::new(go).generate_files(),
Lang::Rust => rust::Rust::new(go).generate_files(),
_ => Err("Language not implemented yet"),
} {
Ok(_) => {
println!("Source(s) created.");
Ok(())
}
Err(e) => Err(e),
}
}
}
} |
match parse_result.generate() {
Err(e) => panic!("Generator error: {}", e), | random_line_split |
value.rs | //! Types representing for data which will be retrieved from the driver.
//! Currently this data is expected to look like a JSON object but this may be
//! changed in the future. Driver authors must cast the data they retrieve from | use serde::de::{Deserialize, Deserializer, Error as DeError, Visitor, SeqVisitor, MapVisitor};
use serde::de::impls::VecVisitor;
use serde_json;
use error::Error;
/// The type which represents the key for maps used throughout the Ardite
/// codebase.
///
/// Functions similarly to an object key in JavaScript.
pub type Key = String;
/// Represents a [JSON pointer][1] to a document property. Examples of a
/// pointer in this context include `/hello/world` or `/a/b/c/d`.
///
/// These pointers are represented as a list of keys.
///
/// [1]: https://duckduckgo.com/?q=json+pointer&atb=v1&ia=about
pub type Pointer = Vec<Key>;
/// Ordered representation of a map of key/value pairs, like a JSON object.
/// Backed by a linear map to maintain order and have high performance for
/// small objects.
// TODO: newtype pattern?
pub type Object = LinearMap<Key, Value>;
/// Ordered array of values, like a JSON array.
// TODO: newtype pattern?
pub type Array = Vec<Value>;
/// Various value types. Based on types in the [JSON standard][1] (see section
/// 5).
///
/// [1]: http://ecma-international.org/publications/files/ECMA-ST/ECMA-404.pdf
#[derive(PartialEq, Clone, Debug)]
pub enum Value {
/// The absence of any value.
Null,
/// True or false.
Boolean(bool),
/// An integer numeric value.
I64(i64),
/// A floating point numeric value.
F64(f64),
/// A list of characters.
String(String),
/// A map of key/value pairs.
Object(Object),
/// A list of values.
Array(Array)
}
impl Value {
/// Gets the value at a specific pointer. Helpful for retrieving nested values.
pub fn get(&self, mut pointer: Pointer) -> Option<&Value> {
match *self {
Value::Object(ref map) => {
if pointer.is_empty() {
Some(self)
} else if let Some(value) = map.get(&pointer.remove(0)) {
value.get(pointer)
} else {
None
}
},
Value::Array(ref vec) => {
if pointer.is_empty() {
Some(self)
} else if let Some(value) = pointer.remove(0).parse::<usize>().ok().map_or(None, |i| vec.get(i)) {
value.get(pointer)
} else {
None
}
},
_ => if pointer.is_empty() { Some(self) } else { None }
}
}
/// Creates a `Value` from a JSON string.
pub fn from_json(json: &str) -> Result<Value, Error> {
serde_json::from_str(json).map_err(Error::from)
}
/// Converts a `Value` into a JSON string.
pub fn to_json(&self) -> Result<String, Error> {
serde_json::to_string(self).map_err(Error::from)
}
/// Converts a `Value` into a nice and indented JSON string.
pub fn to_json_pretty(&self) -> Result<String, Error> {
serde_json::to_string_pretty(self).map_err(Error::from)
}
}
impl Serialize for Value {
#[inline]
fn serialize<S>(&self, serializer: &mut S) -> Result<(), S::Error> where S: Serializer {
match *self {
Value::Null => serializer.serialize_unit(),
Value::Boolean(value) => serializer.serialize_bool(value),
Value::I64(value) => serializer.serialize_i64(value),
Value::F64(value) => serializer.serialize_f64(value),
Value::String(ref value) => serializer.serialize_str(&value),
Value::Array(ref value) => value.serialize(serializer),
Value::Object(ref value) => value.serialize(serializer)
}
}
}
impl Deserialize for Value {
#[inline]
fn deserialize<D>(deserializer: &mut D) -> Result<Value, D::Error> where D: Deserializer {
struct ValueVisitor;
impl Visitor for ValueVisitor {
type Value = Value;
#[inline] fn visit_bool<E>(&mut self, value: bool) -> Result<Value, E> { Ok(Value::Boolean(value)) }
#[inline] fn visit_u64<E>(&mut self, value: u64) -> Result<Value, E> { Ok(Value::I64(value as i64)) }
#[inline] fn visit_i64<E>(&mut self, value: i64) -> Result<Value, E> { Ok(Value::I64(value)) }
#[inline] fn visit_f64<E>(&mut self, value: f64) -> Result<Value, E> { Ok(Value::F64(value)) }
#[inline] fn visit_str<E>(&mut self, value: &str) -> Result<Value, E> where E: DeError { self.visit_string(value.to_owned()) }
#[inline] fn visit_string<E>(&mut self, value: String) -> Result<Value, E> { Ok(Value::String(value)) }
#[inline] fn visit_none<E>(&mut self) -> Result<Value, E> { Ok(Value::Null) }
#[inline] fn visit_some<D>(&mut self, deserializer: &mut D) -> Result<Value, D::Error> where D: Deserializer { Deserialize::deserialize(deserializer) }
#[inline] fn visit_unit<E>(&mut self) -> Result<Value, E> { Ok(Value::Null) }
#[inline] fn visit_seq<V>(&mut self, visitor: V) -> Result<Value, V::Error> where V: SeqVisitor { let values = try!(VecVisitor::new().visit_seq(visitor)); Ok(Value::Array(values)) }
#[inline]
fn visit_map<V>(&mut self, mut visitor: V) -> Result<Value, V::Error> where V: MapVisitor {
let mut object = LinearMap::with_capacity(visitor.size_hint().0);
while let Some((key, value)) = try!(visitor.visit()) {
object.insert(key, value);
}
try!(visitor.end());
Ok(Value::Object(object))
}
}
deserializer.deserialize(ValueVisitor)
}
}
impl<V> From<Option<V>> for Value where V: Into<Value> {
fn from(option: Option<V>) -> Self {
match option {
None => Value::Null,
Some(value) => value.into()
}
}
}
impl From<bool> for Value {
fn from(boolean: bool) -> Self {
Value::Boolean(boolean)
}
}
impl From<i64> for Value {
fn from(number: i64) -> Self {
Value::I64(number)
}
}
impl From<f64> for Value {
fn from(number: f64) -> Self {
Value::F64(number)
}
}
impl From<String> for Value {
fn from(string: String) -> Self {
Value::String(string)
}
}
impl<'a> From<&'a str> for Value {
fn from(string: &'a str) -> Self {
Value::from(string.to_owned())
}
}
/// An iterator of values. Used by drivers to convert their own iterator
/// implementations into a single type.
pub struct ValueIter<'a> {
iter: Box<Iterator<Item=Value> + 'a>
}
impl<'a> ValueIter<'a> {
/// Create a new value iterator.
pub fn new<I>(iter: I) -> Self where I: Iterator<Item=Value> + 'a {
ValueIter {
iter: Box::new(iter)
}
}
}
impl<'a> Iterator for ValueIter<'a> {
type Item = Value;
#[inline]
fn next(&mut self) -> Option<Value> {
self.iter.next()
}
}
#[cfg(test)]
mod tests {
use value::Value;
#[test]
fn test_get_primitive() {
assert_eq!(value!().get(point![]).cloned(), Some(value!()));
assert_eq!(value!().get(point!["hello"]).cloned(), None);
assert_eq!(value!().get(point!["a", "b", "c", "d", "e"]).cloned(), None);
assert_eq!(value!(true).get(point![]).cloned(), Some(value!(true)));
assert_eq!(value!(true).get(point!["hello"]).cloned(), None);
assert_eq!(value!(36).get(point![]).cloned(), Some(value!(36)));
assert_eq!(value!(36).get(point!["hello"]).cloned(), None);
assert_eq!(value!("world").get(point![]).cloned(), Some(value!("world")));
assert_eq!(value!("world").get(point!["hello"]).cloned(), None);
}
#[test]
fn test_get_object() {
let object = value!({
"hello" => true,
"world" => 8,
"yolo" => "swag",
"5" => (),
"moon" => {
"hello" => "yoyo"
}
});
assert_eq!(object.get(point![]).cloned(), Some(object.clone()));
assert_eq!(object.get(point!["hello"]).cloned(), Some(value!(true)));
assert_eq!(object.get(point!["yolo"]).cloned(), Some(value!("swag")));
assert_eq!(object.get(point!["5"]).cloned(), Some(value!()));
assert_eq!(object.get(point!["world", "hello"]).cloned(), None);
assert_eq!(object.get(point!["moon", "hello"]).cloned(), Some(value!("yoyo")));
assert_eq!(object.get(point!["moon", "nope"]).cloned(), None);
}
#[test]
fn test_get_array() {
let array = value!([
false,
64,
{
"hello" => true,
"world" => false,
"moon" => {
"goodbye" => "yoyo"
}
},
[[1, 2, 3], 4, 5 ]
]);
assert_eq!(array.get(point![]).cloned(), Some(array.clone()));
assert_eq!(array.get(point!["0"]).cloned(), Some(value!(false)));
assert_eq!(array.get(point!["1"]).cloned(), Some(value!(64)));
assert_eq!(array.get(point!["2", "hello"]).cloned(), Some(value!(true)));
assert_eq!(array.get(point!["2", "moon", "goodbye"]).cloned(), Some(value!("yoyo")));
assert_eq!(array.get(point!["length"]).cloned(), None);
assert_eq!(array.get(point!["3", "0", "1"]).cloned(), Some(value!(2)));
}
#[test]
fn test_from_json() {
assert_eq!(Value::from_json("null").unwrap(), value!());
assert_eq!(Value::from_json("true").unwrap(), value!(true));
assert_eq!(Value::from_json("false").unwrap(), value!(false));
assert_eq!(Value::from_json("7").unwrap(), value!(7));
assert_eq!(Value::from_json("3.3").unwrap(), value!(3.3));
assert_eq!(Value::from_json(r#""Hello,\n\"world\"!""#).unwrap(), value!("Hello,\n\"world\"!"));
assert_eq!(Value::from_json(r#"{"hello":"world","foo":true,"null":null,"goodbye":{"moon":2}}"#).unwrap(), value!({
"hello" => "world",
"foo" => true,
"null" => (),
"goodbye" => {
"moon" => 2
}
}));
assert_eq!(
Value::from_json(r#"["world",3.3,{"hello":"world"},null,null,[1,2,3],null]"#).unwrap(),
value!(["world", 3.3, { "hello" => "world" }, (), (), [1, 2, 3], ()])
);
}
#[test]
fn test_to_json() {
assert_eq!(&value!().to_json().unwrap(), "null");
assert_eq!(&value!(true).to_json().unwrap(), "true");
assert_eq!(&value!(false).to_json().unwrap(), "false");
assert_eq!(&value!(7).to_json().unwrap(), "7");
assert_eq!(&value!(6.667).to_json().unwrap(), "6.667");
assert_eq!(&value!("Hello,\n\"world\"!").to_json().unwrap(), r#""Hello,\n\"world\"!""#);
assert_eq!(&value!({
"hello" => "world",
"foo" => true,
"null" => (),
"goodbye" => {
"moon" => 2
}
}).to_json().unwrap(), r#"{"hello":"world","foo":true,"null":null,"goodbye":{"moon":2}}"#);
assert_eq!(
&value!(["world", 3.333, { "hello" => "world" }, (), (), [1, 2, 3], ()]).to_json().unwrap(),
r#"["world",3.333,{"hello":"world"},null,null,[1,2,3],null]"#
);
}
#[test]
fn test_to_json_pretty() {
assert_eq!(
&value!(["world", 3.333, { "hello" => "world" }, (), (), [1, 2, 3], ()]).to_json_pretty().unwrap(),
"[\n \"world\",\n 3.333,\n {\n \"hello\": \"world\"\n },\n null,\n null,\n [\n 1,\n 2,\n 3\n ],\n null\n]"
);
}
} | //! the driver to these types.
use linear_map::LinearMap;
use serde::ser::{Serialize, Serializer}; | random_line_split |
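Each row above is one FIM training sample laid out in the dataset's columns (`file_name`, `prefix`, `suffix`, `middle`, `fim_type`). Assuming the usual fill-in-the-middle convention — `middle` is the span cut out between `prefix` and `suffix` — a row can be spliced back into the original file. A minimal sketch (the function and the sample strings are illustrative, not taken from the dataset):

```rust
/// Reassemble the original source file from one FIM row by splicing the
/// removed `middle` back between `prefix` and `suffix`.
fn reassemble(prefix: &str, middle: &str, suffix: &str) -> String {
    let mut out = String::with_capacity(prefix.len() + middle.len() + suffix.len());
    out.push_str(prefix);
    out.push_str(middle);
    out.push_str(suffix);
    out
}

fn main() {
    let prefix = "fn add(a: i32, b: i32) -> i32 ";
    let middle = "{ a + b }";
    let suffix = "\n";
    assert_eq!(reassemble(prefix, middle, suffix), "fn add(a: i32, b: i32) -> i32 { a + b }\n");
    println!("ok");
}
```

For `random_line_split` rows the cut falls on line boundaries, so this concatenation restores the file byte-for-byte; for `identifier_name` and `identifier_body` rows the cut is the removed identifier or function body.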
value.rs | //! Types representing data which will be retrieved from the driver.
//! Currently this data is expected to look like a JSON object but this may be
//! changed in the future. Driver authors must cast the data they retrieve from
//! the driver to these types.
use linear_map::LinearMap;
use serde::ser::{Serialize, Serializer};
use serde::de::{Deserialize, Deserializer, Error as DeError, Visitor, SeqVisitor, MapVisitor};
use serde::de::impls::VecVisitor;
use serde_json;
use error::Error;
/// The type which represents the key for maps used throughout the Ardite
/// codebase.
///
/// Functions similarly to an object key in JavaScript.
pub type Key = String;
/// Represents a [JSON pointer][1] to a document property. Examples of a
/// pointer in this context include `/hello/world` or `/a/b/c/d`.
///
/// These pointers are represented as a list of keys.
///
/// [1]: https://duckduckgo.com/?q=json+pointer&atb=v1&ia=about
pub type Pointer = Vec<Key>;
/// Ordered representation of a map of key/value pairs, like a JSON object.
/// Backed by a linear map to maintain order and have high performance for
/// small objects.
// TODO: newtype pattern?
pub type Object = LinearMap<Key, Value>;
/// Ordered array of values, like a JSON array.
// TODO: newtype pattern?
pub type Array = Vec<Value>;
/// Various value types. Based on types in the [JSON standard][1] (see section
/// 5).
///
/// [1]: http://ecma-international.org/publications/files/ECMA-ST/ECMA-404.pdf
#[derive(PartialEq, Clone, Debug)]
pub enum Value {
  /// The absence of any value.
Null,
/// True or false.
Boolean(bool),
/// An integer numeric value.
I64(i64),
/// A floating point numeric value.
F64(f64),
/// A list of characters.
String(String),
/// A map of key/value pairs.
Object(Object),
/// A list of values.
Array(Array)
}
impl Value {
/// Gets a value at a specific point. Helpful for retrieving nested values.
pub fn get(&self, mut pointer: Pointer) -> Option<&Value> {
match *self {
Value::Object(ref map) => {
if pointer.is_empty() {
Some(self)
} else if let Some(value) = map.get(&pointer.remove(0)) {
value.get(pointer)
} else {
None
}
},
Value::Array(ref vec) => {
if pointer.is_empty() {
Some(self)
} else if let Some(value) = pointer.remove(0).parse::<usize>().ok().map_or(None, |i| vec.get(i)) {
value.get(pointer)
} else {
None
}
},
_ => if pointer.is_empty() { Some(self) } else { None }
}
}
/// Creates a `Value` from a JSON string.
pub fn from_json(json: &str) -> Result<Value, Error> {
serde_json::from_str(json).map_err(Error::from)
}
/// Converts a `Value` into a JSON string.
pub fn to_json(&self) -> Result<String, Error> {
serde_json::to_string(self).map_err(Error::from)
}
/// Converts a `Value` into a nice and indented JSON string.
pub fn to_json_pretty(&self) -> Result<String, Error> {
serde_json::to_string_pretty(self).map_err(Error::from)
}
}
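The `get` traversal above consumes the pointer one key at a time, recursing into objects by name and into arrays by parsing the key as an index. A self-contained sketch of the same idea, using a cut-down stand-in enum (`V` here is hypothetical, not the crate's `Value`):

```rust
use std::collections::BTreeMap;

// Minimal stand-in for the crate's `Value`, for illustration only.
#[derive(Debug, PartialEq)]
enum V {
    Null,
    I64(i64),
    Object(BTreeMap<String, V>),
    Array(Vec<V>),
}

impl V {
    /// Walk a JSON-pointer-style list of keys; array elements are addressed
    /// by keys that parse as `usize` indices, anything else yields `None`.
    fn get(&self, mut pointer: Vec<String>) -> Option<&V> {
        if pointer.is_empty() {
            return Some(self);
        }
        let key = pointer.remove(0);
        match self {
            V::Object(map) => map.get(&key).and_then(|v| v.get(pointer)),
            V::Array(vec) => key
                .parse::<usize>()
                .ok()
                .and_then(|i| vec.get(i))
                .and_then(|v| v.get(pointer)),
            _ => None,
        }
    }
}

fn main() {
    let mut inner = BTreeMap::new();
    inner.insert("moon".to_string(), V::I64(2));
    let doc = V::Array(vec![V::Null, V::Object(inner)]);
    assert_eq!(doc.get(vec!["1".into(), "moon".into()]), Some(&V::I64(2)));
    assert_eq!(doc.get(vec!["length".into()]), None);
    println!("ok");
}
```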
impl Serialize for Value {
#[inline]
fn serialize<S>(&self, serializer: &mut S) -> Result<(), S::Error> where S: Serializer {
match *self {
Value::Null => serializer.serialize_unit(),
Value::Boolean(value) => serializer.serialize_bool(value),
Value::I64(value) => serializer.serialize_i64(value),
Value::F64(value) => serializer.serialize_f64(value),
Value::String(ref value) => serializer.serialize_str(&value),
Value::Array(ref value) => value.serialize(serializer),
Value::Object(ref value) => value.serialize(serializer)
}
}
}
impl Deserialize for Value {
#[inline]
fn deserialize<D>(deserializer: &mut D) -> Result<Value, D::Error> where D: Deserializer {
struct ValueVisitor;
impl Visitor for ValueVisitor {
type Value = Value;
#[inline] fn visit_bool<E>(&mut self, value: bool) -> Result<Value, E> { Ok(Value::Boolean(value)) }
#[inline] fn visit_u64<E>(&mut self, value: u64) -> Result<Value, E> { Ok(Value::I64(value as i64)) }
#[inline] fn visit_i64<E>(&mut self, value: i64) -> Result<Value, E> { Ok(Value::I64(value)) }
#[inline] fn visit_f64<E>(&mut self, value: f64) -> Result<Value, E> { Ok(Value::F64(value)) }
#[inline] fn visit_str<E>(&mut self, value: &str) -> Result<Value, E> where E: DeError { self.visit_string(value.to_owned()) }
#[inline] fn visit_string<E>(&mut self, value: String) -> Result<Value, E> { Ok(Value::String(value)) }
#[inline] fn visit_none<E>(&mut self) -> Result<Value, E> { Ok(Value::Null) }
#[inline] fn visit_some<D>(&mut self, deserializer: &mut D) -> Result<Value, D::Error> where D: Deserializer { Deserialize::deserialize(deserializer) }
#[inline] fn visit_unit<E>(&mut self) -> Result<Value, E> { Ok(Value::Null) }
#[inline] fn visit_seq<V>(&mut self, visitor: V) -> Result<Value, V::Error> where V: SeqVisitor { let values = try!(VecVisitor::new().visit_seq(visitor)); Ok(Value::Array(values)) }
#[inline]
fn visit_map<V>(&mut self, mut visitor: V) -> Result<Value, V::Error> where V: MapVisitor {
let mut object = LinearMap::with_capacity(visitor.size_hint().0);
while let Some((key, value)) = try!(visitor.visit()) {
object.insert(key, value);
}
try!(visitor.end());
Ok(Value::Object(object))
}
}
deserializer.deserialize(ValueVisitor)
}
}
impl<V> From<Option<V>> for Value where V: Into<Value> {
fn from(option: Option<V>) -> Self {
match option {
None => Value::Null,
Some(value) => value.into()
}
}
}
impl From<bool> for Value {
fn from(boolean: bool) -> Self {
Value::Boolean(boolean)
}
}
impl From<i64> for Value {
fn from(number: i64) -> Self {
Value::I64(number)
}
}
impl From<f64> for Value {
fn from(number: f64) -> Self {
Value::F64(number)
}
}
impl From<String> for Value {
fn from(string: String) -> Self {
Value::String(string)
}
}
impl<'a> From<&'a str> for Value {
fn from(string: &'a str) -> Self {
Value::from(string.to_owned())
}
}
/// An iterator of values. Used by drivers to convert their own iterator
/// implementations into a single type.
pub struct ValueIter<'a> {
iter: Box<Iterator<Item=Value> + 'a>
}
impl<'a> ValueIter<'a> {
/// Create a new value iterator.
pub fn new<I>(iter: I) -> Self where I: Iterator<Item=Value> + 'a {
ValueIter {
iter: Box::new(iter)
}
}
}
impl<'a> Iterator for ValueIter<'a> {
type Item = Value;
#[inline]
fn next(&mut self) -> Option<Value> {
self.iter.next()
}
}
#[cfg(test)]
mod tests {
use value::Value;
#[test]
fn test_get_primitive() {
assert_eq!(value!().get(point![]).cloned(), Some(value!()));
assert_eq!(value!().get(point!["hello"]).cloned(), None);
assert_eq!(value!().get(point!["a", "b", "c", "d", "e"]).cloned(), None);
assert_eq!(value!(true).get(point![]).cloned(), Some(value!(true)));
assert_eq!(value!(true).get(point!["hello"]).cloned(), None);
assert_eq!(value!(36).get(point![]).cloned(), Some(value!(36)));
assert_eq!(value!(36).get(point!["hello"]).cloned(), None);
assert_eq!(value!("world").get(point![]).cloned(), Some(value!("world")));
assert_eq!(value!("world").get(point!["hello"]).cloned(), None);
}
#[test]
fn test_get_object() {
let object = value!({
"hello" => true,
"world" => 8,
"yolo" => "swag",
"5" => (),
"moon" => {
"hello" => "yoyo"
}
});
assert_eq!(object.get(point![]).cloned(), Some(object.clone()));
assert_eq!(object.get(point!["hello"]).cloned(), Some(value!(true)));
assert_eq!(object.get(point!["yolo"]).cloned(), Some(value!("swag")));
assert_eq!(object.get(point!["5"]).cloned(), Some(value!()));
assert_eq!(object.get(point!["world", "hello"]).cloned(), None);
assert_eq!(object.get(point!["moon", "hello"]).cloned(), Some(value!("yoyo")));
assert_eq!(object.get(point!["moon", "nope"]).cloned(), None);
}
#[test]
fn test_get_array() {
let array = value!([
false,
64,
{
"hello" => true,
"world" => false,
"moon" => {
"goodbye" => "yoyo"
}
},
[[1, 2, 3], 4, 5 ]
]);
assert_eq!(array.get(point![]).cloned(), Some(array.clone()));
assert_eq!(array.get(point!["0"]).cloned(), Some(value!(false)));
assert_eq!(array.get(point!["1"]).cloned(), Some(value!(64)));
assert_eq!(array.get(point!["2", "hello"]).cloned(), Some(value!(true)));
assert_eq!(array.get(point!["2", "moon", "goodbye"]).cloned(), Some(value!("yoyo")));
assert_eq!(array.get(point!["length"]).cloned(), None);
assert_eq!(array.get(point!["3", "0", "1"]).cloned(), Some(value!(2)));
}
#[test]
fn test_from_json() {
assert_eq!(Value::from_json("null").unwrap(), value!());
assert_eq!(Value::from_json("true").unwrap(), value!(true));
assert_eq!(Value::from_json("false").unwrap(), value!(false));
assert_eq!(Value::from_json("7").unwrap(), value!(7));
assert_eq!(Value::from_json("3.3").unwrap(), value!(3.3));
assert_eq!(Value::from_json(r#""Hello,\n\"world\"!""#).unwrap(), value!("Hello,\n\"world\"!"));
assert_eq!(Value::from_json(r#"{"hello":"world","foo":true,"null":null,"goodbye":{"moon":2}}"#).unwrap(), value!({
"hello" => "world",
"foo" => true,
"null" => (),
"goodbye" => {
"moon" => 2
}
}));
assert_eq!(
Value::from_json(r#"["world",3.3,{"hello":"world"},null,null,[1,2,3],null]"#).unwrap(),
value!(["world", 3.3, { "hello" => "world" }, (), (), [1, 2, 3], ()])
);
}
#[test]
fn test_to_json() |
#[test]
fn test_to_json_pretty() {
assert_eq!(
&value!(["world", 3.333, { "hello" => "world" }, (), (), [1, 2, 3], ()]).to_json_pretty().unwrap(),
"[\n \"world\",\n 3.333,\n {\n \"hello\": \"world\"\n },\n null,\n null,\n [\n 1,\n 2,\n 3\n ],\n null\n]"
);
}
}
| {
assert_eq!(&value!().to_json().unwrap(), "null");
assert_eq!(&value!(true).to_json().unwrap(), "true");
assert_eq!(&value!(false).to_json().unwrap(), "false");
assert_eq!(&value!(7).to_json().unwrap(), "7");
assert_eq!(&value!(6.667).to_json().unwrap(), "6.667");
assert_eq!(&value!("Hello,\n\"world\"!").to_json().unwrap(), r#""Hello,\n\"world\"!""#);
assert_eq!(&value!({
"hello" => "world",
"foo" => true,
"null" => (),
"goodbye" => {
"moon" => 2
}
}).to_json().unwrap(), r#"{"hello":"world","foo":true,"null":null,"goodbye":{"moon":2}}"#);
assert_eq!(
&value!(["world", 3.333, { "hello" => "world" }, (), (), [1, 2, 3], ()]).to_json().unwrap(),
r#"["world",3.333,{"hello":"world"},null,null,[1,2,3],null]"#
);
} | identifier_body |
value.rs | //! Types representing data which will be retrieved from the driver.
//! Currently this data is expected to look like a JSON object but this may be
//! changed in the future. Driver authors must cast the data they retrieve from
//! the driver to these types.
use linear_map::LinearMap;
use serde::ser::{Serialize, Serializer};
use serde::de::{Deserialize, Deserializer, Error as DeError, Visitor, SeqVisitor, MapVisitor};
use serde::de::impls::VecVisitor;
use serde_json;
use error::Error;
/// The type which represents the key for maps used throughout the Ardite
/// codebase.
///
/// Functions similarly to an object key in JavaScript.
pub type Key = String;
/// Represents a [JSON pointer][1] to a document property. Examples of a
/// pointer in this context include `/hello/world` or `/a/b/c/d`.
///
/// These pointers are represented as a list of keys.
///
/// [1]: https://duckduckgo.com/?q=json+pointer&atb=v1&ia=about
pub type Pointer = Vec<Key>;
/// Ordered representation of a map of key/value pairs, like a JSON object.
/// Backed by a linear map to maintain order and have high performance for
/// small objects.
// TODO: newtype pattern?
pub type Object = LinearMap<Key, Value>;
/// Ordered array of values, like a JSON array.
// TODO: newtype pattern?
pub type Array = Vec<Value>;
/// Various value types. Based on types in the [JSON standard][1] (see section
/// 5).
///
/// [1]: http://ecma-international.org/publications/files/ECMA-ST/ECMA-404.pdf
#[derive(PartialEq, Clone, Debug)]
pub enum Value {
  /// The absence of any value.
Null,
/// True or false.
Boolean(bool),
/// An integer numeric value.
I64(i64),
/// A floating point numeric value.
F64(f64),
/// A list of characters.
String(String),
/// A map of key/value pairs.
Object(Object),
/// A list of values.
Array(Array)
}
impl Value {
/// Gets a value at a specific point. Helpful for retrieving nested values.
pub fn get(&self, mut pointer: Pointer) -> Option<&Value> {
match *self {
Value::Object(ref map) => {
if pointer.is_empty() {
Some(self)
} else if let Some(value) = map.get(&pointer.remove(0)) {
value.get(pointer)
} else {
None
}
},
Value::Array(ref vec) => {
if pointer.is_empty() {
Some(self)
} else if let Some(value) = pointer.remove(0).parse::<usize>().ok().map_or(None, |i| vec.get(i)) {
value.get(pointer)
} else {
None
}
},
_ => if pointer.is_empty() { Some(self) } else { None }
}
}
/// Creates a `Value` from a JSON string.
pub fn from_json(json: &str) -> Result<Value, Error> {
serde_json::from_str(json).map_err(Error::from)
}
/// Converts a `Value` into a JSON string.
pub fn to_json(&self) -> Result<String, Error> {
serde_json::to_string(self).map_err(Error::from)
}
/// Converts a `Value` into a nice and indented JSON string.
pub fn to_json_pretty(&self) -> Result<String, Error> {
serde_json::to_string_pretty(self).map_err(Error::from)
}
}
impl Serialize for Value {
#[inline]
fn serialize<S>(&self, serializer: &mut S) -> Result<(), S::Error> where S: Serializer {
match *self {
Value::Null => serializer.serialize_unit(),
Value::Boolean(value) => serializer.serialize_bool(value),
Value::I64(value) => serializer.serialize_i64(value),
Value::F64(value) => serializer.serialize_f64(value),
Value::String(ref value) => serializer.serialize_str(&value),
Value::Array(ref value) => value.serialize(serializer),
Value::Object(ref value) => value.serialize(serializer)
}
}
}
impl Deserialize for Value {
#[inline]
fn deserialize<D>(deserializer: &mut D) -> Result<Value, D::Error> where D: Deserializer {
struct ValueVisitor;
impl Visitor for ValueVisitor {
type Value = Value;
#[inline] fn visit_bool<E>(&mut self, value: bool) -> Result<Value, E> { Ok(Value::Boolean(value)) }
#[inline] fn visit_u64<E>(&mut self, value: u64) -> Result<Value, E> { Ok(Value::I64(value as i64)) }
#[inline] fn visit_i64<E>(&mut self, value: i64) -> Result<Value, E> { Ok(Value::I64(value)) }
#[inline] fn visit_f64<E>(&mut self, value: f64) -> Result<Value, E> { Ok(Value::F64(value)) }
#[inline] fn visit_str<E>(&mut self, value: &str) -> Result<Value, E> where E: DeError { self.visit_string(value.to_owned()) }
#[inline] fn visit_string<E>(&mut self, value: String) -> Result<Value, E> { Ok(Value::String(value)) }
#[inline] fn visit_none<E>(&mut self) -> Result<Value, E> { Ok(Value::Null) }
#[inline] fn visit_some<D>(&mut self, deserializer: &mut D) -> Result<Value, D::Error> where D: Deserializer { Deserialize::deserialize(deserializer) }
#[inline] fn visit_unit<E>(&mut self) -> Result<Value, E> { Ok(Value::Null) }
#[inline] fn visit_seq<V>(&mut self, visitor: V) -> Result<Value, V::Error> where V: SeqVisitor { let values = try!(VecVisitor::new().visit_seq(visitor)); Ok(Value::Array(values)) }
#[inline]
fn visit_map<V>(&mut self, mut visitor: V) -> Result<Value, V::Error> where V: MapVisitor {
let mut object = LinearMap::with_capacity(visitor.size_hint().0);
while let Some((key, value)) = try!(visitor.visit()) {
object.insert(key, value);
}
try!(visitor.end());
Ok(Value::Object(object))
}
}
deserializer.deserialize(ValueVisitor)
}
}
impl<V> From<Option<V>> for Value where V: Into<Value> {
fn from(option: Option<V>) -> Self {
match option {
None => Value::Null,
Some(value) => value.into()
}
}
}
impl From<bool> for Value {
fn from(boolean: bool) -> Self {
Value::Boolean(boolean)
}
}
impl From<i64> for Value {
fn from(number: i64) -> Self {
Value::I64(number)
}
}
impl From<f64> for Value {
fn from(number: f64) -> Self {
Value::F64(number)
}
}
impl From<String> for Value {
fn from(string: String) -> Self {
Value::String(string)
}
}
impl<'a> From<&'a str> for Value {
fn from(string: &'a str) -> Self {
Value::from(string.to_owned())
}
}
/// An iterator of values. Used by drivers to convert their own iterator
/// implementations into a single type.
pub struct ValueIter<'a> {
iter: Box<Iterator<Item=Value> + 'a>
}
impl<'a> ValueIter<'a> {
/// Create a new value iterator.
pub fn new<I>(iter: I) -> Self where I: Iterator<Item=Value> + 'a {
ValueIter {
iter: Box::new(iter)
}
}
}
impl<'a> Iterator for ValueIter<'a> {
type Item = Value;
#[inline]
fn next(&mut self) -> Option<Value> {
self.iter.next()
}
}
#[cfg(test)]
mod tests {
use value::Value;
#[test]
fn | () {
assert_eq!(value!().get(point![]).cloned(), Some(value!()));
assert_eq!(value!().get(point!["hello"]).cloned(), None);
assert_eq!(value!().get(point!["a", "b", "c", "d", "e"]).cloned(), None);
assert_eq!(value!(true).get(point![]).cloned(), Some(value!(true)));
assert_eq!(value!(true).get(point!["hello"]).cloned(), None);
assert_eq!(value!(36).get(point![]).cloned(), Some(value!(36)));
assert_eq!(value!(36).get(point!["hello"]).cloned(), None);
assert_eq!(value!("world").get(point![]).cloned(), Some(value!("world")));
assert_eq!(value!("world").get(point!["hello"]).cloned(), None);
}
#[test]
fn test_get_object() {
let object = value!({
"hello" => true,
"world" => 8,
"yolo" => "swag",
"5" => (),
"moon" => {
"hello" => "yoyo"
}
});
assert_eq!(object.get(point![]).cloned(), Some(object.clone()));
assert_eq!(object.get(point!["hello"]).cloned(), Some(value!(true)));
assert_eq!(object.get(point!["yolo"]).cloned(), Some(value!("swag")));
assert_eq!(object.get(point!["5"]).cloned(), Some(value!()));
assert_eq!(object.get(point!["world", "hello"]).cloned(), None);
assert_eq!(object.get(point!["moon", "hello"]).cloned(), Some(value!("yoyo")));
assert_eq!(object.get(point!["moon", "nope"]).cloned(), None);
}
#[test]
fn test_get_array() {
let array = value!([
false,
64,
{
"hello" => true,
"world" => false,
"moon" => {
"goodbye" => "yoyo"
}
},
[[1, 2, 3], 4, 5 ]
]);
assert_eq!(array.get(point![]).cloned(), Some(array.clone()));
assert_eq!(array.get(point!["0"]).cloned(), Some(value!(false)));
assert_eq!(array.get(point!["1"]).cloned(), Some(value!(64)));
assert_eq!(array.get(point!["2", "hello"]).cloned(), Some(value!(true)));
assert_eq!(array.get(point!["2", "moon", "goodbye"]).cloned(), Some(value!("yoyo")));
assert_eq!(array.get(point!["length"]).cloned(), None);
assert_eq!(array.get(point!["3", "0", "1"]).cloned(), Some(value!(2)));
}
#[test]
fn test_from_json() {
assert_eq!(Value::from_json("null").unwrap(), value!());
assert_eq!(Value::from_json("true").unwrap(), value!(true));
assert_eq!(Value::from_json("false").unwrap(), value!(false));
assert_eq!(Value::from_json("7").unwrap(), value!(7));
assert_eq!(Value::from_json("3.3").unwrap(), value!(3.3));
assert_eq!(Value::from_json(r#""Hello,\n\"world\"!""#).unwrap(), value!("Hello,\n\"world\"!"));
assert_eq!(Value::from_json(r#"{"hello":"world","foo":true,"null":null,"goodbye":{"moon":2}}"#).unwrap(), value!({
"hello" => "world",
"foo" => true,
"null" => (),
"goodbye" => {
"moon" => 2
}
}));
assert_eq!(
Value::from_json(r#"["world",3.3,{"hello":"world"},null,null,[1,2,3],null]"#).unwrap(),
value!(["world", 3.3, { "hello" => "world" }, (), (), [1, 2, 3], ()])
);
}
#[test]
fn test_to_json() {
assert_eq!(&value!().to_json().unwrap(), "null");
assert_eq!(&value!(true).to_json().unwrap(), "true");
assert_eq!(&value!(false).to_json().unwrap(), "false");
assert_eq!(&value!(7).to_json().unwrap(), "7");
assert_eq!(&value!(6.667).to_json().unwrap(), "6.667");
assert_eq!(&value!("Hello,\n\"world\"!").to_json().unwrap(), r#""Hello,\n\"world\"!""#);
assert_eq!(&value!({
"hello" => "world",
"foo" => true,
"null" => (),
"goodbye" => {
"moon" => 2
}
}).to_json().unwrap(), r#"{"hello":"world","foo":true,"null":null,"goodbye":{"moon":2}}"#);
assert_eq!(
&value!(["world", 3.333, { "hello" => "world" }, (), (), [1, 2, 3], ()]).to_json().unwrap(),
r#"["world",3.333,{"hello":"world"},null,null,[1,2,3],null]"#
);
}
#[test]
fn test_to_json_pretty() {
assert_eq!(
&value!(["world", 3.333, { "hello" => "world" }, (), (), [1, 2, 3], ()]).to_json_pretty().unwrap(),
"[\n \"world\",\n 3.333,\n {\n \"hello\": \"world\"\n },\n null,\n null,\n [\n 1,\n 2,\n 3\n ],\n null\n]"
);
}
}
| test_get_primitive | identifier_name |
balloon.rs | #[error("Fallocate fail.: {0}")]
FallocateFail(std::io::Error),
#[error("Madvise fail.: {0}")]
MadviseFail(std::io::Error),
#[error("Failed to EventFd write.: {0}")]
EventFdWriteFail(std::io::Error),
#[error("Invalid queue index: {0}")]
InvalidQueueIndex(usize),
    #[error("Failed to signal: {0}")]
FailedSignal(io::Error),
#[error("Descriptor chain is too short")]
DescriptorChainTooShort,
#[error("Failed adding used index: {0}")]
QueueAddUsed(virtio_queue::Error),
#[error("Failed creating an iterator over the queue: {0}")]
QueueIterator(virtio_queue::Error),
}
// Got from include/uapi/linux/virtio_balloon.h
#[repr(C)]
#[derive(Copy, Clone, Debug, Default, Versionize)]
pub struct VirtioBalloonConfig {
// Number of pages host wants Guest to give up.
num_pages: u32,
// Number of pages we've actually got in balloon.
actual: u32,
}
#[derive(Clone, Debug)]
struct PartiallyBalloonedPage {
addr: u64,
bitmap: Vec<u64>,
page_size: u64,
}
impl PartiallyBalloonedPage {
fn new() -> Self {
let page_size = get_page_size();
let len = ((page_size >> VIRTIO_BALLOON_PFN_SHIFT) + 63) / 64;
        // Initialize each padding bit to 1 in the bitmap.
let mut bitmap = vec![0_u64; len as usize];
let pad_num = len * 64 - (page_size >> VIRTIO_BALLOON_PFN_SHIFT);
        bitmap[(len - 1) as usize] = !((1 << (64 - pad_num)) - 1);
Self {
addr: 0,
bitmap,
page_size,
}
}
fn pfn_match(&self, addr: u64) -> bool {
        self.addr == addr & !(self.page_size - 1)
}
fn bitmap_full(&self) -> bool {
self.bitmap.iter().all(|b| *b == u64::MAX)
}
fn | (&mut self, addr: u64) {
let addr_offset = (addr % self.page_size) >> VIRTIO_BALLOON_PFN_SHIFT;
self.bitmap[(addr_offset / 64) as usize] |= 1 << (addr_offset % 64);
}
fn reset(&mut self) {
let len = ((self.page_size >> VIRTIO_BALLOON_PFN_SHIFT) + 63) / 64;
self.addr = 0;
self.bitmap = vec![0; len as usize];
let pad_num = len * 64 - (self.page_size >> VIRTIO_BALLOON_PFN_SHIFT);
        self.bitmap[(len - 1) as usize] = !((1 << (64 - pad_num)) - 1);
}
}
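The bitmap bookkeeping above is easiest to see with concrete page sizes. The sketch below reproduces the sizing and padding logic with plain `std`, assuming `VIRTIO_BALLOON_PFN_SHIFT` is 12 (balloon PFNs are 4 KiB units): the unused top bits of the last word are pre-set to 1 so that `bitmap_full` only depends on bits that map to real 4 KiB chunks.

```rust
// Sketch of the bitmap sizing used by `PartiallyBalloonedPage`, with the
// PFN shift assumed to be 12 (4 KiB balloon page granularity).
const VIRTIO_BALLOON_PFN_SHIFT: u64 = 12;

/// Number of u64 words needed to track every 4 KiB chunk of one host page.
fn bitmap_len(page_size: u64) -> u64 {
    ((page_size >> VIRTIO_BALLOON_PFN_SHIFT) + 63) / 64
}

/// Initial bitmap: padding bits beyond the real chunks are pre-set to 1,
/// so a full bitmap means every real chunk has been ballooned.
fn initial_bitmap(page_size: u64) -> Vec<u64> {
    let len = bitmap_len(page_size);
    let mut bitmap = vec![0u64; len as usize];
    let pad_num = len * 64 - (page_size >> VIRTIO_BALLOON_PFN_SHIFT);
    bitmap[(len - 1) as usize] = !((1u64 << (64 - pad_num)) - 1);
    bitmap
}

fn main() {
    // 4 KiB host pages: one chunk per page, 63 padding bits pre-set.
    assert_eq!(bitmap_len(4096), 1);
    assert_eq!(initial_bitmap(4096)[0], !1u64);
    // 64 KiB host pages (e.g. some aarch64 configs): 16 chunks, 48 padding bits.
    assert_eq!(bitmap_len(65536), 1);
    assert_eq!(initial_bitmap(65536)[0], !0xFFFFu64);
    println!("ok");
}
```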
const CONFIG_ACTUAL_OFFSET: u64 = 4;
const CONFIG_ACTUAL_SIZE: usize = 4;
// SAFETY: it only has data and has no implicit padding.
unsafe impl ByteValued for VirtioBalloonConfig {}
struct BalloonEpollHandler {
mem: GuestMemoryAtomic<GuestMemoryMmap>,
queues: Vec<Queue>,
interrupt_cb: Arc<dyn VirtioInterrupt>,
inflate_queue_evt: EventFd,
deflate_queue_evt: EventFd,
reporting_queue_evt: Option<EventFd>,
kill_evt: EventFd,
pause_evt: EventFd,
pbp: Option<PartiallyBalloonedPage>,
}
impl BalloonEpollHandler {
fn signal(&self, int_type: VirtioInterruptType) -> result::Result<(), Error> {
self.interrupt_cb.trigger(int_type).map_err(|e| {
error!("Failed to signal used queue: {:?}", e);
Error::FailedSignal(e)
})
}
fn advise_memory_range(
memory: &GuestMemoryMmap,
range_base: GuestAddress,
range_len: usize,
advice: libc::c_int,
) -> result::Result<(), Error> {
let hva = memory
.get_host_address(range_base)
.map_err(Error::GuestMemory)?;
let res =
// SAFETY: Need unsafe to do syscall madvise
unsafe { libc::madvise(hva as *mut libc::c_void, range_len as libc::size_t, advice) };
        if res != 0 {
return Err(Error::MadviseFail(io::Error::last_os_error()));
}
Ok(())
}
fn release_memory_range(
memory: &GuestMemoryMmap,
range_base: GuestAddress,
range_len: usize,
) -> result::Result<(), Error> {
let region = memory.find_region(range_base).ok_or(Error::GuestMemory(
GuestMemoryError::InvalidGuestAddress(range_base),
))?;
if let Some(f_off) = region.file_offset() {
let offset = range_base.0 - region.start_addr().0;
// SAFETY: FFI call with valid arguments
let res = unsafe {
libc::fallocate64(
f_off.file().as_raw_fd(),
libc::FALLOC_FL_PUNCH_HOLE | libc::FALLOC_FL_KEEP_SIZE,
(offset + f_off.start()) as libc::off64_t,
range_len as libc::off64_t,
)
};
            if res != 0 {
return Err(Error::FallocateFail(io::Error::last_os_error()));
}
}
Self::advise_memory_range(memory, range_base, range_len, libc::MADV_DONTNEED)
}
fn release_memory_range_4k(
pbp: &mut Option<PartiallyBalloonedPage>,
memory: &GuestMemoryMmap,
pfn: u32,
) -> result::Result<(), Error> {
let range_base = GuestAddress((pfn as u64) << VIRTIO_BALLOON_PFN_SHIFT);
let range_len = 1 << VIRTIO_BALLOON_PFN_SHIFT;
let page_size: u64 = get_page_size();
if page_size == 1 << VIRTIO_BALLOON_PFN_SHIFT {
return Self::release_memory_range(memory, range_base, range_len);
}
if pbp.is_none() {
*pbp = Some(PartiallyBalloonedPage::new());
}
        if !pbp.as_ref().unwrap().pfn_match(range_base.0) {
            // We are trying to free a memory range on a different host page
            // than the one the current pbp tracks. Flush the pbp.
pbp.as_mut().unwrap().reset();
pbp.as_mut().unwrap().addr = align_page_size_down(range_base.0);
}
pbp.as_mut().unwrap().set_bit(range_base.0);
if pbp.as_ref().unwrap().bitmap_full() {
Self::release_memory_range(
memory,
vm_memory::GuestAddress(pbp.as_ref().unwrap().addr),
page_size as usize,
)?;
pbp.as_mut().unwrap().reset();
}
Ok(())
}
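Underlying both release paths is simple PFN arithmetic: a balloon PFN shifted left by `VIRTIO_BALLOON_PFN_SHIFT` gives the guest-physical address, and masking with the page mask aligns it down to its host page. A small sketch of that arithmetic (the shift constant is assumed to be 12, per the virtio balloon convention; `align_page_size_down` here is a stand-in for the helper used above):

```rust
const VIRTIO_BALLOON_PFN_SHIFT: u64 = 12;

/// Guest-physical address of a balloon PFN (4 KiB units).
fn pfn_to_addr(pfn: u32) -> u64 {
    (pfn as u64) << VIRTIO_BALLOON_PFN_SHIFT
}

/// Align an address down to the start of its host page
/// (`page_size` must be a power of two).
fn align_page_size_down(addr: u64, page_size: u64) -> u64 {
    addr & !(page_size - 1)
}

fn main() {
    assert_eq!(pfn_to_addr(1), 4096);
    assert_eq!(pfn_to_addr(0x10), 0x10000);
    // A 4 KiB chunk in the middle of a 64 KiB host page aligns to the page start.
    assert_eq!(align_page_size_down(pfn_to_addr(0x13), 65536), 0x10000);
    println!("ok");
}
```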
fn process_queue(&mut self, queue_index: usize) -> result::Result<(), Error> {
let mut used_descs = false;
while let Some(mut desc_chain) =
self.queues[queue_index].pop_descriptor_chain(self.mem.memory())
{
let desc = desc_chain.next().ok_or(Error::DescriptorChainTooShort)?;
let data_chunk_size = size_of::<u32>();
// The head contains the request type which MUST be readable.
if desc.is_write_only() {
                error!("The head descriptor, which contains the request type, must be readable");
return Err(Error::UnexpectedWriteOnlyDescriptor);
}
            if desc.len() as usize % data_chunk_size != 0 {
                error!("the request size {} is not a multiple of the data chunk size", desc.len());
return Err(Error::InvalidRequest);
}
let mut offset = 0u64;
while offset < desc.len() as u64 {
let addr = desc.addr().checked_add(offset).unwrap();
let pfn: u32 = desc_chain
.memory()
.read_obj(addr)
.map_err(Error::GuestMemory)?;
offset += data_chunk_size as u64;
match queue_index {
0 => {
Self::release_memory_range_4k(&mut self.pbp, desc_chain.memory(), pfn)?;
}
1 => {
let page_size = get_page_size() as usize;
let rbase = align_page_size_down((pfn as u64) << VIRTIO_BALLOON_PFN_SHIFT);
Self::advise_memory_range(
desc_chain.memory(),
vm_memory::GuestAddress(rbase),
page_size,
libc::MADV_WILLNEED,
)?;
}
_ => return Err(Error::InvalidQueueIndex(queue_index)),
}
}
self.queues[queue_index]
.add_used(desc_chain.memory(), desc_chain.head_index(), desc.len())
.map_err(Error::QueueAddUsed)?;
used_descs = true;
}
if used_descs {
self.signal(VirtioInterruptType::Queue(queue_index as u16))
} else {
Ok(())
}
}
fn process_reporting_queue(&mut self, queue_index: usize) -> result::Result<(), Error> {
let mut used_descs = false;
while let Some(mut desc_chain) =
self.queues[queue_index].pop_descriptor_chain(self.mem.memory())
{
let mut descs_len = 0;
while let Some(desc) = desc_chain.next() {
descs_len += desc.len();
Self::release_memory_range(desc_chain.memory(), desc.addr(), desc.len() as usize)?;
}
self.queues[queue_index]
.add_used(desc_chain.memory(), desc_chain.head_index(), descs_len)
.map_err(Error::QueueAddUsed)?;
used_descs = true;
}
if used_descs {
self.signal(VirtioInterruptType::Queue(queue_index as u16))
} else {
Ok(())
}
}
fn run(
&mut self,
paused: Arc<AtomicBool>,
paused_sync: Arc<Barrier>,
) -> result::Result<(), EpollHelperError> {
let mut helper = EpollHelper::new(&self.kill_evt, &self.pause_evt)?;
helper.add_event(self.inflate_queue_evt.as_raw_fd(), INFLATE_QUEUE_EVENT)?;
helper.add_event(self.deflate_queue_evt.as_raw_fd(), DEFLATE_QUEUE_EVENT)?;
if let Some(reporting_queue_evt) = self.reporting_queue_evt.as_ref() {
helper.add_event(reporting_queue_evt.as_raw_fd(), REPORTING_QUEUE_EVENT)?;
}
helper.run(paused, paused_sync, self)?;
Ok(())
}
}
impl EpollHelperHandler for BalloonEpollHandler {
fn handle_event(
&mut self,
_helper: &mut EpollHelper,
event: &epoll::Event,
) -> result::Result<(), EpollHelperError> {
let ev_type = event.data as u16;
match ev_type {
INFLATE_QUEUE_EVENT => {
self.inflate_queue_evt.read().map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to get inflate queue event: {:?}",
e
))
})?;
self.process_queue(0).map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to signal used inflate queue: {:?}",
e
))
})?;
}
DEFLATE_QUEUE_EVENT => {
self.deflate_queue_evt.read().map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to get deflate queue event: {:?}",
e
))
})?;
self.process_queue(1).map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to signal used deflate queue: {:?}",
e
))
})?;
}
REPORTING_QUEUE_EVENT => {
if let Some(reporting_queue_evt) = self.reporting_queue_evt.as_ref() {
reporting_queue_evt.read().map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to get reporting queue event: {:?}",
e
))
})?;
self.process_reporting_queue(2).map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to signal used inflate queue: {:?}",
e
))
})?;
} else {
return Err(EpollHelperError::HandleEvent(anyhow!(
"Invalid reporting queue event as no eventfd registered"
)));
}
}
_ => {
return Err(EpollHelperError::HandleEvent(anyhow!(
"Unknown event for virtio-balloon"
)));
}
}
Ok(())
}
}
#[derive(Versionize)]
pub struct BalloonState {
pub avail_features: u64,
pub acked_features: u64,
pub config: VirtioBalloonConfig,
}
impl VersionMapped for BalloonState {}
// Virtio device for reclaiming and returning guest memory through ballooning.
pub struct Balloon {
common: VirtioCommon,
id: String,
config: VirtioBalloonConfig,
seccomp_action: SeccompAction,
exit_evt: EventFd,
interrupt_cb: Option<Arc<dyn VirtioInterrupt>>,
}
impl Balloon {
// Create a new virtio-balloon.
pub fn new(
id: String,
size: u64,
deflate_on_oom: bool,
free_page_reporting: bool,
seccomp_action: SeccompAction,
exit_evt: EventFd,
state: Option<BalloonState>,
) -> io::Result<Self> {
let mut queue_sizes = vec![QUEUE_SIZE; MIN_NUM_QUEUES];
let (avail_features, acked_features, config, paused) = if let Some(state) = state {
info!("Restoring virtio-balloon {}", id);
(
state.avail_features,
state.acked_features,
state.config,
true,
)
} else {
let mut avail_features = 1u64 << VIRTIO_F_VERSION_1;
if deflate_on_oom {
avail_features |= 1u64 << VIRTIO_BALLOON_F_DEFLATE_ON_OOM;
}
if free_page_reporting {
avail_features |= 1u64 << VIRTIO_BALLOON_F_REPORTING;
}
let config = VirtioBalloonConfig {
num_pages: (size >> VIRTIO_BALLOON_PFN_SHIFT) as u32,
..Default::default()
};
(avail_features, 0, config, false)
};
if free_page_reporting {
queue_sizes.push(REPORTING_QUEUE_SIZE);
}
Ok(Balloon {
common: VirtioCommon {
device_type: VirtioDeviceType::Balloon as u32,
avail_features,
acked_features,
paused_sync: Some(Arc::new(Barrier::new(2))),
queue_sizes,
min_queues: MIN_NUM_QUEUES as u16,
paused: Arc::new(AtomicBool::new(paused)),
..Default::default()
},
id,
config,
seccomp_action,
exit_evt,
interrupt_cb: None,
})
}
pub fn resize(&mut self, size: u64) -> Result<(), Error> {
self.config.num_pages = (size >> VIRTIO_BALLOON_PFN_SHIFT) as u32;
if let Some(interrupt_cb) = &self.interrupt_cb {
interrupt_cb
.trigger(VirtioInterruptType::Config)
.map_err(Error::FailedSignal)
} else {
Ok(())
}
}
// Get the actual size of the virtio-balloon.
pub fn get_actual(&self) -> u64 {
(self.config.actual as u64) << VIRTIO_BALLOON_PFN_SHIFT
}
fn state(&self) -> BalloonState {
BalloonState {
avail_features: self.common.avail_features,
acked_features: self.common.acked_features,
config: self.config,
}
}
#[cfg(fuzzing)]
pub fn wait_for_epoll_threads(&mut self) {
self.common.wait_for_epoll_threads();
}
}
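Balloon sizes cross the `new()`, `resize()`, and `get_actual()` boundary as counts of 4 KiB page frames via `VIRTIO_BALLOON_PFN_SHIFT`. A minimal, self-contained sketch of that round-trip (the shift value of 12 is assumed from the virtio spec; function names here are illustrative, not part of the device):

```rust
// Sketch of the size <-> page-frame-number math used by resize()/get_actual().
// VIRTIO_BALLOON_PFN_SHIFT = 12 (4 KiB frames) is assumed from the virtio spec.
const VIRTIO_BALLOON_PFN_SHIFT: u64 = 12;

fn size_to_num_pages(size: u64) -> u32 {
    // Matches `config.num_pages = (size >> VIRTIO_BALLOON_PFN_SHIFT) as u32`.
    (size >> VIRTIO_BALLOON_PFN_SHIFT) as u32
}

fn num_pages_to_size(num_pages: u32) -> u64 {
    // Matches `(self.config.actual as u64) << VIRTIO_BALLOON_PFN_SHIFT`.
    (num_pages as u64) << VIRTIO_BALLOON_PFN_SHIFT
}

fn main() {
    let size = 128u64 << 20; // a 128 MiB balloon target
    let pages = size_to_num_pages(size);
    assert_eq!(pages, 32_768);
    // Sizes that are 4 KiB-aligned round-trip exactly.
    assert_eq!(num_pages_to_size(pages), size);
    println!("ok");
}
```

Note that sizes not aligned to 4 KiB lose their sub-frame remainder in the shift, which is why the device only ever reports whole page frames.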
impl Drop for Balloon {
fn drop(&mut self) {
if let Some(kill_evt) = self.common.kill_evt.take() {
// Ignore the result because there is nothing we can do about it.
let _ = kill_evt.write(1);
}
self.common.wait_for_epoll_threads();
}
}
impl VirtioDevice for Balloon {
fn device_type(&self) -> u32 {
self.common.device_type
}
fn queue_max_sizes(&self) -> &[u16] {
&self.common.queue_sizes
}
fn features(&self) -> u64 {
self.common.avail_features
}
fn ack_features(&mut self, value: u64) {
self.common. | set_bit | identifier_name |
balloon.rs | #[error("Fallocate failed: {0}")]
    FallocateFail(std::io::Error),
    #[error("Madvise failed: {0}")]
    MadviseFail(std::io::Error),
    #[error("Failed to write to EventFd: {0}")]
    EventFdWriteFail(std::io::Error),
    #[error("Invalid queue index: {0}")]
    InvalidQueueIndex(usize),
    #[error("Failed to signal: {0}")]
    FailedSignal(io::Error),
#[error("Descriptor chain is too short")]
DescriptorChainTooShort,
#[error("Failed adding used index: {0}")]
QueueAddUsed(virtio_queue::Error),
#[error("Failed creating an iterator over the queue: {0}")]
QueueIterator(virtio_queue::Error),
}
// Got from include/uapi/linux/virtio_balloon.h
#[repr(C)]
#[derive(Copy, Clone, Debug, Default, Versionize)]
pub struct VirtioBalloonConfig {
// Number of pages host wants Guest to give up.
num_pages: u32,
// Number of pages we've actually got in balloon.
actual: u32,
}
#[derive(Clone, Debug)]
struct PartiallyBalloonedPage {
addr: u64,
bitmap: Vec<u64>,
page_size: u64,
}
impl PartiallyBalloonedPage {
fn new() -> Self {
let page_size = get_page_size();
let len = ((page_size >> VIRTIO_BALLOON_PFN_SHIFT) + 63) / 64;
        // Initialize each padding bit to 1 in the bitmap.
        let mut bitmap = vec![0_u64; len as usize];
        let pad_num = len * 64 - (page_size >> VIRTIO_BALLOON_PFN_SHIFT);
        bitmap[(len - 1) as usize] = !((1 << (64 - pad_num)) - 1);
Self {
addr: 0,
bitmap,
page_size,
}
}
fn pfn_match(&self, addr: u64) -> bool {
        self.addr == addr & !(self.page_size - 1)
}
fn bitmap_full(&self) -> bool {
self.bitmap.iter().all(|b| *b == u64::MAX)
}
fn set_bit(&mut self, addr: u64) {
let addr_offset = (addr % self.page_size) >> VIRTIO_BALLOON_PFN_SHIFT;
self.bitmap[(addr_offset / 64) as usize] |= 1 << (addr_offset % 64);
}
fn reset(&mut self) {
let len = ((self.page_size >> VIRTIO_BALLOON_PFN_SHIFT) + 63) / 64;
self.addr = 0;
self.bitmap = vec![0; len as usize];
let pad_num = len * 64 - (self.page_size >> VIRTIO_BALLOON_PFN_SHIFT);
        self.bitmap[(len - 1) as usize] = !((1 << (64 - pad_num)) - 1);
}
}
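The padding math in `new()` and `reset()` above pre-sets the bitmap bits that do not correspond to real 4 KiB sub-pages, so `bitmap_full()` only waits on genuine guest reports. A self-contained sketch of just that arithmetic (the 16 KiB host page is a hypothetical value chosen for illustration; `padded_bitmap` is not a function of the device):

```rust
// Reproduces only the padded-bitmap arithmetic from PartiallyBalloonedPage::new().
const VIRTIO_BALLOON_PFN_SHIFT: u64 = 12;

fn padded_bitmap(page_size: u64) -> Vec<u64> {
    let subpages = page_size >> VIRTIO_BALLOON_PFN_SHIFT;
    // Number of u64 words needed to hold one bit per 4 KiB sub-page.
    let len = (subpages + 63) / 64;
    let mut bitmap = vec![0u64; len as usize];
    // Pre-set the unused (padding) bits of the last word to 1
    // (pad_num > 0 for the page sizes exercised here).
    let pad_num = len * 64 - subpages;
    bitmap[(len - 1) as usize] = !((1u64 << (64 - pad_num)) - 1);
    bitmap
}

fn main() {
    // A hypothetical 16 KiB page holds four 4 KiB sub-pages: one u64 word,
    // with only its low 4 bits left to be filled in by guest reports.
    let bitmap = padded_bitmap(16 * 1024);
    assert_eq!(bitmap, vec![!0xFu64]);
    // Once those 4 bits are set, the word reaches u64::MAX and the
    // whole host page can be released in one madvise/fallocate call.
    assert_eq!(bitmap[0] | 0xF, u64::MAX);
    println!("ok");
}
```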
const CONFIG_ACTUAL_OFFSET: u64 = 4;
const CONFIG_ACTUAL_SIZE: usize = 4;
// SAFETY: it only has data and has no implicit padding.
unsafe impl ByteValued for VirtioBalloonConfig {}
struct BalloonEpollHandler {
mem: GuestMemoryAtomic<GuestMemoryMmap>,
queues: Vec<Queue>,
interrupt_cb: Arc<dyn VirtioInterrupt>,
inflate_queue_evt: EventFd,
deflate_queue_evt: EventFd,
reporting_queue_evt: Option<EventFd>,
kill_evt: EventFd,
pause_evt: EventFd,
pbp: Option<PartiallyBalloonedPage>,
}
impl BalloonEpollHandler {
fn signal(&self, int_type: VirtioInterruptType) -> result::Result<(), Error> {
self.interrupt_cb.trigger(int_type).map_err(|e| {
error!("Failed to signal used queue: {:?}", e);
Error::FailedSignal(e)
})
}
fn advise_memory_range(
memory: &GuestMemoryMmap,
range_base: GuestAddress,
range_len: usize,
advice: libc::c_int,
) -> result::Result<(), Error> {
let hva = memory
.get_host_address(range_base)
.map_err(Error::GuestMemory)?;
let res =
// SAFETY: Need unsafe to do syscall madvise
unsafe { libc::madvise(hva as *mut libc::c_void, range_len as libc::size_t, advice) };
        if res != 0 {
return Err(Error::MadviseFail(io::Error::last_os_error()));
}
Ok(())
}
fn release_memory_range(
memory: &GuestMemoryMmap,
range_base: GuestAddress,
range_len: usize,
) -> result::Result<(), Error> {
let region = memory.find_region(range_base).ok_or(Error::GuestMemory(
GuestMemoryError::InvalidGuestAddress(range_base),
))?;
if let Some(f_off) = region.file_offset() {
let offset = range_base.0 - region.start_addr().0;
// SAFETY: FFI call with valid arguments
let res = unsafe {
libc::fallocate64(
f_off.file().as_raw_fd(),
libc::FALLOC_FL_PUNCH_HOLE | libc::FALLOC_FL_KEEP_SIZE,
(offset + f_off.start()) as libc::off64_t,
range_len as libc::off64_t,
)
};
            if res != 0 {
return Err(Error::FallocateFail(io::Error::last_os_error()));
}
}
Self::advise_memory_range(memory, range_base, range_len, libc::MADV_DONTNEED)
}
fn release_memory_range_4k(
pbp: &mut Option<PartiallyBalloonedPage>,
memory: &GuestMemoryMmap,
pfn: u32,
) -> result::Result<(), Error> {
let range_base = GuestAddress((pfn as u64) << VIRTIO_BALLOON_PFN_SHIFT);
let range_len = 1 << VIRTIO_BALLOON_PFN_SHIFT;
let page_size: u64 = get_page_size();
if page_size == 1 << VIRTIO_BALLOON_PFN_SHIFT {
return Self::release_memory_range(memory, range_base, range_len);
}
if pbp.is_none() {
*pbp = Some(PartiallyBalloonedPage::new());
}
        if !pbp.as_ref().unwrap().pfn_match(range_base.0) {
            // We are trying to free a memory range on a different host page than
            // the one currently tracked by pbp, so flush and retarget pbp.
            pbp.as_mut().unwrap().reset();
            pbp.as_mut().unwrap().addr = align_page_size_down(range_base.0);
}
pbp.as_mut().unwrap().set_bit(range_base.0);
if pbp.as_ref().unwrap().bitmap_full() {
Self::release_memory_range(
memory,
vm_memory::GuestAddress(pbp.as_ref().unwrap().addr),
page_size as usize,
)?;
pbp.as_mut().unwrap().reset();
}
Ok(())
}
fn process_queue(&mut self, queue_index: usize) -> result::Result<(), Error> {
let mut used_descs = false;
while let Some(mut desc_chain) =
self.queues[queue_index].pop_descriptor_chain(self.mem.memory())
{
let desc = desc_chain.next().ok_or(Error::DescriptorChainTooShort)?;
let data_chunk_size = size_of::<u32>();
// The head contains the request type which MUST be readable.
if desc.is_write_only() {
                error!("The head descriptor carrying the request type must not be write-only");
return Err(Error::UnexpectedWriteOnlyDescriptor);
}
            if desc.len() as usize % data_chunk_size != 0 {
                error!("invalid request size: {}", desc.len());
return Err(Error::InvalidRequest);
}
let mut offset = 0u64;
while offset < desc.len() as u64 {
let addr = desc.addr().checked_add(offset).unwrap();
let pfn: u32 = desc_chain
.memory()
.read_obj(addr)
.map_err(Error::GuestMemory)?;
offset += data_chunk_size as u64;
match queue_index {
0 => |
1 => {
let page_size = get_page_size() as usize;
let rbase = align_page_size_down((pfn as u64) << VIRTIO_BALLOON_PFN_SHIFT);
Self::advise_memory_range(
desc_chain.memory(),
vm_memory::GuestAddress(rbase),
page_size,
libc::MADV_WILLNEED,
)?;
}
_ => return Err(Error::InvalidQueueIndex(queue_index)),
}
}
self.queues[queue_index]
.add_used(desc_chain.memory(), desc_chain.head_index(), desc.len())
.map_err(Error::QueueAddUsed)?;
used_descs = true;
}
if used_descs {
self.signal(VirtioInterruptType::Queue(queue_index as u16))
} else {
Ok(())
}
}
fn process_reporting_queue(&mut self, queue_index: usize) -> result::Result<(), Error> {
let mut used_descs = false;
while let Some(mut desc_chain) =
self.queues[queue_index].pop_descriptor_chain(self.mem.memory())
{
let mut descs_len = 0;
while let Some(desc) = desc_chain.next() {
descs_len += desc.len();
Self::release_memory_range(desc_chain.memory(), desc.addr(), desc.len() as usize)?;
}
self.queues[queue_index]
.add_used(desc_chain.memory(), desc_chain.head_index(), descs_len)
.map_err(Error::QueueAddUsed)?;
used_descs = true;
}
if used_descs {
self.signal(VirtioInterruptType::Queue(queue_index as u16))
} else {
Ok(())
}
}
fn run(
&mut self,
paused: Arc<AtomicBool>,
paused_sync: Arc<Barrier>,
) -> result::Result<(), EpollHelperError> {
let mut helper = EpollHelper::new(&self.kill_evt, &self.pause_evt)?;
helper.add_event(self.inflate_queue_evt.as_raw_fd(), INFLATE_QUEUE_EVENT)?;
helper.add_event(self.deflate_queue_evt.as_raw_fd(), DEFLATE_QUEUE_EVENT)?;
if let Some(reporting_queue_evt) = self.reporting_queue_evt.as_ref() {
helper.add_event(reporting_queue_evt.as_raw_fd(), REPORTING_QUEUE_EVENT)?;
}
helper.run(paused, paused_sync, self)?;
Ok(())
}
}
impl EpollHelperHandler for BalloonEpollHandler {
fn handle_event(
&mut self,
_helper: &mut EpollHelper,
event: &epoll::Event,
) -> result::Result<(), EpollHelperError> {
let ev_type = event.data as u16;
match ev_type {
INFLATE_QUEUE_EVENT => {
self.inflate_queue_evt.read().map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to get inflate queue event: {:?}",
e
))
})?;
self.process_queue(0).map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to signal used inflate queue: {:?}",
e
))
})?;
}
DEFLATE_QUEUE_EVENT => {
self.deflate_queue_evt.read().map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to get deflate queue event: {:?}",
e
))
})?;
self.process_queue(1).map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to signal used deflate queue: {:?}",
e
))
})?;
}
REPORTING_QUEUE_EVENT => {
if let Some(reporting_queue_evt) = self.reporting_queue_evt.as_ref() {
reporting_queue_evt.read().map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to get reporting queue event: {:?}",
e
))
})?;
self.process_reporting_queue(2).map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
                            "Failed to signal used reporting queue: {:?}",
e
))
})?;
} else {
return Err(EpollHelperError::HandleEvent(anyhow!(
"Invalid reporting queue event as no eventfd registered"
)));
}
}
_ => {
return Err(EpollHelperError::HandleEvent(anyhow!(
"Unknown event for virtio-balloon"
)));
}
}
Ok(())
}
}
#[derive(Versionize)]
pub struct BalloonState {
pub avail_features: u64,
pub acked_features: u64,
pub config: VirtioBalloonConfig,
}
impl VersionMapped for BalloonState {}
// Virtio device for memory ballooning, letting the host reclaim memory from the guest.
pub struct Balloon {
common: VirtioCommon,
id: String,
config: VirtioBalloonConfig,
seccomp_action: SeccompAction,
exit_evt: EventFd,
interrupt_cb: Option<Arc<dyn VirtioInterrupt>>,
}
impl Balloon {
// Create a new virtio-balloon.
pub fn new(
id: String,
size: u64,
deflate_on_oom: bool,
free_page_reporting: bool,
seccomp_action: SeccompAction,
exit_evt: EventFd,
state: Option<BalloonState>,
) -> io::Result<Self> {
let mut queue_sizes = vec![QUEUE_SIZE; MIN_NUM_QUEUES];
let (avail_features, acked_features, config, paused) = if let Some(state) = state {
info!("Restoring virtio-balloon {}", id);
(
state.avail_features,
state.acked_features,
state.config,
true,
)
} else {
let mut avail_features = 1u64 << VIRTIO_F_VERSION_1;
if deflate_on_oom {
avail_features |= 1u64 << VIRTIO_BALLOON_F_DEFLATE_ON_OOM;
}
if free_page_reporting {
avail_features |= 1u64 << VIRTIO_BALLOON_F_REPORTING;
}
let config = VirtioBalloonConfig {
num_pages: (size >> VIRTIO_BALLOON_PFN_SHIFT) as u32,
..Default::default()
};
(avail_features, 0, config, false)
};
if free_page_reporting {
queue_sizes.push(REPORTING_QUEUE_SIZE);
}
Ok(Balloon {
common: VirtioCommon {
device_type: VirtioDeviceType::Balloon as u32,
avail_features,
acked_features,
paused_sync: Some(Arc::new(Barrier::new(2))),
queue_sizes,
min_queues: MIN_NUM_QUEUES as u16,
paused: Arc::new(AtomicBool::new(paused)),
..Default::default()
},
id,
config,
seccomp_action,
exit_evt,
interrupt_cb: None,
})
}
pub fn resize(&mut self, size: u64) -> Result<(), Error> {
self.config.num_pages = (size >> VIRTIO_BALLOON_PFN_SHIFT) as u32;
if let Some(interrupt_cb) = &self.interrupt_cb {
interrupt_cb
.trigger(VirtioInterruptType::Config)
.map_err(Error::FailedSignal)
} else {
Ok(())
}
}
// Get the actual size of the virtio-balloon.
pub fn get_actual(&self) -> u64 {
(self.config.actual as u64) << VIRTIO_BALLOON_PFN_SHIFT
}
fn state(&self) -> BalloonState {
BalloonState {
avail_features: self.common.avail_features,
acked_features: self.common.acked_features,
config: self.config,
}
}
#[cfg(fuzzing)]
pub fn wait_for_epoll_threads(&mut self) {
self.common.wait_for_epoll_threads();
}
}
impl Drop for Balloon {
fn drop(&mut self) {
if let Some(kill_evt) = self.common.kill_evt.take() {
// Ignore the result because there is nothing we can do about it.
let _ = kill_evt.write(1);
}
self.common.wait_for_epoll_threads();
}
}
impl VirtioDevice for Balloon {
fn device_type(&self) -> u32 {
self.common.device_type
}
fn queue_max_sizes(&self) -> &[u16] {
&self.common.queue_sizes
}
fn features(&self) -> u64 {
self.common.avail_features
}
fn ack_features(&mut self, value: u64) {
self.common. | {
Self::release_memory_range_4k(&mut self.pbp, desc_chain.memory(), pfn)?;
} | conditional_block |
balloon.rs | // SAFETY: FFI call with valid arguments
let res = unsafe {
libc::fallocate64(
f_off.file().as_raw_fd(),
libc::FALLOC_FL_PUNCH_HOLE | libc::FALLOC_FL_KEEP_SIZE,
(offset + f_off.start()) as libc::off64_t,
range_len as libc::off64_t,
)
};
            if res != 0 {
return Err(Error::FallocateFail(io::Error::last_os_error()));
}
}
Self::advise_memory_range(memory, range_base, range_len, libc::MADV_DONTNEED)
}
fn release_memory_range_4k(
pbp: &mut Option<PartiallyBalloonedPage>,
memory: &GuestMemoryMmap,
pfn: u32,
) -> result::Result<(), Error> {
let range_base = GuestAddress((pfn as u64) << VIRTIO_BALLOON_PFN_SHIFT);
let range_len = 1 << VIRTIO_BALLOON_PFN_SHIFT;
let page_size: u64 = get_page_size();
if page_size == 1 << VIRTIO_BALLOON_PFN_SHIFT {
return Self::release_memory_range(memory, range_base, range_len);
}
if pbp.is_none() {
*pbp = Some(PartiallyBalloonedPage::new());
}
        if !pbp.as_ref().unwrap().pfn_match(range_base.0) {
            // We are trying to free a memory range on a different host page than
            // the one currently tracked by pbp, so flush and retarget pbp.
            pbp.as_mut().unwrap().reset();
            pbp.as_mut().unwrap().addr = align_page_size_down(range_base.0);
}
pbp.as_mut().unwrap().set_bit(range_base.0);
if pbp.as_ref().unwrap().bitmap_full() {
Self::release_memory_range(
memory,
vm_memory::GuestAddress(pbp.as_ref().unwrap().addr),
page_size as usize,
)?;
pbp.as_mut().unwrap().reset();
}
Ok(())
}
fn process_queue(&mut self, queue_index: usize) -> result::Result<(), Error> {
let mut used_descs = false;
while let Some(mut desc_chain) =
self.queues[queue_index].pop_descriptor_chain(self.mem.memory())
{
let desc = desc_chain.next().ok_or(Error::DescriptorChainTooShort)?;
let data_chunk_size = size_of::<u32>();
// The head contains the request type which MUST be readable.
if desc.is_write_only() {
                error!("The head descriptor carrying the request type must not be write-only");
return Err(Error::UnexpectedWriteOnlyDescriptor);
}
            if desc.len() as usize % data_chunk_size != 0 {
                error!("invalid request size: {}", desc.len());
return Err(Error::InvalidRequest);
}
let mut offset = 0u64;
while offset < desc.len() as u64 {
let addr = desc.addr().checked_add(offset).unwrap();
let pfn: u32 = desc_chain
.memory()
.read_obj(addr)
.map_err(Error::GuestMemory)?;
offset += data_chunk_size as u64;
match queue_index {
0 => {
Self::release_memory_range_4k(&mut self.pbp, desc_chain.memory(), pfn)?;
}
1 => {
let page_size = get_page_size() as usize;
let rbase = align_page_size_down((pfn as u64) << VIRTIO_BALLOON_PFN_SHIFT);
Self::advise_memory_range(
desc_chain.memory(),
vm_memory::GuestAddress(rbase),
page_size,
libc::MADV_WILLNEED,
)?;
}
_ => return Err(Error::InvalidQueueIndex(queue_index)),
}
}
self.queues[queue_index]
.add_used(desc_chain.memory(), desc_chain.head_index(), desc.len())
.map_err(Error::QueueAddUsed)?;
used_descs = true;
}
if used_descs {
self.signal(VirtioInterruptType::Queue(queue_index as u16))
} else {
Ok(())
}
}
fn process_reporting_queue(&mut self, queue_index: usize) -> result::Result<(), Error> {
let mut used_descs = false;
while let Some(mut desc_chain) =
self.queues[queue_index].pop_descriptor_chain(self.mem.memory())
{
let mut descs_len = 0;
while let Some(desc) = desc_chain.next() {
descs_len += desc.len();
Self::release_memory_range(desc_chain.memory(), desc.addr(), desc.len() as usize)?;
}
self.queues[queue_index]
.add_used(desc_chain.memory(), desc_chain.head_index(), descs_len)
.map_err(Error::QueueAddUsed)?;
used_descs = true;
}
if used_descs {
self.signal(VirtioInterruptType::Queue(queue_index as u16))
} else {
Ok(())
}
}
fn run(
&mut self,
paused: Arc<AtomicBool>,
paused_sync: Arc<Barrier>,
) -> result::Result<(), EpollHelperError> {
let mut helper = EpollHelper::new(&self.kill_evt, &self.pause_evt)?;
helper.add_event(self.inflate_queue_evt.as_raw_fd(), INFLATE_QUEUE_EVENT)?;
helper.add_event(self.deflate_queue_evt.as_raw_fd(), DEFLATE_QUEUE_EVENT)?;
if let Some(reporting_queue_evt) = self.reporting_queue_evt.as_ref() {
helper.add_event(reporting_queue_evt.as_raw_fd(), REPORTING_QUEUE_EVENT)?;
}
helper.run(paused, paused_sync, self)?;
Ok(())
}
}
impl EpollHelperHandler for BalloonEpollHandler {
fn handle_event(
&mut self,
_helper: &mut EpollHelper,
event: &epoll::Event,
) -> result::Result<(), EpollHelperError> {
let ev_type = event.data as u16;
match ev_type {
INFLATE_QUEUE_EVENT => {
self.inflate_queue_evt.read().map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to get inflate queue event: {:?}",
e
))
})?;
self.process_queue(0).map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to signal used inflate queue: {:?}",
e
))
})?;
}
DEFLATE_QUEUE_EVENT => {
self.deflate_queue_evt.read().map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to get deflate queue event: {:?}",
e
))
})?;
self.process_queue(1).map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to signal used deflate queue: {:?}",
e
))
})?;
}
REPORTING_QUEUE_EVENT => {
if let Some(reporting_queue_evt) = self.reporting_queue_evt.as_ref() {
reporting_queue_evt.read().map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
"Failed to get reporting queue event: {:?}",
e
))
})?;
self.process_reporting_queue(2).map_err(|e| {
EpollHelperError::HandleEvent(anyhow!(
                            "Failed to signal used reporting queue: {:?}",
e
))
})?;
} else {
return Err(EpollHelperError::HandleEvent(anyhow!(
"Invalid reporting queue event as no eventfd registered"
)));
}
}
_ => {
return Err(EpollHelperError::HandleEvent(anyhow!(
"Unknown event for virtio-balloon"
)));
}
}
Ok(())
}
}
#[derive(Versionize)]
pub struct BalloonState {
pub avail_features: u64,
pub acked_features: u64,
pub config: VirtioBalloonConfig,
}
impl VersionMapped for BalloonState {}
// Virtio device for memory ballooning, letting the host reclaim memory from the guest.
pub struct Balloon {
common: VirtioCommon,
id: String,
config: VirtioBalloonConfig,
seccomp_action: SeccompAction,
exit_evt: EventFd,
interrupt_cb: Option<Arc<dyn VirtioInterrupt>>,
}
impl Balloon {
// Create a new virtio-balloon.
pub fn new(
id: String,
size: u64,
deflate_on_oom: bool,
free_page_reporting: bool,
seccomp_action: SeccompAction,
exit_evt: EventFd,
state: Option<BalloonState>,
) -> io::Result<Self> {
let mut queue_sizes = vec![QUEUE_SIZE; MIN_NUM_QUEUES];
let (avail_features, acked_features, config, paused) = if let Some(state) = state {
info!("Restoring virtio-balloon {}", id);
(
state.avail_features,
state.acked_features,
state.config,
true,
)
} else {
let mut avail_features = 1u64 << VIRTIO_F_VERSION_1;
if deflate_on_oom {
avail_features |= 1u64 << VIRTIO_BALLOON_F_DEFLATE_ON_OOM;
}
if free_page_reporting {
avail_features |= 1u64 << VIRTIO_BALLOON_F_REPORTING;
}
let config = VirtioBalloonConfig {
num_pages: (size >> VIRTIO_BALLOON_PFN_SHIFT) as u32,
..Default::default()
};
(avail_features, 0, config, false)
};
if free_page_reporting {
queue_sizes.push(REPORTING_QUEUE_SIZE);
}
Ok(Balloon {
common: VirtioCommon {
device_type: VirtioDeviceType::Balloon as u32,
avail_features,
acked_features,
paused_sync: Some(Arc::new(Barrier::new(2))),
queue_sizes,
min_queues: MIN_NUM_QUEUES as u16,
paused: Arc::new(AtomicBool::new(paused)),
..Default::default()
},
id,
config,
seccomp_action,
exit_evt,
interrupt_cb: None,
})
}
pub fn resize(&mut self, size: u64) -> Result<(), Error> {
self.config.num_pages = (size >> VIRTIO_BALLOON_PFN_SHIFT) as u32;
if let Some(interrupt_cb) = &self.interrupt_cb {
interrupt_cb
.trigger(VirtioInterruptType::Config)
.map_err(Error::FailedSignal)
} else {
Ok(())
}
}
// Get the actual size of the virtio-balloon.
pub fn get_actual(&self) -> u64 {
(self.config.actual as u64) << VIRTIO_BALLOON_PFN_SHIFT
}
fn state(&self) -> BalloonState {
BalloonState {
avail_features: self.common.avail_features,
acked_features: self.common.acked_features,
config: self.config,
}
}
#[cfg(fuzzing)]
pub fn wait_for_epoll_threads(&mut self) {
self.common.wait_for_epoll_threads();
}
}
impl Drop for Balloon {
fn drop(&mut self) {
if let Some(kill_evt) = self.common.kill_evt.take() {
// Ignore the result because there is nothing we can do about it.
let _ = kill_evt.write(1);
}
self.common.wait_for_epoll_threads();
}
}
impl VirtioDevice for Balloon {
fn device_type(&self) -> u32 {
self.common.device_type
}
fn queue_max_sizes(&self) -> &[u16] {
&self.common.queue_sizes
}
fn features(&self) -> u64 {
self.common.avail_features
}
fn ack_features(&mut self, value: u64) {
self.common.ack_features(value)
}
fn read_config(&self, offset: u64, data: &mut [u8]) {
self.read_config_from_slice(self.config.as_slice(), offset, data);
}
fn write_config(&mut self, offset: u64, data: &[u8]) {
// The "actual" field is the only mutable field
        if offset != CONFIG_ACTUAL_OFFSET || data.len() != CONFIG_ACTUAL_SIZE {
error!(
"Attempt to write to read-only field: offset {:x} length {}",
offset,
data.len()
);
return;
}
let config = self.config.as_mut_slice();
let config_len = config.len() as u64;
let data_len = data.len() as u64;
if offset + data_len > config_len {
error!(
"Out-of-bound access to configuration: config_len = {} offset = {:x} length = {} for {}",
config_len,
offset,
data_len,
self.device_type()
);
return;
}
if let Some(end) = offset.checked_add(config.len() as u64) {
let mut offset_config =
&mut config[offset as usize..std::cmp::min(end, config_len) as usize];
offset_config.write_all(data).unwrap();
}
}
fn activate(
&mut self,
mem: GuestMemoryAtomic<GuestMemoryMmap>,
interrupt_cb: Arc<dyn VirtioInterrupt>,
mut queues: Vec<(usize, Queue, EventFd)>,
) -> ActivateResult {
self.common.activate(&queues, &interrupt_cb)?;
let (kill_evt, pause_evt) = self.common.dup_eventfds();
let mut virtqueues = Vec::new();
let (_, queue, queue_evt) = queues.remove(0);
virtqueues.push(queue);
let inflate_queue_evt = queue_evt;
let (_, queue, queue_evt) = queues.remove(0);
virtqueues.push(queue);
let deflate_queue_evt = queue_evt;
let reporting_queue_evt =
            if self.common.feature_acked(VIRTIO_BALLOON_F_REPORTING) && !queues.is_empty() {
let (_, queue, queue_evt) = queues.remove(0);
virtqueues.push(queue);
Some(queue_evt)
} else {
None
};
self.interrupt_cb = Some(interrupt_cb.clone());
let mut handler = BalloonEpollHandler {
mem,
queues: virtqueues,
interrupt_cb,
inflate_queue_evt,
deflate_queue_evt,
reporting_queue_evt,
kill_evt,
pause_evt,
pbp: None,
};
let paused = self.common.paused.clone();
let paused_sync = self.common.paused_sync.clone();
let mut epoll_threads = Vec::new();
spawn_virtio_thread(
&self.id,
&self.seccomp_action,
Thread::VirtioBalloon,
&mut epoll_threads,
&self.exit_evt,
move || handler.run(paused, paused_sync.unwrap()),
)?;
self.common.epoll_threads = Some(epoll_threads);
event!("virtio-device", "activated", "id", &self.id);
Ok(())
}
fn reset(&mut self) -> Option<Arc<dyn VirtioInterrupt>> {
let result = self.common.reset();
event!("virtio-device", "reset", "id", &self.id);
result
}
}
impl Pausable for Balloon {
fn pause(&mut self) -> result::Result<(), MigratableError> {
self.common.pause()
}
| fn resume(&mut self) -> result::Result<(), MigratableError> { | random_line_split |
|
utility.rs | //! General-purpose utility functions for internal usage within this crate.
use crate::derive_data::{ReflectMeta, StructField};
use crate::field_attributes::ReflectIgnoreBehavior;
use crate::fq_std::{FQAny, FQOption, FQSend, FQSync};
use bevy_macro_utils::BevyManifest;
use bit_set::BitSet;
use proc_macro2::{Ident, Span};
use quote::{quote, ToTokens};
use syn::{spanned::Spanned, LitStr, Member, Path, Type, WhereClause};
/// Returns the correct path for `bevy_reflect`.
pub(crate) fn get_bevy_reflect_path() -> Path {
BevyManifest::get_path_direct("bevy_reflect")
}
/// Returns the "reflected" ident for a given string.
///
/// # Example
///
/// ```ignore
/// let reflected: Ident = get_reflect_ident("Hash");
/// assert_eq!("ReflectHash", reflected.to_string());
/// ```
pub(crate) fn get_reflect_ident(name: &str) -> Ident {
let reflected = format!("Reflect{name}");
Ident::new(&reflected, Span::call_site())
}
/// Helper struct used to process an iterator of `Result<Vec<T>, syn::Error>`,
/// combining errors into one along the way.
pub(crate) struct ResultSifter<T> {
items: Vec<T>,
errors: Option<syn::Error>,
}
/// Returns a [`Member`] made of `ident` or `index` if `ident` is None.
///
/// Rust struct syntax allows for `Struct { foo: "string" }` with explicitly
/// named fields. It allows the `Struct { 0: "string" }` syntax when the struct
/// is declared as a tuple struct.
///
/// ```
/// # fn main() {
/// struct Foo { field: &'static str }
/// struct Bar(&'static str);
/// let Foo { field } = Foo { field: "hi" };
/// let Bar { 0: field } = Bar { 0: "hello" };
/// let Bar(field) = Bar("hello"); // more common syntax
/// # }
/// ```
///
/// This function helps field access in context where you are declaring either
/// a tuple struct or a struct with named fields. If you don't have a field name,
/// it means you need to access the struct through an index.
pub(crate) fn ident_or_index(ident: Option<&Ident>, index: usize) -> Member {
ident.map_or_else(
|| Member::Unnamed(index.into()),
|ident| Member::Named(ident.clone()),
)
}
/// Options defining how to extend the `where` clause in reflection with any additional bounds needed.
pub(crate) struct WhereClauseOptions {
/// Type parameters that need extra trait bounds.
parameter_types: Box<[Ident]>,
/// Trait bounds to add to the type parameters.
parameter_trait_bounds: Box<[proc_macro2::TokenStream]>,
/// Any types that will be reflected and need an extra trait bound
active_types: Box<[Type]>,
/// Trait bounds to add to the active types
active_trait_bounds: Box<[proc_macro2::TokenStream]>,
/// Any types that won't be reflected and need an extra trait bound
ignored_types: Box<[Type]>,
/// Trait bounds to add to the ignored types
ignored_trait_bounds: Box<[proc_macro2::TokenStream]>,
}
impl Default for WhereClauseOptions {
/// By default, don't add any additional bounds to the `where` clause
fn default() -> Self {
Self {
parameter_types: Box::new([]),
active_types: Box::new([]),
ignored_types: Box::new([]),
active_trait_bounds: Box::new([]),
ignored_trait_bounds: Box::new([]),
parameter_trait_bounds: Box::new([]),
}
}
}
impl WhereClauseOptions {
/// Create [`WhereClauseOptions`] for a struct or enum type.
pub fn new<'a: 'b, 'b>(
meta: &ReflectMeta,
active_fields: impl Iterator<Item = &'b StructField<'a>>,
ignored_fields: impl Iterator<Item = &'b StructField<'a>>,
) -> Self {
Self::new_with_bounds(meta, active_fields, ignored_fields, |_| None, |_| None)
}
/// Create [`WhereClauseOptions`] for a simple value type.
pub fn new_value(meta: &ReflectMeta) -> Self {
Self::new_with_bounds(
meta,
std::iter::empty(),
std::iter::empty(),
|_| None,
|_| None,
)
}
/// Create [`WhereClauseOptions`] for a struct or enum type.
///
/// Compared to [`WhereClauseOptions::new`], this version allows you to specify
/// custom trait bounds for each field.
pub fn new_with_bounds<'a: 'b, 'b>(
meta: &ReflectMeta,
active_fields: impl Iterator<Item = &'b StructField<'a>>,
ignored_fields: impl Iterator<Item = &'b StructField<'a>>,
active_bounds: impl Fn(&StructField<'a>) -> Option<proc_macro2::TokenStream>,
ignored_bounds: impl Fn(&StructField<'a>) -> Option<proc_macro2::TokenStream>,
) -> Self {
let bevy_reflect_path = meta.bevy_reflect_path();
let is_from_reflect = meta.from_reflect().should_auto_derive();
let (active_types, active_trait_bounds): (Vec<_>, Vec<_>) = active_fields
.map(|field| {
let ty = field.data.ty.clone();
let custom_bounds = active_bounds(field).map(|bounds| quote!(+ #bounds));
let bounds = if is_from_reflect {
quote!(#bevy_reflect_path::FromReflect #custom_bounds)
} else | ;
(ty, bounds)
})
.unzip();
let (ignored_types, ignored_trait_bounds): (Vec<_>, Vec<_>) = ignored_fields
.map(|field| {
let ty = field.data.ty.clone();
let custom_bounds = ignored_bounds(field).map(|bounds| quote!(+ #bounds));
let bounds = quote!(#FQAny + #FQSend + #FQSync #custom_bounds);
(ty, bounds)
})
.unzip();
let (parameter_types, parameter_trait_bounds): (Vec<_>, Vec<_>) =
if meta.traits().type_path_attrs().should_auto_derive() {
meta.type_path()
.generics()
.type_params()
.map(|param| {
let ident = param.ident.clone();
let bounds = quote!(#bevy_reflect_path::TypePath);
(ident, bounds)
})
.unzip()
} else {
// If we don't need to derive `TypePath` for the type parameters,
// we can skip adding its bound to the `where` clause.
(Vec::new(), Vec::new())
};
Self {
active_types: active_types.into_boxed_slice(),
active_trait_bounds: active_trait_bounds.into_boxed_slice(),
ignored_types: ignored_types.into_boxed_slice(),
ignored_trait_bounds: ignored_trait_bounds.into_boxed_slice(),
parameter_types: parameter_types.into_boxed_slice(),
parameter_trait_bounds: parameter_trait_bounds.into_boxed_slice(),
}
}
}
/// Extends the `where` clause in reflection with any additional bounds needed.
///
/// This is mostly used to add additional bounds to reflected objects with generic types.
/// For reflection purposes, we usually have:
/// * `active_trait_bounds: Reflect`
/// * `ignored_trait_bounds: Any + Send + Sync`
///
/// # Arguments
///
/// * `where_clause`: existing `where` clause present on the object to be derived
/// * `where_clause_options`: additional parameters defining which trait bounds to add to the `where` clause
///
/// # Example
///
/// The struct:
/// ```ignore
/// #[derive(Reflect)]
/// struct Foo<T, U> {
/// a: T,
/// #[reflect(ignore)]
/// b: U
/// }
/// ```
/// will have active types: `[T]` and ignored types: `[U]`
///
/// The `extend_where_clause` function will yield the following `where` clause:
/// ```ignore
/// where
/// T: Reflect, // active_trait_bounds
/// U: Any + Send + Sync, // ignored_trait_bounds
/// ```
pub(crate) fn extend_where_clause(
where_clause: Option<&WhereClause>,
where_clause_options: &WhereClauseOptions,
) -> proc_macro2::TokenStream {
let parameter_types = &where_clause_options.parameter_types;
let active_types = &where_clause_options.active_types;
let ignored_types = &where_clause_options.ignored_types;
let parameter_trait_bounds = &where_clause_options.parameter_trait_bounds;
let active_trait_bounds = &where_clause_options.active_trait_bounds;
let ignored_trait_bounds = &where_clause_options.ignored_trait_bounds;
let mut generic_where_clause = if let Some(where_clause) = where_clause {
let predicates = where_clause.predicates.iter();
quote! {where #(#predicates,)*}
    } else if !(parameter_types.is_empty() && active_types.is_empty() && ignored_types.is_empty()) {
quote! {where}
} else {
quote!()
};
// The nested parentheses here are required to properly scope HRTBs coming
// from field types to the type itself, as the compiler will scope them to
// the whole bound by default, resulting in a failure to prove trait
// adherence.
generic_where_clause.extend(quote! {
#((#active_types): #active_trait_bounds,)*
#((#ignored_types): #ignored_trait_bounds,)*
// Leave parameter bounds to the end for more sane error messages.
#((#parameter_types): #parameter_trait_bounds,)*
});
generic_where_clause
}
impl<T> Default for ResultSifter<T> {
fn default() -> Self {
Self {
items: Vec::new(),
errors: None,
}
}
}
impl<T> ResultSifter<T> {
/// Sift the given result, combining errors if necessary.
pub fn sift(&mut self, result: Result<T, syn::Error>) {
match result {
Ok(data) => self.items.push(data),
Err(err) => {
if let Some(ref mut errors) = self.errors {
errors.combine(err);
} else {
self.errors = Some(err);
}
}
}
}
/// Associated method that provides a convenient implementation for [`Iterator::fold`].
pub fn fold(mut sifter: Self, result: Result<T, syn::Error>) -> Self {
sifter.sift(result);
sifter
}
/// Complete the sifting process and return the final result.
pub fn finish(self) -> Result<Vec<T>, syn::Error> {
if let Some(errors) = self.errors {
Err(errors)
} else {
Ok(self.items)
}
}
}
/// Converts an iterator over ignore behavior of members to a bitset of ignored members.
///
/// Takes into account the fact that always ignored (non-reflected) members are skipped.
///
/// # Example
/// ```rust,ignore
/// pub struct HelloWorld {
/// reflected_field: u32 // index: 0
///
/// #[reflect(ignore)]
/// non_reflected_field: u32 // index: N/A (not 1!)
///
/// #[reflect(skip_serializing)]
/// non_serialized_field: u32 // index: 1
/// }
/// ```
/// Would convert to the `0b01` bitset (i.e second field is NOT serialized)
///
pub(crate) fn members_to_serialization_denylist<T>(member_iter: T) -> BitSet<u32>
where
T: Iterator<Item = ReflectIgnoreBehavior>,
{
let mut bitset = BitSet::default();
member_iter.fold(0, |next_idx, member| match member {
ReflectIgnoreBehavior::IgnoreAlways => next_idx,
ReflectIgnoreBehavior::IgnoreSerialization => {
bitset.insert(next_idx);
next_idx + 1
}
ReflectIgnoreBehavior::None => next_idx + 1,
});
bitset
}
/// Turns an `Option<TokenStream>` into a `TokenStream` for an `Option`.
pub(crate) fn wrap_in_option(tokens: Option<proc_macro2::TokenStream>) -> proc_macro2::TokenStream {
match tokens {
Some(tokens) => quote! {
#FQOption::Some(#tokens)
},
None => quote! {
#FQOption::None
},
}
}
/// Contains tokens representing different kinds of string.
#[derive(Clone)]
pub(crate) enum StringExpr {
/// A string that is valid at compile time.
///
/// This is either a string literal like `"mystring"`,
/// or a string created by a macro like [`module_path`]
/// or [`concat`].
Const(proc_macro2::TokenStream),
/// A [string slice](str) that is borrowed for a `'static` lifetime.
Borrowed(proc_macro2::TokenStream),
/// An [owned string](String).
Owned(proc_macro2::TokenStream),
}
impl<T: ToString + Spanned> From<T> for StringExpr {
fn from(value: T) -> Self {
Self::from_lit(&LitStr::new(&value.to_string(), value.span()))
}
}
impl StringExpr {
/// Creates a [constant] [`StringExpr`] from a [`struct@LitStr`].
///
/// [constant]: StringExpr::Const
pub fn from_lit(lit: &LitStr) -> Self {
Self::Const(lit.to_token_stream())
}
/// Creates a [constant] [`StringExpr`] by interpreting a [string slice][str] as a [`struct@LitStr`].
///
/// [constant]: StringExpr::Const
pub fn from_str(string: &str) -> Self {
Self::Const(string.into_token_stream())
}
/// Returns tokens for an [owned string](String).
///
/// The returned expression will allocate unless the [`StringExpr`] is [already owned].
///
/// [already owned]: StringExpr::Owned
pub fn into_owned(self) -> proc_macro2::TokenStream {
match self {
Self::Const(tokens) | Self::Borrowed(tokens) => quote! {
::std::string::ToString::to_string(#tokens)
},
Self::Owned(owned) => owned,
}
}
/// Returns tokens for a statically borrowed [string slice](str).
pub fn into_borrowed(self) -> proc_macro2::TokenStream {
match self {
Self::Const(tokens) | Self::Borrowed(tokens) => tokens,
Self::Owned(owned) => quote! {
&#owned
},
}
}
/// Appends a [`StringExpr`] to another.
///
/// If both expressions are [`StringExpr::Const`] this will use [`concat`] to merge them.
pub fn appended_by(mut self, other: StringExpr) -> Self {
if let Self::Const(tokens) = self {
if let Self::Const(more) = other {
return Self::Const(quote! {
::core::concat!(#tokens, #more)
});
}
self = Self::Const(tokens);
}
let owned = self.into_owned();
let borrowed = other.into_borrowed();
Self::Owned(quote! {
#owned + #borrowed
})
}
}
impl Default for StringExpr {
fn default() -> Self {
StringExpr::from_str("")
}
}
impl FromIterator<StringExpr> for StringExpr {
fn from_iter<T: IntoIterator<Item = StringExpr>>(iter: T) -> Self {
let mut iter = iter.into_iter();
match iter.next() {
Some(mut expr) => {
for next in iter {
expr = expr.appended_by(next);
}
expr
}
None => Default::default(),
}
}
}
| {
quote!(#bevy_reflect_path::Reflect #custom_bounds)
} | conditional_block |
utility.rs
//! General-purpose utility functions for internal usage within this crate.
use crate::derive_data::{ReflectMeta, StructField};
use crate::field_attributes::ReflectIgnoreBehavior;
use crate::fq_std::{FQAny, FQOption, FQSend, FQSync};
use bevy_macro_utils::BevyManifest;
use bit_set::BitSet;
use proc_macro2::{Ident, Span};
use quote::{quote, ToTokens};
use syn::{spanned::Spanned, LitStr, Member, Path, Type, WhereClause};
/// Returns the correct path for `bevy_reflect`.
pub(crate) fn get_bevy_reflect_path() -> Path {
BevyManifest::get_path_direct("bevy_reflect")
}
/// Returns the "reflected" ident for a given string.
///
/// # Example
///
/// ```ignore
/// let reflected: Ident = get_reflect_ident("Hash");
/// assert_eq!("ReflectHash", reflected.to_string());
/// ```
pub(crate) fn get_reflect_ident(name: &str) -> Ident {
let reflected = format!("Reflect{name}");
Ident::new(&reflected, Span::call_site())
}
/// Helper struct used to process an iterator of `Result<Vec<T>, syn::Error>`,
/// combining errors into one along the way.
pub(crate) struct ResultSifter<T> {
items: Vec<T>,
errors: Option<syn::Error>,
}
/// Returns a [`Member`] made from `ident`, or from `index` if `ident` is `None`.
///
/// Rust struct syntax allows for `Struct { foo: "string" }` with explicitly
/// named fields. It allows the `Struct { 0: "string" }` syntax when the struct
/// is declared as a tuple struct.
///
/// ```
/// # fn main() {
/// struct Foo { field: &'static str }
/// struct Bar(&'static str);
/// let Foo { field } = Foo { field: "hi" };
/// let Bar { 0: field } = Bar { 0: "hello" };
/// let Bar(field) = Bar("hello"); // more common syntax
/// # }
/// ```
///
/// This function helps with field access in contexts where you may be handling
/// either a tuple struct or a struct with named fields. If there is no field
/// name, the field must be accessed through its index.
pub(crate) fn ident_or_index(ident: Option<&Ident>, index: usize) -> Member {
ident.map_or_else(
|| Member::Unnamed(index.into()),
|ident| Member::Named(ident.clone()),
)
}
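// The same fallback can be sketched with plain strings standing in for
// syn::Member/syn::Ident (sketch only; syn is not assumed available here):

```rust
// Mirrors ident_or_index: prefer the field name, fall back to the index.
fn member_token(ident: Option<&str>, index: usize) -> String {
    ident.map_or_else(|| index.to_string(), |name| name.to_string())
}
```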
/// Options defining how to extend the `where` clause in reflection with any additional bounds needed.
pub(crate) struct WhereClauseOptions {
/// Type parameters that need extra trait bounds.
parameter_types: Box<[Ident]>,
/// Trait bounds to add to the type parameters.
parameter_trait_bounds: Box<[proc_macro2::TokenStream]>,
/// Any types that will be reflected and need an extra trait bound
active_types: Box<[Type]>,
/// Trait bounds to add to the active types
active_trait_bounds: Box<[proc_macro2::TokenStream]>,
/// Any types that won't be reflected and need an extra trait bound
ignored_types: Box<[Type]>,
/// Trait bounds to add to the ignored types
ignored_trait_bounds: Box<[proc_macro2::TokenStream]>,
}
impl Default for WhereClauseOptions {
/// By default, don't add any additional bounds to the `where` clause
fn default() -> Self {
Self {
parameter_types: Box::new([]),
active_types: Box::new([]),
ignored_types: Box::new([]),
active_trait_bounds: Box::new([]),
ignored_trait_bounds: Box::new([]),
parameter_trait_bounds: Box::new([]),
}
}
}
impl WhereClauseOptions {
/// Create [`WhereClauseOptions`] for a struct or enum type.
pub fn new<'a: 'b, 'b>(
meta: &ReflectMeta,
active_fields: impl Iterator<Item = &'b StructField<'a>>,
ignored_fields: impl Iterator<Item = &'b StructField<'a>>,
) -> Self {
Self::new_with_bounds(meta, active_fields, ignored_fields, |_| None, |_| None)
}
/// Create [`WhereClauseOptions`] for a simple value type.
pub fn new_value(meta: &ReflectMeta) -> Self {
Self::new_with_bounds(
meta,
std::iter::empty(),
std::iter::empty(),
|_| None,
|_| None,
)
}
/// Create [`WhereClauseOptions`] for a struct or enum type.
///
/// Compared to [`WhereClauseOptions::new`], this version allows you to specify
/// custom trait bounds for each field.
pub fn new_with_bounds<'a: 'b, 'b>(
meta: &ReflectMeta,
active_fields: impl Iterator<Item = &'b StructField<'a>>,
ignored_fields: impl Iterator<Item = &'b StructField<'a>>,
active_bounds: impl Fn(&StructField<'a>) -> Option<proc_macro2::TokenStream>,
ignored_bounds: impl Fn(&StructField<'a>) -> Option<proc_macro2::TokenStream>,
) -> Self {
let bevy_reflect_path = meta.bevy_reflect_path();
let is_from_reflect = meta.from_reflect().should_auto_derive();
let (active_types, active_trait_bounds): (Vec<_>, Vec<_>) = active_fields
.map(|field| {
let ty = field.data.ty.clone();
let custom_bounds = active_bounds(field).map(|bounds| quote!(+ #bounds));
let bounds = if is_from_reflect {
quote!(#bevy_reflect_path::FromReflect #custom_bounds)
} else {
quote!(#bevy_reflect_path::Reflect #custom_bounds)
};
(ty, bounds)
})
.unzip();
let (ignored_types, ignored_trait_bounds): (Vec<_>, Vec<_>) = ignored_fields
.map(|field| {
let ty = field.data.ty.clone();
let custom_bounds = ignored_bounds(field).map(|bounds| quote!(+ #bounds));
let bounds = quote!(#FQAny + #FQSend + #FQSync #custom_bounds);
(ty, bounds)
})
.unzip();
let (parameter_types, parameter_trait_bounds): (Vec<_>, Vec<_>) =
if meta.traits().type_path_attrs().should_auto_derive() {
meta.type_path()
.generics()
.type_params()
.map(|param| {
let ident = param.ident.clone();
let bounds = quote!(#bevy_reflect_path::TypePath);
(ident, bounds)
})
.unzip()
} else {
// If we don't need to derive `TypePath` for the type parameters,
// we can skip adding its bound to the `where` clause.
(Vec::new(), Vec::new())
};
Self {
active_types: active_types.into_boxed_slice(),
active_trait_bounds: active_trait_bounds.into_boxed_slice(),
ignored_types: ignored_types.into_boxed_slice(),
ignored_trait_bounds: ignored_trait_bounds.into_boxed_slice(),
parameter_types: parameter_types.into_boxed_slice(),
parameter_trait_bounds: parameter_trait_bounds.into_boxed_slice(),
}
}
}
/// Extends the `where` clause in reflection with any additional bounds needed.
///
/// This is mostly used to add additional bounds to reflected objects with generic types.
/// For reflection purposes, we usually have:
/// * `active_trait_bounds: Reflect`
/// * `ignored_trait_bounds: Any + Send + Sync`
///
/// # Arguments
///
/// * `where_clause`: existing `where` clause present on the object to be derived
/// * `where_clause_options`: additional parameters defining which trait bounds to add to the `where` clause
///
/// # Example
///
/// The struct:
/// ```ignore
/// #[derive(Reflect)]
/// struct Foo<T, U> {
/// a: T,
/// #[reflect(ignore)]
/// b: U
/// }
/// ```
/// will have active types: `[T]` and ignored types: `[U]`
///
/// The `extend_where_clause` function will yield the following `where` clause:
/// ```ignore
/// where
/// T: Reflect, // active_trait_bounds
/// U: Any + Send + Sync, // ignored_trait_bounds
/// ```
pub(crate) fn extend_where_clause(
where_clause: Option<&WhereClause>,
where_clause_options: &WhereClauseOptions,
) -> proc_macro2::TokenStream {
let parameter_types = &where_clause_options.parameter_types;
let active_types = &where_clause_options.active_types;
let ignored_types = &where_clause_options.ignored_types;
let parameter_trait_bounds = &where_clause_options.parameter_trait_bounds;
let active_trait_bounds = &where_clause_options.active_trait_bounds;
let ignored_trait_bounds = &where_clause_options.ignored_trait_bounds;
let mut generic_where_clause = if let Some(where_clause) = where_clause {
let predicates = where_clause.predicates.iter();
quote! {where #(#predicates,)*}
    } else if !(parameter_types.is_empty() && active_types.is_empty() && ignored_types.is_empty()) {
quote! {where}
} else {
quote!()
};
// The nested parentheses here are required to properly scope HRTBs coming
// from field types to the type itself, as the compiler will scope them to
// the whole bound by default, resulting in a failure to prove trait
// adherence.
generic_where_clause.extend(quote! {
#((#active_types): #active_trait_bounds,)*
#((#ignored_types): #ignored_trait_bounds,)*
// Leave parameter bounds to the end for more sane error messages.
#((#parameter_types): #parameter_trait_bounds,)*
});
generic_where_clause
}
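// The assembly order above can be sketched with plain strings in place of
// TokenStreams (quote!/syn are assumed unavailable here; illustration only):

```rust
// Mirrors extend_where_clause: existing predicates first, then active and
// ignored field bounds, with type-parameter bounds last for saner errors.
fn build_where_clause(
    existing: &[&str],
    active: &[&str],
    ignored: &[&str],
    params: &[&str],
) -> String {
    let mut out = String::new();
    if !(existing.is_empty() && active.is_empty() && ignored.is_empty() && params.is_empty()) {
        out.push_str("where ");
    }
    for p in existing {
        out.push_str(p);
        out.push_str(", ");
    }
    for t in active {
        out.push_str(&format!("({t}): Reflect, "));
    }
    for t in ignored {
        out.push_str(&format!("({t}): Any + Send + Sync, "));
    }
    for t in params {
        out.push_str(&format!("({t}): TypePath, "));
    }
    out
}
```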
impl<T> Default for ResultSifter<T> {
fn default() -> Self {
Self {
items: Vec::new(),
errors: None,
}
}
}
impl<T> ResultSifter<T> {
/// Sift the given result, combining errors if necessary.
pub fn sift(&mut self, result: Result<T, syn::Error>) {
match result {
Ok(data) => self.items.push(data),
Err(err) => {
if let Some(ref mut errors) = self.errors {
errors.combine(err);
} else {
self.errors = Some(err);
}
}
}
}
/// Associated method that provides a convenient implementation for [`Iterator::fold`].
pub fn fold(mut sifter: Self, result: Result<T, syn::Error>) -> Self {
sifter.sift(result);
sifter
}
/// Complete the sifting process and return the final result.
pub fn finish(self) -> Result<Vec<T>, syn::Error> {
if let Some(errors) = self.errors {
Err(errors)
} else {
Ok(self.items)
}
}
}
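// The sift/finish flow can be sketched with String errors standing in for
// syn::Error (whose combine() is mimicked here by joining messages):

```rust
struct Sifter<T> {
    items: Vec<T>,
    errors: Option<String>,
}

impl<T> Sifter<T> {
    fn new() -> Self {
        Sifter { items: Vec::new(), errors: None }
    }
    // Keep successes; merge every failure into one accumulated error.
    fn sift(&mut self, result: Result<T, String>) {
        match result {
            Ok(item) => self.items.push(item),
            Err(err) => match &mut self.errors {
                Some(acc) => {
                    acc.push_str("; ");
                    acc.push_str(&err);
                }
                None => self.errors = Some(err),
            },
        }
    }
    // Any recorded error wins over the collected items.
    fn finish(self) -> Result<Vec<T>, String> {
        match self.errors {
            Some(errors) => Err(errors),
            None => Ok(self.items),
        }
    }
}
```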
/// Converts an iterator over ignore behavior of members to a bitset of ignored members.
///
/// Takes into account the fact that always ignored (non-reflected) members are skipped.
///
/// # Example
/// ```rust,ignore
/// pub struct HelloWorld {
/// reflected_field: u32 // index: 0
///
/// #[reflect(ignore)]
/// non_reflected_field: u32 // index: N/A (not 1!)
///
/// #[reflect(skip_serializing)]
/// non_serialized_field: u32 // index: 1
/// }
/// ```
/// Would convert to the `0b01` bitset (i.e. the second field is NOT serialized)
///
pub(crate) fn members_to_serialization_denylist<T>(member_iter: T) -> BitSet<u32>
where
T: Iterator<Item = ReflectIgnoreBehavior>,
{
let mut bitset = BitSet::default();
member_iter.fold(0, |next_idx, member| match member {
ReflectIgnoreBehavior::IgnoreAlways => next_idx,
ReflectIgnoreBehavior::IgnoreSerialization => {
bitset.insert(next_idx);
next_idx + 1
}
ReflectIgnoreBehavior::None => next_idx + 1,
});
bitset
}
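// The index bookkeeping above can be sketched with a plain u32 bitmask in
// place of bit_set::BitSet (LSB = index 0 is an assumption of this sketch):

```rust
enum IgnoreBehavior {
    None,
    IgnoreSerialization,
    IgnoreAlways,
}

// Always-ignored members consume no index; skip-serializing members take an
// index and set their bit; ordinary members just advance the index.
fn denylist_bits(members: &[IgnoreBehavior]) -> u32 {
    let mut bits = 0u32;
    let mut next_idx = 0u32;
    for m in members {
        match m {
            IgnoreBehavior::IgnoreAlways => {}
            IgnoreBehavior::IgnoreSerialization => {
                bits |= 1 << next_idx;
                next_idx += 1;
            }
            IgnoreBehavior::None => next_idx += 1,
        }
    }
    bits
}
```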
/// Turns an `Option<TokenStream>` into a `TokenStream` for an `Option`.
pub(crate) fn wrap_in_option(tokens: Option<proc_macro2::TokenStream>) -> proc_macro2::TokenStream {
match tokens {
Some(tokens) => quote! {
#FQOption::Some(#tokens)
},
None => quote! {
#FQOption::None
},
}
}
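// A plain-string sketch of the same wrapping, assuming FQOption expands to
// the fully-qualified ::core::option::Option path (illustration only):

```rust
fn wrap_in_option_text(tokens: Option<String>) -> String {
    match tokens {
        Some(tokens) => format!("::core::option::Option::Some({tokens})"),
        None => "::core::option::Option::None".to_string(),
    }
}
```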
/// Contains tokens representing different kinds of string.
#[derive(Clone)]
pub(crate) enum StringExpr {
/// A string that is valid at compile time.
///
/// This is either a string literal like `"mystring"`,
/// or a string created by a macro like [`module_path`]
/// or [`concat`].
Const(proc_macro2::TokenStream),
/// A [string slice](str) that is borrowed for a `'static` lifetime.
Borrowed(proc_macro2::TokenStream),
/// An [owned string](String).
Owned(proc_macro2::TokenStream),
}
impl<T: ToString + Spanned> From<T> for StringExpr {
fn from(value: T) -> Self {
Self::from_lit(&LitStr::new(&value.to_string(), value.span()))
}
}
impl StringExpr {
/// Creates a [constant] [`StringExpr`] from a [`struct@LitStr`].
///
/// [constant]: StringExpr::Const
pub fn from_lit(lit: &LitStr) -> Self {
Self::Const(lit.to_token_stream())
}
/// Creates a [constant] [`StringExpr`] by interpreting a [string slice][str] as a [`struct@LitStr`].
///
/// [constant]: StringExpr::Const
pub fn from_str(string: &str) -> Self {
Self::Const(string.into_token_stream())
}
/// Returns tokens for an [owned string](String).
///
/// The returned expression will allocate unless the [`StringExpr`] is [already owned].
///
/// [already owned]: StringExpr::Owned
pub fn into_owned(self) -> proc_macro2::TokenStream {
match self {
Self::Const(tokens) | Self::Borrowed(tokens) => quote! {
::std::string::ToString::to_string(#tokens)
},
Self::Owned(owned) => owned,
}
}
/// Returns tokens for a statically borrowed [string slice](str).
pub fn into_borrowed(self) -> proc_macro2::TokenStream {
match self {
Self::Const(tokens) | Self::Borrowed(tokens) => tokens,
Self::Owned(owned) => quote! {
&#owned
},
}
}
/// Appends a [`StringExpr`] to another.
///
/// If both expressions are [`StringExpr::Const`] this will use [`concat`] to merge them.
pub fn appended_by(mut self, other: StringExpr) -> Self {
if let Self::Const(tokens) = self {
if let Self::Const(more) = other {
return Self::Const(quote! {
::core::concat!(#tokens, #more)
});
}
self = Self::Const(tokens);
}
let owned = self.into_owned();
let borrowed = other.into_borrowed();
Self::Owned(quote! {
#owned + #borrowed
})
}
}
impl Default for StringExpr {
fn default() -> Self {
StringExpr::from_str("")
}
}
impl FromIterator<StringExpr> for StringExpr {
fn from_iter<T: IntoIterator<Item = StringExpr>>(iter: T) -> Self {
let mut iter = iter.into_iter();
match iter.next() {
Some(mut expr) => {
for next in iter {
expr = expr.appended_by(next);
}
expr
}
None => Default::default(),
}
}
}
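// The Const-vs-Owned merge rule in appended_by can be sketched with plain
// strings standing in for TokenStreams (sketch only; concat!/quote! elided):

```rust
#[derive(Debug, PartialEq)]
enum SExpr {
    Const(String),
    Owned(String),
}

impl SExpr {
    fn into_owned(self) -> String {
        match self {
            SExpr::Const(s) | SExpr::Owned(s) => s,
        }
    }
    // Two compile-time constants stay constant; any other pairing falls back
    // to an owned runtime concatenation, as in StringExpr::appended_by.
    fn appended_by(self, other: SExpr) -> SExpr {
        match (self, other) {
            (SExpr::Const(a), SExpr::Const(b)) => SExpr::Const(a + &b),
            (a, b) => SExpr::Owned(a.into_owned() + &b.into_owned()),
        }
    }
}
```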
let ty = field.data.ty.clone();
let custom_bounds = active_bounds(field).map(|bounds| quote!(+ #bounds));
let bounds = if is_from_reflect {
quote!(#bevy_reflect_path::FromReflect #custom_bounds)
} else {
quote!(#bevy_reflect_path::Reflect #custom_bounds)
};
(ty, bounds)
})
.unzip();
let (ignored_types, ignored_trait_bounds): (Vec<_>, Vec<_>) = ignored_fields
.map(|field| {
let ty = field.data.ty.clone();
let custom_bounds = ignored_bounds(field).map(|bounds| quote!(+ #bounds));
let bounds = quote!(#FQAny + #FQSend + #FQSync #custom_bounds);
(ty, bounds)
})
.unzip();
let (parameter_types, parameter_trait_bounds): (Vec<_>, Vec<_>) =
if meta.traits().type_path_attrs().should_auto_derive() {
meta.type_path()
.generics()
.type_params()
.map(|param| {
let ident = param.ident.clone();
let bounds = quote!(#bevy_reflect_path::TypePath);
(ident, bounds)
})
.unzip()
} else {
// If we don't need to derive `TypePath` for the type parameters,
// we can skip adding its bound to the `where` clause.
(Vec::new(), Vec::new())
};
Self {
active_types: active_types.into_boxed_slice(),
active_trait_bounds: active_trait_bounds.into_boxed_slice(),
ignored_types: ignored_types.into_boxed_slice(),
ignored_trait_bounds: ignored_trait_bounds.into_boxed_slice(),
parameter_types: parameter_types.into_boxed_slice(),
parameter_trait_bounds: parameter_trait_bounds.into_boxed_slice(),
}
}
}
/// Extends the `where` clause in reflection with any additional bounds needed.
///
/// This is mostly used to add additional bounds to reflected objects with generic types.
/// For reflection purposes, we usually have:
/// * `active_trait_bounds: Reflect`
/// * `ignored_trait_bounds: Any + Send + Sync`
///
/// # Arguments
///
/// * `where_clause`: existing `where` clause present on the object to be derived
/// * `where_clause_options`: additional parameters defining which trait bounds to add to the `where` clause
///
/// # Example
///
/// The struct:
/// ```ignore
/// #[derive(Reflect)]
/// struct Foo<T, U> {
/// a: T,
/// #[reflect(ignore)]
/// b: U
/// }
/// ```
/// will have active types: `[T]` and ignored types: `[U]`
///
/// The `extend_where_clause` function will yield the following `where` clause:
/// ```ignore
/// where
/// T: Reflect, // active_trait_bounds
/// U: Any + Send + Sync, // ignored_trait_bounds
/// ```
pub(crate) fn extend_where_clause(
where_clause: Option<&WhereClause>,
where_clause_options: &WhereClauseOptions,
) -> proc_macro2::TokenStream {
let parameter_types = &where_clause_options.parameter_types;
let active_types = &where_clause_options.active_types;
let ignored_types = &where_clause_options.ignored_types;
let parameter_trait_bounds = &where_clause_options.parameter_trait_bounds;
let active_trait_bounds = &where_clause_options.active_trait_bounds;
let ignored_trait_bounds = &where_clause_options.ignored_trait_bounds;
let mut generic_where_clause = if let Some(where_clause) = where_clause {
let predicates = where_clause.predicates.iter();
quote! {where #(#predicates,)*}
    } else if !(parameter_types.is_empty() && active_types.is_empty() && ignored_types.is_empty()) {
quote! {where}
} else {
quote!()
};
// The nested parentheses here are required to properly scope HRTBs coming
// from field types to the type itself, as the compiler will scope them to
// the whole bound by default, resulting in a failure to prove trait
// adherence.
generic_where_clause.extend(quote! {
#((#active_types): #active_trait_bounds,)*
#((#ignored_types): #ignored_trait_bounds,)*
// Leave parameter bounds to the end for more sane error messages.
#((#parameter_types): #parameter_trait_bounds,)*
});
generic_where_clause
}
impl<T> Default for ResultSifter<T> {
fn default() -> Self {
Self {
items: Vec::new(),
errors: None,
}
}
}
impl<T> ResultSifter<T> {
/// Sift the given result, combining errors if necessary.
pub fn sift(&mut self, result: Result<T, syn::Error>) {
match result {
Ok(data) => self.items.push(data),
Err(err) => {
if let Some(ref mut errors) = self.errors {
errors.combine(err);
} else {
self.errors = Some(err);
}
}
}
}
/// Associated method that provides a convenient implementation for [`Iterator::fold`].
pub fn fold(mut sifter: Self, result: Result<T, syn::Error>) -> Self {
sifter.sift(result);
sifter
}
/// Complete the sifting process and return the final result.
pub fn finish(self) -> Result<Vec<T>, syn::Error> {
if let Some(errors) = self.errors {
Err(errors)
} else {
Ok(self.items)
}
}
}
/// Converts an iterator over ignore behavior of members to a bitset of ignored members.
///
/// Takes into account the fact that always ignored (non-reflected) members are skipped.
///
/// # Example
/// ```rust,ignore
/// pub struct HelloWorld {
/// reflected_field: u32 // index: 0
///
/// #[reflect(ignore)]
/// non_reflected_field: u32 // index: N/A (not 1!)
///
/// #[reflect(skip_serializing)]
/// non_serialized_field: u32 // index: 1
/// }
/// ```
/// Would convert to the `0b01` bitset (i.e. the second field is NOT serialized).
///
pub(crate) fn members_to_serialization_denylist<T>(member_iter: T) -> BitSet<u32>
where
T: Iterator<Item = ReflectIgnoreBehavior>,
{
let mut bitset = BitSet::default();
member_iter.fold(0, |next_idx, member| match member {
ReflectIgnoreBehavior::IgnoreAlways => next_idx,
ReflectIgnoreBehavior::IgnoreSerialization => {
bitset.insert(next_idx);
next_idx + 1
}
ReflectIgnoreBehavior::None => next_idx + 1,
});
bitset
}
/// Turns an `Option<TokenStream>` into a `TokenStream` for an `Option`.
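///
/// A rough sketch of the expansion (the real tokens use the crate's
/// fully-qualified `FQOption` path):
///
/// ```ignore
/// wrap_in_option(Some(quote!(42))); // -> tokens for `Option::Some(42)`
/// wrap_in_option(None);             // -> tokens for `Option::None`
/// ```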
pub(crate) fn wrap_in_option(tokens: Option<proc_macro2::TokenStream>) -> proc_macro2::TokenStream {
match tokens {
Some(tokens) => quote! {
#FQOption::Some(#tokens)
},
None => quote! {
#FQOption::None
},
}
}
/// Contains tokens representing different kinds of string.
#[derive(Clone)]
pub(crate) enum StringExpr {
/// A string that is valid at compile time.
///
/// This is either a string literal like `"mystring"`,
/// or a string created by a macro like [`module_path`]
/// or [`concat`].
Const(proc_macro2::TokenStream),
/// A [string slice](str) that is borrowed for a `'static` lifetime.
Borrowed(proc_macro2::TokenStream),
/// An [owned string](String).
Owned(proc_macro2::TokenStream),
}
impl<T: ToString + Spanned> From<T> for StringExpr {
fn from(value: T) -> Self {
Self::from_lit(&LitStr::new(&value.to_string(), value.span()))
}
}
impl StringExpr {
/// Creates a [constant] [`StringExpr`] from a [`struct@LitStr`].
///
/// [constant]: StringExpr::Const
pub fn from_lit(lit: &LitStr) -> Self {
Self::Const(lit.to_token_stream())
}
/// Creates a [constant] [`StringExpr`] by interpreting a [string slice][str] as a [`struct@LitStr`].
///
/// [constant]: StringExpr::Const
pub fn from_str(string: &str) -> Self {
Self::Const(string.into_token_stream())
}
/// Returns tokens for an [owned string](String).
///
/// The returned expression will allocate unless the [`StringExpr`] is [already owned].
///
/// [already owned]: StringExpr::Owned
pub fn into_owned(self) -> proc_macro2::TokenStream {
match self {
Self::Const(tokens) | Self::Borrowed(tokens) => quote! {
::std::string::ToString::to_string(#tokens)
},
Self::Owned(owned) => owned,
}
}
/// Returns tokens for a statically borrowed [string slice](str).
pub fn into_borrowed(self) -> proc_macro2::TokenStream {
match self {
Self::Const(tokens) | Self::Borrowed(tokens) => tokens,
Self::Owned(owned) => quote! {
&#owned
},
}
}
/// Appends a [`StringExpr`] to another.
///
/// If both expressions are [`StringExpr::Const`] this will use [`concat`] to merge them.
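    ///
    /// A rough sketch of the merging behavior:
    ///
    /// ```ignore
    /// // Two consts merge into a single `concat!` invocation at compile time;
    /// // anything else falls back to runtime `String` concatenation.
    /// let merged = StringExpr::from_str("foo").appended_by(StringExpr::from_str("bar"));
    /// ```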
pub fn appended_by(mut self, other: StringExpr) -> Self {
if let Self::Const(tokens) = self {
if let Self::Const(more) = other {
return Self::Const(quote! {
::core::concat!(#tokens, #more)
});
}
self = Self::Const(tokens);
}
let owned = self.into_owned();
let borrowed = other.into_borrowed();
Self::Owned(quote! {
#owned + #borrowed
})
}
}
impl Default for StringExpr {
fn default() -> Self {
StringExpr::from_str("")
}
}
impl FromIterator<StringExpr> for StringExpr {
fn from_iter<T: IntoIterator<Item = StringExpr>>(iter: T) -> Self {
let mut iter = iter.into_iter();
match iter.next() {
Some(mut expr) => {
for next in iter {
expr = expr.appended_by(next);
}
expr
}
None => Default::default(),
}
}
}
// main.rs
extern crate argparse;
extern crate chrono;
extern crate colored;
extern crate rand;
extern crate serde;
extern crate serde_json;
extern crate time;
#[macro_use]
extern crate serde_derive;
use argparse::{ArgumentParser, Store, StoreOption};
use rand::Rng;
use std::fs::File;
use std::fs::OpenOptions;
use std::io::BufRead;
use std::io::BufReader;
use std::path::PathBuf;
use std::str::FromStr;
use time::Duration;
use std::env;
use std::io::Read;
use std::io::Result as IOResult;
use std::io::Write;
use colored::*;
use chrono::prelude::*;
enum VagueTime {
Tomorrow,
Today,
Evening,
NextWeek,
Day(u8),
}
impl FromStr for VagueTime {
type Err = ();
fn from_str(s: &str) -> Result<Self, Self::Err> {
use VagueTime::*;
match s {
"tomorrow" => Ok(Tomorrow),
"today" => Ok(Today),
"tonight" => Ok(Today),
"evening" => Ok(Evening),
"week" => Ok(NextWeek),
"next week" => Ok(NextWeek),
d => Ok(match u8::from_str(d) {
Ok(x) => Day(x),
Err(_e) => {
panic!("I don't understand the date you asked for!");
}
}),
}
}
}
impl VagueTime {
fn concretise(&self) -> DateTime<Local> {
use VagueTime::*;
let t0 = Local::now();
match self {
Tomorrow => t0 + Duration::days(1),
Today => Local::today().and_hms(23, 30, 0),
Evening => Local::today().and_hms(23, 00, 0),
NextWeek => t0 + Duration::days(7),
Day(d) => Local::today()
.with_day(u32::from(*d))
.unwrap()
.and_hms(15, 00, 0),
}
}
}
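
// A minimal sanity check for `VagueTime` parsing (hypothetical test module,
// not part of the original program):
#[cfg(test)]
mod vague_time_tests {
    use super::*;

    #[test]
    fn parses_keywords_and_days() {
        assert!(matches!(VagueTime::from_str("tomorrow"), Ok(VagueTime::Tomorrow)));
        assert!(matches!(VagueTime::from_str("week"), Ok(VagueTime::NextWeek)));
        assert!(matches!(VagueTime::from_str("21"), Ok(VagueTime::Day(21))));
    }
}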
enum Command {
List,
Add,
Started,
Resolve,
Remove,
None,
}
impl FromStr for Command {
type Err = ();
fn from_str(s: &str) -> Result<Self, Self::Err> {
use Command::*;
Ok(match s {
"" => List,
"list" => List,
"add" => Add,
"start" => Started,
"resolve" => Resolve,
"remove" => Remove,
_ => None,
})
}
}
fn main() {
let mut command = Command::List;
let mut arg = "".to_owned();
let mut deadline: Option<VagueTime> = None;
let mut priority: u8 = 0;
{
let mut ap = ArgumentParser::new();
ap.set_description(
"Something to help me organise\nSupports commands:
    - list\n    - add \"Text of task\"\n    - start taskname
    - resolve taskname\n    - remove taskname
    Supports setting deadlines which can be of the form
    tomorrow, today, tonight, evening, week, or a day of this month as a single
    number",
);
ap.refer(&mut command)
.add_argument("command", Store, "Command to run");
ap.refer(&mut arg)
.add_argument("arg", Store, "arg for command");
ap.refer(&mut deadline)
.add_option(&["-d", "--deadline"], StoreOption, "Deadline of task");
ap.refer(&mut priority)
.add_option(&["-p", "--priority"], Store, "Priority of task");
ap.parse_args_or_exit();
}
match command {
Command::List => {
do_list();
}
Command::Add => {
do_add(arg, priority, &deadline);
}
Command::Started => {
do_set_progress(&arg, Status::Started);
}
Command::Resolve => {
do_set_progress(&arg, Status::Resolved);
}
Command::Remove => {
do_remove(&arg);
}
_ => {
println!("Unrecognised argument, try todo --help");
}
}
}
#[derive(Serialize, Deserialize)]
struct Data {
entries: Vec<Entry>,
last_updated: DateTime<Local>,
}
impl Data {
fn new() -> Self {
Data {
entries: Vec::new(),
last_updated: Local::now(),
}
}
fn add_entry(&mut self, entry: Entry) {
self.entries.push(entry);
}
    fn find_entry<'t>(&'t self, id: &str) -> Option<&'t Entry> {
        self.entries.iter().find(|x| x.id == id)
    }
    fn find_entry_mut<'t>(&'t mut self, id: &str) -> Option<&'t mut Entry> {
        self.entries.iter_mut().find(|x| x.id == id)
    }
fn remove_by_id(&mut self, id: &str) {
        self.entries.retain(|x| x.id != id);
}
fn print(&self) {
if self.entries.is_empty() {
println!("Nothing todo, woooooo!");
}
for entry in &self.entries {
println!("{}", entry.format());
}
}
}
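
// A small illustrative test for the `Data` container above (hypothetical
// module name; relies only on items defined in this file):
#[cfg(test)]
mod data_tests {
    use super::*;

    #[test]
    fn add_find_remove() {
        let mut data = Data::new();
        data.add_entry(Entry::new("apple".to_owned(), "buy milk".to_owned(), 0, None));
        assert!(data.find_entry("apple").is_some());
        data.remove_by_id("apple");
        assert!(data.find_entry("apple").is_none());
    }
}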
#[derive(Serialize, Deserialize)]
enum Status {
NotStarted,
Started,
Resolved,
}
impl Status {
fn is_urgent(&self) -> bool {
use Status::*;
match self {
NotStarted => true,
Started => true,
Resolved => false,
}
}
fn to_colored(&self, urgent: &bool) -> ColoredString {
use Status::*;
match self {
NotStarted => {
let base = "Not Started";
if *urgent {
base.red()
} else {
base.dimmed()
}
}
Started => {
let base = "Started";
if *urgent {
base.red()
} else {
base.yellow()
}
}
Resolved => "Resolved".green(),
}
}
}
#[derive(Serialize, Deserialize)]
struct Entry {
id: String,
task: String,
deadline: Option<DateTime<Local>>,
status: Status,
priority: u8,
}
impl Entry {
fn new(id: String, task: String, priority: u8, mb_deadline: Option<DateTime<Local>>) -> Self {
Entry {
id,
task,
deadline: mb_deadline,
priority,
status: Status::NotStarted,
}
}
fn format(&self) -> String {
let deadline_urgent = match self.deadline {
Some(x) => x.date() <= Local::now().date(),
_ => false,
};
let status_urgent = self.status.is_urgent();
let urgent = deadline_urgent && status_urgent;
let deadline_str = match self.deadline {
Some(deadline) => {
                // `format!` already yields an owned `String`, so no `.to_owned()` is needed.
                let s = format!("{}", deadline.format("\n\t Deadline: %d-%m %H:%M"));
                if urgent {
                    s.red()
                } else {
                    s.dimmed()
                }
}
None => "".to_owned().dimmed(),
};
        let priority_str = if self.priority > 0 {
            format!("Priority: {}", self.priority)
        } else {
            "".to_owned()
        };
let status_str = self.status.to_colored(&urgent);
        format!(
            "Task: {} {} | {} | {} {}",
self.id,
priority_str,
self.task.bold(),
status_str,
deadline_str
)
}
}
const DATA_FOLDER: &str = ".todo.d";
const DATA_FILENAME: &str = "data.json";
const NOUNS_FILENAME: &str = "nouns.txt";
fn data_folder() -> PathBuf {
match env::home_dir() {
Some(mut p) => {
p.push(DATA_FOLDER);
p
}
None => {
panic!("Couldn't find your home folder, setup will require some manual hacking");
}
}
}
fn data_path() -> PathBuf {
let mut p = data_folder();
p.push(DATA_FILENAME);
p
}
fn nouns_path() -> PathBuf {
let mut p = data_folder();
p.push(NOUNS_FILENAME);
p
}
fn load_data() -> IOResult<Data> {
let filename = data_path();
let mut file = File::open(filename)?;
let mut contents = String::new();
file.read_to_string(&mut contents)?;
Ok(match serde_json::from_str(&contents) {
Ok(d) => d,
Err(e) => {
panic!("Error, corrupted data!\n{}", e);
}
})
}
fn load_data_catch() -> Data {
load_data().unwrap_or_else(|_| Data::new())
}
fn load_nouns() -> IOResult<Vec<String>> {
let filename = nouns_path();
let f = File::open(filename)?;
let f = BufReader::new(f);
f.lines().collect()
}
fn save_data(data: &Data) -> Result<(), serde_json::Error> {
let j = serde_json::to_string(data)?;
let filename = data_path();
// TODO merge two result error types
let mut file = OpenOptions::new()
.write(true) // Overwrite whole file when writing
.create(true)
.truncate(true) // Remove any previous stuff
.open(filename).unwrap();
file.write_all(j.as_ref()).unwrap();
Ok(())
}
fn do_list() {
let data = load_data_catch();
data.print();
}
fn pick_name(data: &Data) -> String {
// TODO error handle
let nouns = load_nouns().unwrap();
let mut noun;
// We know this will probably terminate
// stop worrying guys
    #[allow(clippy::while_immutable_condition)]
while {
noun = rand::thread_rng().choose(&nouns).unwrap();
// Repeat until we find one not already used
data.find_entry(noun).is_some()
} {}
noun.to_owned()
}
fn do_add(task: String, priority: u8, deadline_vague: &Option<VagueTime>) {
let mut data = load_data_catch();
let id = pick_name(&data);
println!("Adding {} - '{}'", id, task);
let deadline = deadline_vague.as_ref().map(|x| x.concretise());
let new_entry = Entry::new(id, task, priority, deadline);
data.add_entry(new_entry);
data.print();
save_data(&data).unwrap();
}
fn do_set_progress(id: &str, progress: Status) {
let mut data = load_data_catch();
println!("Resolving '{}'", id);
{
// Scope for mutable borrow
match data.find_entry_mut(id) {
Some(entry) => {
entry.status = progress;
}
None => {
println!("Could not find '{}' to update, exiting..", id);
return;
}
}
}
data.print();
save_data(&data).unwrap();
}
fn do_remove(id: &str) {
let mut data = load_data_catch();
println!("Removing '{}'", id);
data.remove_by_id(id);
data.print();
save_data(&data).unwrap();
}
// linker.rs
use crate::{
Extern, ExternType, Func, FuncType, GlobalType, ImportType, Instance, IntoFunc, Module, Store,
};
use anyhow::{anyhow, bail, Result};
use std::collections::hash_map::{Entry, HashMap};
use std::rc::Rc;
/// Structure used to link wasm modules/instances together.
///
/// This structure is used to assist in instantiating a [`Module`]. A `Linker`
/// is a way of performing name resolution to make instantiating a module easier
/// (as opposed to calling [`Instance::new`]). `Linker` is a name-based resolver
/// where names are dynamically defined and then used to instantiate a
/// [`Module`]. The goal of a `Linker` is to have a one-argument method,
/// [`Linker::instantiate`], which takes a [`Module`] and produces an
/// [`Instance`]. This method will automatically select all the right imports
/// for the [`Module`] to be instantiated, and will otherwise return an error
/// if an import isn't satisfied.
///
/// ## Name Resolution
///
/// As mentioned previously, `Linker` is a form of name resolver. It will be
/// using the string-based names of imports on a module to attempt to select a
/// matching item to hook up to it. This name resolution has two-levels of
/// namespaces, a module level and a name level. Each item is defined within a
/// module and then has its own name. This basically follows the wasm standard
/// for modularization.
///
/// Names in a `Linker` can be defined twice, but only for different signatures
/// of items. This means that every item defined in a `Linker` has a unique
/// name/type pair. For example you can define two functions with the module
/// name `foo` and item name `bar`, so long as they have different function
/// signatures. Currently duplicate memories and tables are not allowed, only
/// one-per-name is allowed.
///
/// Note that allowing duplicates by shadowing the previous definition can be
/// controlled with the [`Linker::allow_shadowing`] method as well.
pub struct Linker {
store: Store,
string2idx: HashMap<Rc<str>, usize>,
strings: Vec<Rc<str>>,
map: HashMap<ImportKey, Extern>,
allow_shadowing: bool,
}
#[derive(Hash, PartialEq, Eq)]
struct ImportKey {
name: usize,
module: usize,
kind: ImportKind,
}
#[derive(Hash, PartialEq, Eq, Debug)]
enum ImportKind {
Func(FuncType),
Global(GlobalType),
Memory,
Table,
}
impl Linker {
/// Creates a new [`Linker`].
///
/// This function will create a new [`Linker`] which is ready to start
/// linking modules. All items defined in this linker and produced by this
/// linker will be connected with `store` and must come from the same
/// `store`.
///
/// # Examples
///
/// ```
/// use wasmtime::{Linker, Store};
///
/// let store = Store::default();
/// let mut linker = Linker::new(&store);
/// //...
/// ```
pub fn new(store: &Store) -> Linker {
Linker {
store: store.clone(),
map: HashMap::new(),
string2idx: HashMap::new(),
strings: Vec::new(),
allow_shadowing: false,
}
}
/// Configures whether this [`Linker`] will shadow previous duplicate
/// definitions of the same signature.
///
/// By default a [`Linker`] will disallow duplicate definitions of the same
/// signature. This method, however, can be used to instead allow duplicates
/// and have the latest definition take precedence when linking modules.
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
/// linker.func("", "", || {})?;
///
/// // by default, duplicates are disallowed
/// assert!(linker.func("", "", || {}).is_err());
///
/// // but shadowing can be configured to be allowed as well
/// linker.allow_shadowing(true);
/// linker.func("", "", || {})?;
/// # Ok(())
/// # }
/// ```
pub fn allow_shadowing(&mut self, allow: bool) -> &mut Linker {
self.allow_shadowing = allow;
self
}
/// Defines a new item in this [`Linker`].
///
/// This method will add a new definition, by name, to this instance of
/// [`Linker`]. The `module` and `name` provided are what to name the
/// `item`.
///
/// # Errors
///
/// Returns an error if the `module` and `name` already identify an item
/// of the same type as the `item` provided and if shadowing is disallowed.
/// For more information see the documentation on [`Linker`].
///
/// Also returns an error if `item` comes from a different store than this
/// [`Linker`] was created with.
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
/// let ty = GlobalType::new(ValType::I32, Mutability::Const);
/// let global = Global::new(&store, ty, Val::I32(0x1234))?;
/// linker.define("host", "offset", global)?;
///
/// let wat = r#"
/// (module
/// (import "host" "offset" (global i32))
/// (memory 1)
/// (data (global.get 0) "foo")
/// )
/// "#;
/// let module = Module::new(&store, wat)?;
/// linker.instantiate(&module)?;
/// # Ok(())
/// # }
/// ```
pub fn define(
&mut self,
module: &str,
name: &str,
item: impl Into<Extern>,
) -> Result<&mut Self> {
self._define(module, name, item.into())
}
fn _define(&mut self, module: &str, name: &str, item: Extern) -> Result<&mut Self> {
        if !item.comes_from_same_store(&self.store) {
            bail!("all linker items must be from the same store");
        }
bail!("all linker items must be from the same store");
}
self.insert(module, name, item)?;
Ok(self)
}
/// Convenience wrapper to define a function import.
///
/// This method is a convenience wrapper around [`Linker::define`] which
/// internally delegates to [`Func::wrap`].
///
/// # Errors
///
/// Returns an error if the `module` and `name` already identify an item
/// of the same type as the `item` provided and if shadowing is disallowed.
/// For more information see the documentation on [`Linker`].
    ///
    /// # Examples
    ///
    /// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
/// linker.func("host", "double", |x: i32| x * 2)?;
/// linker.func("host", "log_i32", |x: i32| println!("{}", x))?;
/// linker.func("host", "log_str", |caller: Caller, ptr: i32, len: i32| {
/// // ...
/// })?;
///
/// let wat = r#"
/// (module
/// (import "host" "double" (func (param i32) (result i32)))
/// (import "host" "log_i32" (func (param i32)))
/// (import "host" "log_str" (func (param i32 i32)))
/// )
/// "#;
/// let module = Module::new(&store, wat)?;
/// linker.instantiate(&module)?;
/// # Ok(())
/// # }
/// ```
pub fn func<Params, Args>(
&mut self,
module: &str,
name: &str,
func: impl IntoFunc<Params, Args>,
) -> Result<&mut Self> {
self._define(module, name, Func::wrap(&self.store, func).into())
}
/// Convenience wrapper to define an entire [`Instance`] in this linker.
///
/// This function is a convenience wrapper around [`Linker::define`] which
/// will define all exports on `instance` into this linker. The module name
/// for each export is `module_name`, and the name for each export is the
/// name in the instance itself.
///
/// # Errors
///
/// Returns an error if any item is redefined in this linker (for
/// example, the same `module_name` was already defined) and shadowing is
/// disallowed, or if `instance` comes from a different [`Store`] than this
/// [`Linker`] originally was created with.
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
///
/// // Instantiate a small instance...
/// let wat = r#"(module (func (export "run") ))"#;
/// let module = Module::new(&store, wat)?;
/// let instance = linker.instantiate(&module)?;
///
/// // ... and inform the linker that the name of this instance is
/// // `instance1`. This defines the `instance1::run` name for our next
/// // module to use.
/// linker.instance("instance1", &instance)?;
///
/// let wat = r#"
/// (module
/// (import "instance1" "run" (func $instance1_run))
/// (func (export "run")
/// call $instance1_run
/// )
/// )
/// "#;
/// let module = Module::new(&store, wat)?;
/// let instance = linker.instantiate(&module)?;
/// # Ok(())
/// # }
/// ```
pub fn instance(&mut self, module_name: &str, instance: &Instance) -> Result<&mut Self> {
if !Store::same(&self.store, instance.store()) {
bail!("all linker items must be from the same store");
}
for export in instance.exports() {
self.insert(module_name, export.name(), export.into_extern())?;
}
Ok(self)
}
/// Aliases one module's name as another.
///
/// This method will alias all items currently defined under `module` so
/// that they are also defined under the name `as_module`.
///
/// # Errors
///
/// Returns an error if any shadowing violations happen while defining new
/// items.
pub fn alias(&mut self, module: &str, as_module: &str) -> Result<()> {
let items = self
.iter()
.filter(|(m, _, _)| *m == module)
.map(|(_, name, item)| (name.to_string(), item))
.collect::<Vec<_>>();
for (name, item) in items {
self.define(as_module, &name, item)?;
}
Ok(())
}
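`alias` collects the matching entries into a `Vec` before defining them, because new items cannot be inserted while an iterator still borrows the linker. A minimal std-only sketch of the same snapshot-then-insert pattern (the `alias_module` helper and the `i32` stand-in values are illustrative, not wasmtime API):

```rust
use std::collections::HashMap;

/// Copy every entry defined under `module` so it is also defined under
/// `as_module`. The snapshot into `items` avoids mutating the map while
/// iterating over it, the same shape as `Linker::alias`.
fn alias_module(map: &mut HashMap<(String, String), i32>, module: &str, as_module: &str) {
    let items: Vec<(String, i32)> = map
        .iter()
        .filter(|((m, _), _)| m.as_str() == module)
        .map(|((_, name), item)| (name.clone(), *item))
        .collect();
    for (name, item) in items {
        map.insert((as_module.to_string(), name), item);
    }
}
```

The real method reuses `Linker::define`, so shadowing violations surface as errors rather than silent overwrites.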
fn insert(&mut self, module: &str, name: &str, item: Extern) -> Result<()> {
let key = self.import_key(module, name, item.ty());
match self.map.entry(key) {
Entry::Occupied(o) if !self.allow_shadowing => bail!(
"import of `{}::{}` with kind {:?} defined twice",
module,
name,
o.key().kind,
),
Entry::Occupied(mut o) => {
o.insert(item);
}
Entry::Vacant(v) => {
v.insert(item);
}
}
Ok(())
}
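The shadowing rule above hinges on the `HashMap` `Entry` API: an occupied entry is an error unless shadowing is allowed, in which case the newest definition wins. A self-contained sketch of that rule (the `Registry` type is a hypothetical stand-in, with `i32` values in place of `Extern`):

```rust
use std::collections::hash_map::{Entry, HashMap};

/// Toy registry demonstrating the shadowing rule used by `Linker::insert`:
/// re-defining a key fails unless `allow_shadowing` is set, in which case
/// the new value replaces the old one.
struct Registry {
    map: HashMap<String, i32>,
    allow_shadowing: bool,
}

impl Registry {
    fn new(allow_shadowing: bool) -> Self {
        Registry { map: HashMap::new(), allow_shadowing }
    }

    fn insert(&mut self, key: &str, value: i32) -> Result<(), String> {
        match self.map.entry(key.to_string()) {
            Entry::Occupied(_) if !self.allow_shadowing => {
                Err(format!("`{}` defined twice", key))
            }
            Entry::Occupied(mut o) => {
                // Shadowing allowed: the latest definition wins.
                o.insert(value);
                Ok(())
            }
            Entry::Vacant(v) => {
                v.insert(value);
                Ok(())
            }
        }
    }

    fn get(&self, key: &str) -> Option<i32> {
        self.map.get(key).copied()
    }
}
```

Using `entry` means the map is probed only once per definition, whether the key is fresh, rejected, or shadowed.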
fn import_key(&mut self, module: &str, name: &str, ty: ExternType) -> ImportKey {
ImportKey {
module: self.intern_str(module),
name: self.intern_str(name),
kind: self.import_kind(ty),
}
}
fn import_kind(&self, ty: ExternType) -> ImportKind {
match ty {
ExternType::Func(f) => ImportKind::Func(f),
ExternType::Global(f) => ImportKind::Global(f),
ExternType::Memory(_) => ImportKind::Memory,
ExternType::Table(_) => ImportKind::Table,
}
}
fn intern_str(&mut self, string: &str) -> usize {
if let Some(idx) = self.string2idx.get(string) {
return *idx;
}
let string: Rc<str> = string.into();
let idx = self.strings.len();
self.strings.push(string.clone());
self.string2idx.insert(string, idx);
idx
}
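`intern_str` deduplicates module and item names so that `ImportKey` can store and compare cheap `usize` indices instead of strings. The same interning pattern in isolation (the `Interner` type here is illustrative, not part of wasmtime):

```rust
use std::collections::HashMap;
use std::rc::Rc;

/// Minimal string interner: each distinct string is stored exactly once
/// and identified by a stable `usize` index, mirroring `Linker::intern_str`.
struct Interner {
    string2idx: HashMap<Rc<str>, usize>,
    strings: Vec<Rc<str>>,
}

impl Interner {
    fn new() -> Self {
        Interner { string2idx: HashMap::new(), strings: Vec::new() }
    }

    fn intern(&mut self, s: &str) -> usize {
        if let Some(&idx) = self.string2idx.get(s) {
            return idx; // already interned: reuse the existing index
        }
        let s: Rc<str> = s.into();
        let idx = self.strings.len();
        self.strings.push(s.clone());
        self.string2idx.insert(s, idx);
        idx
    }

    fn resolve(&self, idx: usize) -> &str {
        &self.strings[idx]
    }
}
```

The `Rc<str>` lets the map key and the `Vec` slot share one allocation per distinct string.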
/// Attempts to instantiate the `module` provided.
///
/// This method will attempt to assemble a list of imports that correspond
/// to the imports required by the [`Module`] provided. This list
/// of imports is then passed to [`Instance::new`] to continue the
/// instantiation process.
///
/// Each import of `module` will be looked up in this [`Linker`] and must
/// have previously been defined. If it was previously defined with an
/// incorrect signature or if it was not previously defined then an error
/// will be returned because the import cannot be satisfied.
///
/// # Errors
///
/// This method can fail because an import may not be found, or because
/// instantiation itself may fail. For information on instantiation
/// failures see [`Instance::new`].
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
/// linker.func("host", "double", |x: i32| x * 2)?;
///
/// let wat = r#"
/// (module
/// (import "host" "double" (func (param i32) (result i32)))
/// )
/// "#;
/// let module = Module::new(&store, wat)?;
/// linker.instantiate(&module)?;
/// # Ok(())
/// # }
/// ```
pub fn instantiate(&self, module: &Module) -> Result<Instance> {
let mut imports = Vec::new();
for import in module.imports() {
if let Some(item) = self.get(&import) {
imports.push(item);
continue;
}
let mut options = String::new();
for i in self.map.keys() {
if &*self.strings[i.module] != import.module()
|| &*self.strings[i.name] != import.name()
{
continue;
}
options.push_str(" * ");
options.push_str(&format!("{:?}", i.kind));
options.push_str("\n");
}
if options.len() == 0 {
bail!(
"unknown import: `{}::{}` has not been defined",
import.module(),
import.name()
)
}
bail!(
"incompatible import type for `{}::{}` specified\n\
desired signature was: {:?}\n\
signatures available:\n\n{}",
import.module(),
import.name(),
import.ty(),
options,
)
}
Instance::new(module, &imports)
}
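The resolution loop in `instantiate` boils down to: for each import the module requires, find a matching definition or fail with a descriptive error. A std-only sketch of that lookup (the `resolve_imports` helper is hypothetical; `i32` values stand in for `Extern` items):

```rust
use std::collections::HashMap;

/// Resolve each required `(module, name)` import against a definition map,
/// mirroring the lookup loop in `Linker::instantiate`.
fn resolve_imports(
    required: &[(&str, &str)],
    defined: &HashMap<(String, String), i32>,
) -> Result<Vec<i32>, String> {
    let mut out = Vec::new();
    for (module, name) in required {
        match defined.get(&(module.to_string(), name.to_string())) {
            Some(item) => out.push(*item),
            // As in `instantiate`, an unsatisfied import is a hard error.
            None => {
                return Err(format!(
                    "unknown import: `{}::{}` has not been defined",
                    module, name
                ))
            }
        }
    }
    Ok(out)
}
```

The real method goes further when the name matches but the kind does not: it lists the signatures that *are* available under that name, which makes type mismatches much easier to debug.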
/// Returns the [`Store`] that this linker is connected to.
pub fn store(&self) -> &Store {
&self.store
}
/// Returns an iterator over all items defined in this `Linker`.
///
/// The iterator returned will yield 3-tuples where the first two elements
/// are the module name and item name for the external item, and the third
/// item is the item itself that is defined.
///
/// Note that multiple `Extern` items may be defined for the same
/// module/name pair.
pub fn iter(&self) -> impl Iterator<Item = (&str, &str, Extern)> {
self.map.iter().map(move |(key, item)| {
(
&*self.strings[key.module],
&*self.strings[key.name],
item.clone(),
)
})
}
/// Looks up a value in this `Linker` which matches the `import` type
/// provided.
///
/// Returns `None` if no match was found.
pub fn get(&self, import: &ImportType) -> Option<Extern> {
let key = ImportKey {
module: *self.string2idx.get(import.module())?,
name: *self.string2idx.get(import.name())?,
kind: self.import_kind(import.ty()),
};
self.map.get(&key).cloned()
}
/// Returns all items defined for the `module` and `name` pair.
///
/// This may return an empty iterator, but it may also return multiple items
/// if the module/name have been defined twice.
pub fn get_by_name<'a: 'p, 'p>(
&'a self,
module: &'p str,
name: &'p str,
) -> impl Iterator<Item = &'a Extern> + 'p {
self.map
.iter()
.filter(move |(key, _item)| {
&*self.strings[key.module] == module && &*self.strings[key.name] == name
})
.map(|(_, item)| item)
}
/// Returns the single item defined for the `module` and `name` pair.
///
/// Unlike the similar [`Linker::get_by_name`] method this function returns
/// a single `Extern` item. If the `module` and `name` pair isn't defined
/// in this linker then an error is returned. If more than one value exists
/// for the `module` and `name` pairs, then an error is returned as well.
pub fn get_one_by_name(&self, module: &str, name: &str) -> Result<Extern> {
let mut items = self.get_by_name(module, name);
let ret = items
.next()
.ok_or_else(|| anyhow!("no item named `{}` in `{}`", name, module))?;
if items.next().is_some() {
bail!("too many items named `{}` in `{}`", name, module);
}
Ok(ret.clone())
}
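The zero/one/many check in `get_one_by_name` is a general iterator pattern: take the first match, then error if another one exists. A generic sketch (the `exactly_one` helper is illustrative, not wasmtime API):

```rust
/// Return the single item yielded by `items`, erroring on zero matches or
/// on more than one, mirroring `Linker::get_one_by_name`.
fn exactly_one<T>(mut items: impl Iterator<Item = T>) -> Result<T, String> {
    let first = items.next().ok_or_else(|| "no matching item".to_string())?;
    if items.next().is_some() {
        return Err("more than one matching item".to_string());
    }
    Ok(first)
}
```

Because only `next()` is called twice, the check stops early and never walks the rest of a long iterator.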
} | /// # Examples
/// | random_line_split |
linker.rs | use crate::{
Extern, ExternType, Func, FuncType, GlobalType, ImportType, Instance, IntoFunc, Module, Store,
};
use anyhow::{anyhow, bail, Result};
use std::collections::hash_map::{Entry, HashMap};
use std::rc::Rc;
/// Structure used to link wasm modules/instances together.
///
/// This structure is used to assist in instantiating a [`Module`]. A `Linker`
/// is a way of performing name resolution to make instantiating a module easier
/// (as opposed to calling [`Instance::new`]). `Linker` is a name-based resolver
/// where names are dynamically defined and then used to instantiate a
/// [`Module`]. The goal of a `Linker` is to have a one-argument method,
/// [`Linker::instantiate`], which takes a [`Module`] and produces an
/// [`Instance`]. This method will automatically select all the right imports
/// for the [`Module`] to be instantiated, and will otherwise return an error
/// if an import isn't satisfied.
///
/// ## Name Resolution
///
/// As mentioned previously, `Linker` is a form of name resolver. It will be
/// using the string-based names of imports on a module to attempt to select a
/// matching item to hook up to it. This name resolution has two levels of
/// namespaces: a module level and a name level. Each item is defined within a
/// module and then has its own name. This basically follows the wasm standard
/// for modularization.
///
/// Names in a `Linker` can be defined twice, but only for different signatures
/// of items. This means that every item defined in a `Linker` has a unique
/// name/type pair. For example you can define two functions with the module
/// name `foo` and item name `bar`, so long as they have different function
/// signatures. Currently, duplicate memories and tables are not allowed;
/// only one per name is allowed.
///
/// Note that allowing duplicates by shadowing the previous definition can be
/// controlled with the [`Linker::allow_shadowing`] method as well.
pub struct Linker {
store: Store,
string2idx: HashMap<Rc<str>, usize>,
strings: Vec<Rc<str>>,
map: HashMap<ImportKey, Extern>,
allow_shadowing: bool,
}
#[derive(Hash, PartialEq, Eq)]
struct ImportKey {
name: usize,
module: usize,
kind: ImportKind,
}
#[derive(Hash, PartialEq, Eq, Debug)]
enum ImportKind {
Func(FuncType),
Global(GlobalType),
Memory,
Table,
}
impl Linker {
/// Creates a new [`Linker`].
///
/// This function will create a new [`Linker`] which is ready to start
/// linking modules. All items defined in this linker and produced by this
/// linker will be connected with `store` and must come from the same
/// `store`.
///
/// # Examples
///
/// ```
/// use wasmtime::{Linker, Store};
///
/// let store = Store::default();
/// let mut linker = Linker::new(&store);
/// // ...
/// ```
pub fn new(store: &Store) -> Linker {
Linker {
store: store.clone(),
map: HashMap::new(),
string2idx: HashMap::new(),
strings: Vec::new(),
allow_shadowing: false,
}
}
/// Configures whether this [`Linker`] will shadow previous duplicate
/// definitions of the same signature.
///
/// By default a [`Linker`] will disallow duplicate definitions of the same
/// signature. This method, however, can be used to instead allow duplicates
/// and have the latest definition take precedence when linking modules.
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
/// linker.func("", "", || {})?;
///
/// // by default, duplicates are disallowed
/// assert!(linker.func("", "", || {}).is_err());
///
/// // but shadowing can be configured to be allowed as well
/// linker.allow_shadowing(true);
/// linker.func("", "", || {})?;
/// # Ok(())
/// # }
/// ```
pub fn allow_shadowing(&mut self, allow: bool) -> &mut Linker {
self.allow_shadowing = allow;
self
}
/// Defines a new item in this [`Linker`].
///
/// This method will add a new definition, by name, to this instance of
/// [`Linker`]. The `module` and `name` provided are what to name the
/// `item`.
///
/// # Errors
///
/// Returns an error if the `module` and `name` already identify an item
/// of the same type as the `item` provided and if shadowing is disallowed.
/// For more information see the documentation on [`Linker`].
///
/// Also returns an error if `item` comes from a different store than this
/// [`Linker`] was created with.
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
/// let ty = GlobalType::new(ValType::I32, Mutability::Const);
/// let global = Global::new(&store, ty, Val::I32(0x1234))?;
/// linker.define("host", "offset", global)?;
///
/// let wat = r#"
/// (module
/// (import "host" "offset" (global i32))
/// (memory 1)
/// (data (global.get 0) "foo")
/// )
/// "#;
/// let module = Module::new(&store, wat)?;
/// linker.instantiate(&module)?;
/// # Ok(())
/// # }
/// ```
pub fn define(
&mut self,
module: &str,
name: &str,
item: impl Into<Extern>,
) -> Result<&mut Self> {
self._define(module, name, item.into())
}
fn _define(&mut self, module: &str, name: &str, item: Extern) -> Result<&mut Self> {
if !item.comes_from_same_store(&self.store) {
bail!("all linker items must be from the same store");
}
self.insert(module, name, item)?;
Ok(self)
}
/// Convenience wrapper to define a function import.
///
/// This method is a convenience wrapper around [`Linker::define`] which
/// internally delegates to [`Func::wrap`].
///
/// # Errors
///
/// Returns an error if the `module` and `name` already identify an item
/// of the same type as the `item` provided and if shadowing is disallowed.
/// For more information see the documentation on [`Linker`].
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
/// linker.func("host", "double", |x: i32| x * 2)?;
/// linker.func("host", "log_i32", |x: i32| println!("{}", x))?;
/// linker.func("host", "log_str", |caller: Caller, ptr: i32, len: i32| {
/// // ...
/// })?;
///
/// let wat = r#"
/// (module
/// (import "host" "double" (func (param i32) (result i32)))
/// (import "host" "log_i32" (func (param i32)))
/// (import "host" "log_str" (func (param i32 i32)))
/// )
/// "#;
/// let module = Module::new(&store, wat)?;
/// linker.instantiate(&module)?;
/// # Ok(())
/// # }
/// ```
pub fn func<Params, Args>(
&mut self,
module: &str,
name: &str,
func: impl IntoFunc<Params, Args>,
) -> Result<&mut Self> {
self._define(module, name, Func::wrap(&self.store, func).into())
}
/// Convenience wrapper to define an entire [`Instance`] in this linker.
///
/// This function is a convenience wrapper around [`Linker::define`] which
/// will define all exports on `instance` into this linker. The module name
/// for each export is `module_name`, and the name for each export is the
/// name in the instance itself.
///
/// # Errors
///
/// Returns an error if any item is redefined in this linker (for
/// example, the same `module_name` was already defined) and shadowing is
/// disallowed, or if `instance` comes from a different [`Store`] than this
/// [`Linker`] originally was created with.
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
///
/// // Instantiate a small instance...
/// let wat = r#"(module (func (export "run") ))"#;
/// let module = Module::new(&store, wat)?;
/// let instance = linker.instantiate(&module)?;
///
/// // ... and inform the linker that the name of this instance is
/// // `instance1`. This defines the `instance1::run` name for our next
/// // module to use.
/// linker.instance("instance1", &instance)?;
///
/// let wat = r#"
/// (module
/// (import "instance1" "run" (func $instance1_run))
/// (func (export "run")
/// call $instance1_run
/// )
/// )
/// "#;
/// let module = Module::new(&store, wat)?;
/// let instance = linker.instantiate(&module)?;
/// # Ok(())
/// # }
/// ```
pub fn instance(&mut self, module_name: &str, instance: &Instance) -> Result<&mut Self> {
if !Store::same(&self.store, instance.store()) {
bail!("all linker items must be from the same store");
}
for export in instance.exports() {
self.insert(module_name, export.name(), export.into_extern())?;
}
Ok(self)
}
/// Aliases one module's name as another.
///
/// This method will alias all items currently defined under `module` so
/// that they are also defined under the name `as_module`.
///
/// # Errors
///
/// Returns an error if any shadowing violations happen while defining new
/// items.
pub fn alias(&mut self, module: &str, as_module: &str) -> Result<()> {
let items = self
.iter()
.filter(|(m, _, _)| *m == module)
.map(|(_, name, item)| (name.to_string(), item))
.collect::<Vec<_>>();
for (name, item) in items {
self.define(as_module, &name, item)?;
}
Ok(())
}
fn insert(&mut self, module: &str, name: &str, item: Extern) -> Result<()> {
let key = self.import_key(module, name, item.ty());
match self.map.entry(key) {
Entry::Occupied(o) if !self.allow_shadowing => bail!(
"import of `{}::{}` with kind {:?} defined twice",
module,
name,
o.key().kind,
),
Entry::Occupied(mut o) => {
o.insert(item);
}
Entry::Vacant(v) => {
v.insert(item);
}
}
Ok(())
}
fn | (&mut self, module: &str, name: &str, ty: ExternType) -> ImportKey {
ImportKey {
module: self.intern_str(module),
name: self.intern_str(name),
kind: self.import_kind(ty),
}
}
fn import_kind(&self, ty: ExternType) -> ImportKind {
match ty {
ExternType::Func(f) => ImportKind::Func(f),
ExternType::Global(f) => ImportKind::Global(f),
ExternType::Memory(_) => ImportKind::Memory,
ExternType::Table(_) => ImportKind::Table,
}
}
fn intern_str(&mut self, string: &str) -> usize {
if let Some(idx) = self.string2idx.get(string) {
return *idx;
}
let string: Rc<str> = string.into();
let idx = self.strings.len();
self.strings.push(string.clone());
self.string2idx.insert(string, idx);
idx
}
/// Attempts to instantiate the `module` provided.
///
/// This method will attempt to assemble a list of imports that correspond
/// to the imports required by the [`Module`] provided. This list
/// of imports is then passed to [`Instance::new`] to continue the
/// instantiation process.
///
/// Each import of `module` will be looked up in this [`Linker`] and must
/// have previously been defined. If it was previously defined with an
/// incorrect signature or if it was not previously defined then an error
/// will be returned because the import cannot be satisfied.
///
/// # Errors
///
/// This method can fail because an import may not be found, or because
/// instantiation itself may fail. For information on instantiation
/// failures see [`Instance::new`].
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
/// linker.func("host", "double", |x: i32| x * 2)?;
///
/// let wat = r#"
/// (module
/// (import "host" "double" (func (param i32) (result i32)))
/// )
/// "#;
/// let module = Module::new(&store, wat)?;
/// linker.instantiate(&module)?;
/// # Ok(())
/// # }
/// ```
pub fn instantiate(&self, module: &Module) -> Result<Instance> {
let mut imports = Vec::new();
for import in module.imports() {
if let Some(item) = self.get(&import) {
imports.push(item);
continue;
}
let mut options = String::new();
for i in self.map.keys() {
if &*self.strings[i.module] != import.module()
|| &*self.strings[i.name] != import.name()
{
continue;
}
options.push_str(" * ");
options.push_str(&format!("{:?}", i.kind));
options.push_str("\n");
}
if options.len() == 0 {
bail!(
"unknown import: `{}::{}` has not been defined",
import.module(),
import.name()
)
}
bail!(
"incompatible import type for `{}::{}` specified\n\
desired signature was: {:?}\n\
signatures available:\n\n{}",
import.module(),
import.name(),
import.ty(),
options,
)
}
Instance::new(module, &imports)
}
/// Returns the [`Store`] that this linker is connected to.
pub fn store(&self) -> &Store {
&self.store
}
/// Returns an iterator over all items defined in this `Linker`.
///
/// The iterator returned will yield 3-tuples where the first two elements
/// are the module name and item name for the external item, and the third
/// item is the item itself that is defined.
///
/// Note that multiple `Extern` items may be defined for the same
/// module/name pair.
pub fn iter(&self) -> impl Iterator<Item = (&str, &str, Extern)> {
self.map.iter().map(move |(key, item)| {
(
&*self.strings[key.module],
&*self.strings[key.name],
item.clone(),
)
})
}
/// Looks up a value in this `Linker` which matches the `import` type
/// provided.
///
/// Returns `None` if no match was found.
pub fn get(&self, import: &ImportType) -> Option<Extern> {
let key = ImportKey {
module: *self.string2idx.get(import.module())?,
name: *self.string2idx.get(import.name())?,
kind: self.import_kind(import.ty()),
};
self.map.get(&key).cloned()
}
/// Returns all items defined for the `module` and `name` pair.
///
/// This may return an empty iterator, but it may also return multiple items
/// if the module/name have been defined twice.
pub fn get_by_name<'a: 'p, 'p>(
&'a self,
module: &'p str,
name: &'p str,
) -> impl Iterator<Item = &'a Extern> + 'p {
self.map
.iter()
.filter(move |(key, _item)| {
&*self.strings[key.module] == module && &*self.strings[key.name] == name
})
.map(|(_, item)| item)
}
/// Returns the single item defined for the `module` and `name` pair.
///
/// Unlike the similar [`Linker::get_by_name`] method this function returns
/// a single `Extern` item. If the `module` and `name` pair isn't defined
/// in this linker then an error is returned. If more than one value exists
/// for the `module` and `name` pairs, then an error is returned as well.
pub fn get_one_by_name(&self, module: &str, name: &str) -> Result<Extern> {
let mut items = self.get_by_name(module, name);
let ret = items
.next()
.ok_or_else(|| anyhow!("no item named `{}` in `{}`", name, module))?;
if items.next().is_some() {
bail!("too many items named `{}` in `{}`", name, module);
}
Ok(ret.clone())
}
}
| import_key | identifier_name |
linker.rs | use crate::{
Extern, ExternType, Func, FuncType, GlobalType, ImportType, Instance, IntoFunc, Module, Store,
};
use anyhow::{anyhow, bail, Result};
use std::collections::hash_map::{Entry, HashMap};
use std::rc::Rc;
/// Structure used to link wasm modules/instances together.
///
/// This structure is used to assist in instantiating a [`Module`]. A `Linker`
/// is a way of performing name resolution to make instantiating a module easier
/// (as opposed to calling [`Instance::new`]). `Linker` is a name-based resolver
/// where names are dynamically defined and then used to instantiate a
/// [`Module`]. The goal of a `Linker` is to have a one-argument method,
/// [`Linker::instantiate`], which takes a [`Module`] and produces an
/// [`Instance`]. This method will automatically select all the right imports
/// for the [`Module`] to be instantiated, and will otherwise return an error
/// if an import isn't satisfied.
///
/// ## Name Resolution
///
/// As mentioned previously, `Linker` is a form of name resolver. It will be
/// using the string-based names of imports on a module to attempt to select a
/// matching item to hook up to it. This name resolution has two levels of
/// namespaces: a module level and a name level. Each item is defined within a
/// module and then has its own name. This basically follows the wasm standard
/// for modularization.
///
/// Names in a `Linker` can be defined twice, but only for different signatures
/// of items. This means that every item defined in a `Linker` has a unique
/// name/type pair. For example you can define two functions with the module
/// name `foo` and item name `bar`, so long as they have different function
/// signatures. Currently, duplicate memories and tables are not allowed;
/// only one per name is allowed.
///
/// Note that allowing duplicates by shadowing the previous definition can be
/// controlled with the [`Linker::allow_shadowing`] method as well.
pub struct Linker {
store: Store,
string2idx: HashMap<Rc<str>, usize>,
strings: Vec<Rc<str>>,
map: HashMap<ImportKey, Extern>,
allow_shadowing: bool,
}
#[derive(Hash, PartialEq, Eq)]
struct ImportKey {
name: usize,
module: usize,
kind: ImportKind,
}
#[derive(Hash, PartialEq, Eq, Debug)]
enum ImportKind {
Func(FuncType),
Global(GlobalType),
Memory,
Table,
}
impl Linker {
/// Creates a new [`Linker`].
///
/// This function will create a new [`Linker`] which is ready to start
/// linking modules. All items defined in this linker and produced by this
/// linker will be connected with `store` and must come from the same
/// `store`.
///
/// # Examples
///
/// ```
/// use wasmtime::{Linker, Store};
///
/// let store = Store::default();
/// let mut linker = Linker::new(&store);
/// // ...
/// ```
pub fn new(store: &Store) -> Linker {
Linker {
store: store.clone(),
map: HashMap::new(),
string2idx: HashMap::new(),
strings: Vec::new(),
allow_shadowing: false,
}
}
/// Configures whether this [`Linker`] will shadow previous duplicate
/// definitions of the same signature.
///
/// By default a [`Linker`] will disallow duplicate definitions of the same
/// signature. This method, however, can be used to instead allow duplicates
/// and have the latest definition take precedence when linking modules.
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
/// linker.func("", "", || {})?;
///
/// // by default, duplicates are disallowed
/// assert!(linker.func("", "", || {}).is_err());
///
/// // but shadowing can be configured to be allowed as well
/// linker.allow_shadowing(true);
/// linker.func("", "", || {})?;
/// # Ok(())
/// # }
/// ```
pub fn allow_shadowing(&mut self, allow: bool) -> &mut Linker {
self.allow_shadowing = allow;
self
}
/// Defines a new item in this [`Linker`].
///
/// This method will add a new definition, by name, to this instance of
/// [`Linker`]. The `module` and `name` provided are what to name the
/// `item`.
///
/// # Errors
///
/// Returns an error if the `module` and `name` already identify an item
/// of the same type as the `item` provided and if shadowing is disallowed.
/// For more information see the documentation on [`Linker`].
///
/// Also returns an error if `item` comes from a different store than this
/// [`Linker`] was created with.
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
/// let ty = GlobalType::new(ValType::I32, Mutability::Const);
/// let global = Global::new(&store, ty, Val::I32(0x1234))?;
/// linker.define("host", "offset", global)?;
///
/// let wat = r#"
/// (module
/// (import "host" "offset" (global i32))
/// (memory 1)
/// (data (global.get 0) "foo")
/// )
/// "#;
/// let module = Module::new(&store, wat)?;
/// linker.instantiate(&module)?;
/// # Ok(())
/// # }
/// ```
pub fn define(
&mut self,
module: &str,
name: &str,
item: impl Into<Extern>,
) -> Result<&mut Self> {
self._define(module, name, item.into())
}
fn _define(&mut self, module: &str, name: &str, item: Extern) -> Result<&mut Self> {
if !item.comes_from_same_store(&self.store) {
bail!("all linker items must be from the same store");
}
self.insert(module, name, item)?;
Ok(self)
}
/// Convenience wrapper to define a function import.
///
/// This method is a convenience wrapper around [`Linker::define`] which
/// internally delegates to [`Func::wrap`].
///
/// # Errors
///
/// Returns an error if the `module` and `name` already identify an item
/// of the same type as the `item` provided and if shadowing is disallowed.
/// For more information see the documentation on [`Linker`].
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
/// linker.func("host", "double", |x: i32| x * 2)?;
/// linker.func("host", "log_i32", |x: i32| println!("{}", x))?;
/// linker.func("host", "log_str", |caller: Caller, ptr: i32, len: i32| {
/// // ...
/// })?;
///
/// let wat = r#"
/// (module
/// (import "host" "double" (func (param i32) (result i32)))
/// (import "host" "log_i32" (func (param i32)))
/// (import "host" "log_str" (func (param i32 i32)))
/// )
/// "#;
/// let module = Module::new(&store, wat)?;
/// linker.instantiate(&module)?;
/// # Ok(())
/// # }
/// ```
pub fn func<Params, Args>(
&mut self,
module: &str,
name: &str,
func: impl IntoFunc<Params, Args>,
) -> Result<&mut Self> {
self._define(module, name, Func::wrap(&self.store, func).into())
}
/// Convenience wrapper to define an entire [`Instance`] in this linker.
///
/// This function is a convenience wrapper around [`Linker::define`] which
/// will define all exports on `instance` into this linker. The module name
/// for each export is `module_name`, and the name for each export is the
/// name in the instance itself.
///
/// # Errors
///
/// Returns an error if any item is redefined in this linker (for
/// example, the same `module_name` was already defined) and shadowing is
/// disallowed, or if `instance` comes from a different [`Store`] than this
/// [`Linker`] originally was created with.
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
///
/// // Instantiate a small instance...
/// let wat = r#"(module (func (export "run") ))"#;
/// let module = Module::new(&store, wat)?;
/// let instance = linker.instantiate(&module)?;
///
/// // ... and inform the linker that the name of this instance is
/// // `instance1`. This defines the `instance1::run` name for our next
/// // module to use.
/// linker.instance("instance1", &instance)?;
///
/// let wat = r#"
/// (module
/// (import "instance1" "run" (func $instance1_run))
/// (func (export "run")
/// call $instance1_run
/// )
/// )
/// "#;
/// let module = Module::new(&store, wat)?;
/// let instance = linker.instantiate(&module)?;
/// # Ok(())
/// # }
/// ```
pub fn instance(&mut self, module_name: &str, instance: &Instance) -> Result<&mut Self> |
/// Aliases one module's name as another.
///
/// This method will alias all items currently defined under `module` so
/// that they are also defined under the name `as_module`.
///
/// # Errors
///
/// Returns an error if any shadowing violations happen while defining new
/// items.
pub fn alias(&mut self, module: &str, as_module: &str) -> Result<()> {
let items = self
.iter()
.filter(|(m, _, _)| *m == module)
.map(|(_, name, item)| (name.to_string(), item))
.collect::<Vec<_>>();
for (name, item) in items {
self.define(as_module, &name, item)?;
}
Ok(())
}
fn insert(&mut self, module: &str, name: &str, item: Extern) -> Result<()> {
let key = self.import_key(module, name, item.ty());
match self.map.entry(key) {
Entry::Occupied(o) if !self.allow_shadowing => bail!(
"import of `{}::{}` with kind {:?} defined twice",
module,
name,
o.key().kind,
),
Entry::Occupied(mut o) => {
o.insert(item);
}
Entry::Vacant(v) => {
v.insert(item);
}
}
Ok(())
}
fn import_key(&mut self, module: &str, name: &str, ty: ExternType) -> ImportKey {
ImportKey {
module: self.intern_str(module),
name: self.intern_str(name),
kind: self.import_kind(ty),
}
}
fn import_kind(&self, ty: ExternType) -> ImportKind {
match ty {
ExternType::Func(f) => ImportKind::Func(f),
ExternType::Global(f) => ImportKind::Global(f),
ExternType::Memory(_) => ImportKind::Memory,
ExternType::Table(_) => ImportKind::Table,
}
}
fn intern_str(&mut self, string: &str) -> usize {
if let Some(idx) = self.string2idx.get(string) {
return *idx;
}
let string: Rc<str> = string.into();
let idx = self.strings.len();
self.strings.push(string.clone());
self.string2idx.insert(string, idx);
idx
}
/// Attempts to instantiate the `module` provided.
///
/// This method will attempt to assemble a list of imports that correspond
/// to the imports required by the [`Module`] provided. This list
/// of imports is then passed to [`Instance::new`] to continue the
/// instantiation process.
///
/// Each import of `module` will be looked up in this [`Linker`] and must
/// have previously been defined. If it was previously defined with an
/// incorrect signature or if it was not previously defined then an error
/// will be returned because the import cannot be satisfied.
///
/// # Errors
///
/// This method can fail because an import may not be found, or because
/// instantiation itself may fail. For information on instantiation
/// failures see [`Instance::new`].
///
/// # Examples
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
/// linker.func("host", "double", |x: i32| x * 2)?;
///
/// let wat = r#"
/// (module
/// (import "host" "double" (func (param i32) (result i32)))
/// )
/// "#;
/// let module = Module::new(&store, wat)?;
/// linker.instantiate(&module)?;
/// # Ok(())
/// # }
/// ```
pub fn instantiate(&self, module: &Module) -> Result<Instance> {
let mut imports = Vec::new();
for import in module.imports() {
if let Some(item) = self.get(&import) {
imports.push(item);
continue;
}
let mut options = String::new();
for i in self.map.keys() {
if &*self.strings[i.module] != import.module()
|| &*self.strings[i.name] != import.name()
{
continue;
}
options.push_str(" * ");
options.push_str(&format!("{:?}", i.kind));
options.push_str("\n");
}
if options.len() == 0 {
bail!(
"unknown import: `{}::{}` has not been defined",
import.module(),
import.name()
)
}
bail!(
"incompatible import type for `{}::{}` specified\n\
desired signature was: {:?}\n\
signatures available:\n\n{}",
import.module(),
import.name(),
import.ty(),
options,
)
}
Instance::new(module, &imports)
}
/// Returns the [`Store`] that this linker is connected to.
pub fn store(&self) -> &Store {
&self.store
}
/// Returns an iterator over all items defined in this `Linker`.
///
/// The iterator returned will yield 3-tuples where the first two elements
/// are the module name and item name for the external item, and the third
/// item is the item itself that is defined.
///
/// Note that multiple `Extern` items may be defined for the same
/// module/name pair.
pub fn iter(&self) -> impl Iterator<Item = (&str, &str, Extern)> {
self.map.iter().map(move |(key, item)| {
(
&*self.strings[key.module],
&*self.strings[key.name],
item.clone(),
)
})
}
/// Looks up a value in this `Linker` which matches the `import` type
/// provided.
///
/// Returns `None` if no match was found.
pub fn get(&self, import: &ImportType) -> Option<Extern> {
let key = ImportKey {
module: *self.string2idx.get(import.module())?,
name: *self.string2idx.get(import.name())?,
kind: self.import_kind(import.ty()),
};
self.map.get(&key).cloned()
}
/// Returns all items defined for the `module` and `name` pair.
///
/// This may return an empty iterator, but it may also return multiple items
/// if the module/name have been defined twice.
pub fn get_by_name<'a: 'p, 'p>(
&'a self,
module: &'p str,
name: &'p str,
) -> impl Iterator<Item = &'a Extern> + 'p {
self.map
.iter()
.filter(move |(key, _item)| {
&*self.strings[key.module] == module && &*self.strings[key.name] == name
})
.map(|(_, item)| item)
}
/// Returns the single item defined for the `module` and `name` pair.
///
/// Unlike the similar [`Linker::get_by_name`] method this function returns
/// a single `Extern` item. If the `module` and `name` pair isn't defined
/// in this linker then an error is returned. If more than one value exists
/// for the `module` and `name` pairs, then an error is returned as well.
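///
/// # Examples
///
/// A sketch in the style of the other examples in this file:
///
/// ```
/// # use wasmtime::*;
/// # fn main() -> anyhow::Result<()> {
/// # let store = Store::default();
/// let mut linker = Linker::new(&store);
/// linker.func("host", "double", |x: i32| x * 2)?;
/// let item = linker.get_one_by_name("host", "double")?;
/// # Ok(())
/// # }
/// ```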
pub fn get_one_by_name(&self, module: &str, name: &str) -> Result<Extern> {
let mut items = self.get_by_name(module, name);
let ret = items
.next()
.ok_or_else(|| anyhow!("no item named `{}` in `{}`", name, module))?;
if items.next().is_some() {
bail!("too many items named `{}` in `{}`", name, module);
}
Ok(ret.clone())
}
}
| {
if !Store::same(&self.store, instance.store()) {
bail!("all linker items must be from the same store");
}
for export in instance.exports() {
self.insert(module_name, export.name(), export.into_extern())?;
}
Ok(self)
} | identifier_body |
lib.rs | //! # Acteur Actor System
//!
//! A safe & opinionated actor-like framework written in Rust that just works. Simple, robust, fast, documented.
//!
//!<div align="center">
//! <!-- Crates version -->
//! <a href="https://crates.io/crates/acteur">
//! <img src="https://img.shields.io/crates/v/acteur.svg?style=flat-square"
//! alt="Crates.io version" />
//! </a>
//! <!-- docs.rs docs -->
//! <a href="https://docs.rs/acteur">
//! <img src="https://img.shields.io/badge/docs-latest-blue.svg?style=flat-square"
//! alt="docs.rs docs" />
//! </a>
//! </div>
//!
//! ## Status update
//!
//! #### Update 1:
//! So, I took some time to think about this framework and I intend to move it toward a business
//! logic + distributed framework. The idea is to make a framework that allows you to write identified
//! aggregates/models/actors without much burden.
//!
//! #### Update 2:
//! I'm playing with raft and sled in order to implement the cluster part. You can see it in the file
//! playing_with_raft.rs
//!
//! ## Motivation
//!
//! Actors are cool. Many people write about them and Actix rules the benchmarks. But writing a backend
//! system that spawns several servers using actors is not easy; it actually brings many other
//! complexities. Actors are not a bad abstraction, but they are a solution for concurrency, not for
//! business logic organization. They tangentially solve some problems, which is nice, but they
//! introduce others. So, this framework seeks to implement something very similar to actors, with many
//! adaptations and niceties that make it suitable for writing business logic.
//!
//! That said, Acteur is probably **not** the tool you want if:
//!
//! - You want to have a full ACID compliant system
//! - You want to fully follow the Actor model
//! - You need to scale to A LOT of traffic, in which case you will need more than one server. (I'm planning to
//! implement some multi-server clustering, but for now, only one server.)
//!
//! But it may help you if you want:
//!
//! - To have a database but not incur the cost of READ, APPLY, SAVE on every operation, keeping object
//! instances in RAM instead.
//! - You don't want to deal with optimistic concurrency, and you want messages to be processed one by one
//! for each ID, but concurrently between IDs.
//! - You want to make a backend for an online videogame with many entities interacting at the same time but don't
//! want to go all the way with ECS.
//!
//! ## Main features of Acteur
//!
//! This actor system is a bit different than other frameworks. It work under the following premises:
//! - **High-level**: The framework is oriented to map business logic rather than task concurrency.
//! - **Simple**: The API should be small, simple and intuitive. No surprises.
//! - **Concurrent**: The system should be fast and use all available CPU cores.
//! - **Documented**: Everything must be documented with exhaustive examples.
//!
//! ### Regarding the implementation:
//!
//! - Acteur is **asynchronous** and uses `async_std` under the hood.
//! - Actors have an *ID* which its type is defined by the developer.
//! - Messages are routed to an *Actor* and an *ID*.
//! - Actor life-cycle is *automatically* managed by the framework.
//! - Messages for the same Actor & ID are *sequential*. Everything else is executed **concurrently**.
//! - Services are provided for other concurrency forms.
//! - Services **don't** have an ID and are concurrent.
//! - Services can **subscribe** to messages and everyone can **publish** messages.
//! - Acteur is **global**, only one instance can exist.
//!
//! ### State of the implementation
//!
//! My main focus now is on adding concurrency and improving ergonomics. Features already implemented:
//!
//! - ☑️ Actor / Service is activated on first message
//! - ☑️ Actor can send messages to other actors / services
//! - ☑️ System can send messages to any actor / service
//! - ☑️ Actors / Services can optionally, respond to messages
//! - ☑️ Services: statefull or stateless, without ID (like real actors) and concurrent.
//! - ☑️ Automatic deallocation of unused actors (after 5 minutes without messages)
//! - ☑️ Services can subscribe to messages
//! - □ Actor deallocation configuration (based in RAM, Actor count or timeout)
//! - □ Clustering: Implement Raft in order to assign each actor to a different server
//!
//! ## Acteur structure
//!
//! In order to use Acteur you just need to implement the correct trait and Acteur will
//! automatically use your implementation when a message is routed to your Actor/Service.
//!
//! The main traits are:
//!
//! - [Actor](./trait.Actor.html): Represents an actor
//! - [Service](./trait.Service.html): Represents a service
//!
//! Just implement them and your Actor/Service is ready to use.
//!
//! For Actors you have two traits in order to handle messages:
//!
//! - [Receive](./trait.Receive.html): Receives a message without responding to it. The most
//! efficient way to handle messages.
//! - [Respond](./trait.Respond.html): Receives a message and allows responding to it. Forces
//! the sender to await until the actor responds.
//!
//! For Services you have other two traits.
//!
//! - [Listen](./trait.Listen.html): Receives a message without responding to it. The most efficient way
//! to handle messages.
//! - [Serve](./trait.Serve.html): Receives a message and allows responding to it. Forces the sender to
//! await until the actor responds.
//!
//! ### Why are you using 4 different trait instead of 1 or 2?
//!
//! I tried to merge the traits, but I didn't find a way to do it because:
//!
//! A) The handle method contains the ActorAssistant and ServiceAssistant types in the signatures,
//! which have different types.
//! B) I don't like to create a response channel for EVERY message when many messages don't need a response.
//!
//! These two factors make 4 combinations: Receive/Respond for Actors and Listen/Serve for Services.
//!
//! I'm still trying to improve the naming and ergonomics. I think the concept will remain, but the ergonomics may change a bit.
//!
//! ## Actors vs Services
//!
//! Acteur provides 2 ways of concurrency. Actors and Services.
//!
//! ### Actors
//!
//! Actors have an ID and will consume messages directed to the same Actor's ID sequentially.
//! That means that if you send 2 messages to the Actor User-32, they will be handled sequentially.
//! On the other side, if you send a message to the Actor User-32 and other to the User-52 the
//! messages will be handled concurrently.
//!
//! That means actor instances keep message order for the same ID, but not between different IDs.
//!
//! ### Services
//!
//! Services, on the other hand, have no ID and are concurrent. That means that you choose
//! how many instances of the Service there will be (Acteur provides a default). Services may or may
//! not have a state, but if they do, it must be Sync (e.g. a Mutex<State>).
//!
//! In short: Services are more like classic actors (or, you can think of them as normal web services),
//! but with a preset concurrency factor. You can have many instances and there is
//! no synchronization of any kind when consuming messages. Think of them as the primitive you
//! use when you want to create something that doesn't fit the actor model of this framework.
//!
//! ### Use cases
//!
//! Choose Actors for entities (Users, Invoices, Players; anything whose instances are identified).
//!
//! Choose Services for Business Logic, Infrastructure, Adapters, etc (Storage, DB access, HTTP services, | //!
//! ## Subscription or Pub/Sub
//!
//! Sometimes we don't want to know who should receive the message; we just subscribe to a type and wait.
//! Acteur models the Pub/Sub pattern with Services. Actors in Acteur can't perform subscriptions, as
//! that would require the framework to know all possible IDs of all possible Actor instances in
//! order to direct the message to the correct one (or all) and it doesn't play well with the deallocation
//! of unused actors.
//!
//! If you want to send messages to some Actors from a Subscription, you can create a Service that
//! subscribes to a message and then figures out to what Actor IDs to send the message. For example,
//! doing a query in the DB/Service in order to get the set of IDs that need to receive some message.
//!
//! Unlike sending to or calling services/actors, publishing doesn't know who needs to receive the
//! message at compilation time. That is the reason behind requiring the Services to subscribe at
//! runtime to any message they want to receive. In order to ensure that services perform the
//! subscriptions, it is a good idea to run `acteur.preload_service<Service>();` for each service
//! that should perform any subscription at the beginning of your Application start.
//!
//! ## Simple Example
//!
//! ```rust,no_run
//! use acteur::{Actor, Receive, ActorAssistant, Acteur};
//! use async_trait::async_trait;
//!
//! #[derive(Debug)]
//! struct Employee {
//! salary: u32
//! }
//!
//! #[async_trait]
//! impl Actor for Employee {
//! type Id = u32;
//!
//! async fn activate(_: Self::Id, _: &ActorAssistant<Self>) -> Self {
//! Employee {
//! salary: 0 // Load from DB or set a default,
//! }
//! }
//! }
//!
//! #[derive(Debug)]
//! struct SalaryChanged(u32);
//!
//! #[async_trait]
//! impl Receive<SalaryChanged> for Employee {
//! async fn handle(&mut self, message: SalaryChanged, _: &ActorAssistant<Employee>) {
//! self.salary = message.0;
//! }
//! }
//!
//! fn main() {
//! let sys = Acteur::new();
//!
//! sys.send_to_actor_sync::<Employee, SalaryChanged>(42, SalaryChanged(55000));
//!
//! sys.wait_until_stopped();
//! }
//!
//! ```
//!
//! ## Why another Actors framework?
//!
//! Some things bothered me.
//!
//! 1. Actor systems are a concurrency layer, but I see examples of them being used for business logic. Using
//! a normal HTTP framework + SQL feels more natural than using Actix.
//! 2. In order to use Actix you need to learn how it works. You need to manage the concurrency,
//! the addresses, etc.
//! 3. Unsafe. I don't want unsafe. I wouldn't trust myself to do something like this in C++;
//! therefore, I don't want to have unsafe code. Rust opens the door for people with less than
//! 10 years of experience in C/C++ to do these kinds of projects in a safer way.
//!
//! After the async_std 1.0 announcement and speaking with some friends, I started to envision how I would
//! like an actor framework to be. Not that Actix and others are wrong, but they are too low level in my
//! opinion and not meant for business logic. I wanted something that just runs without leaking so many
//! underlying concepts. At the same time, I don't think that competing for the last nanosecond is healthy,
//! even less so if the framework is already super fast.
//!
//! ## Common patterns
//!
//! This section will be updated with common patterns you can use in your applications. If
//! you have one you want to add, or just a question about how to do something, let me know with a GitHub Issue.
//!
//! ### Web server
//!
//! Given that all actors are managed by the framework, it is really easy to have, for
//! example, Rocket or Tide receive new HTTP calls and just call `acteur.call_service` or
//! `acteur.call_actor` and wait for the response. You can use the sync version of the calls
//! if you are working with synchronous code. Keep in mind that you can clone Acteur and send
//! it to as many threads/structs as you need.
//!
//! ```rust,no_run
//!
//! use acteur::Acteur;
//!
//! let acteur = Acteur::new();
//!
//! // You can clone and send it to another thread/struct
//! let acteur2 = acteur.clone();
//!
//! ```
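//!
//! Since `Acteur` is clonable, a common pattern is to hand a clone to each worker
//! thread or HTTP handler. A sketch (reusing the `Employee` and `SalaryChanged`
//! types from the example above; illustrative, not a complete program):
//!
//! ```rust,no_run
//! # use acteur::Acteur;
//! let acteur = Acteur::new();
//! let handler_acteur = acteur.clone();
//!
//! std::thread::spawn(move || {
//!     // From synchronous code, use the sync API, e.g.:
//!     // handler_acteur.send_to_actor_sync::<Employee, SalaryChanged>(42, SalaryChanged(55000));
//! });
//! ```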
//!
//! If you need actors to query databases, it is generally a good idea to keep the database
//! connection / pool in a service, where you can handle connection errors, reconnect when needed,
//! and control the concurrency.
//!
//! ## Error handling
//!
//! If you have operations that can fail (for example, database connections, network connections, etc.),
//! it is better to encode them in services and reserve actors for operations that cannot fail.
//!
//! It is perfectly OK to encode a failure, from the point of view of the business rules, in an actor: for
//! example, in a videogame, a character cannot attack another character because the second is
//! invulnerable.
//!
//! So, keep anything that can fail because of external circumstances (network, hard drive, etc.) in services
//! and let actors request the services for whatever they need.
//!
//! If you have an error that should stop the application startup, like a database connection, add it to
//! a service's construction and use the `preload_service` method to try to start the service at
//! app startup, letting the app crash if something goes wrong.
//!
//! ## Safe Rust
//!
//! No unsafe code was directly used in this crate. You can check in lib.rs the `#![deny(unsafe_code)]` line.
//!
//! ## Contributing
//!
//! First of all, I would be really happy if you decide to check the framework and contribute to it! Just open
//! an issue / pull request and we can check what you would like to implement. Check more about contributing in
//! here: <https://github.com/DavidBM/acteur-rs/blob/master/CONTRIBUTING.md>
//!
//! ## License
//!
//! <sup>
//! Licensed under either of <a href="LICENSE-APACHE">Apache License, Version
//! 2.0</a> or <a href="LICENSE-MIT">MIT license</a> at your option.
//! </sup>
//!
//! <br/>
//!
//! <sub>
//! Unless you explicitly state otherwise, any contribution intentionally submitted
//! for inclusion in this crate by you, as defined in the Apache-2.0 license, shall
//! be dual licensed as above, without any additional terms or conditions.
//! </sub>
//!
#![deny(unsafe_code)]
#[macro_use]
mod utils;
mod actors;
mod facade;
mod services;
mod system_director;
pub use facade::Acteur;
pub use actors::actor::Actor;
pub use actors::assistant::ActorAssistant;
pub use actors::handle::{Receive, Respond};
pub use services::handle::{Listen, Serve};
pub use services::service::{Service, ServiceConcurrency, ServiceConfiguration};
pub use services::system_facade::ServiceAssistant; | //! calculations of some sort that doesn't belong to any Actor, etc) and for subscribing to messages (Pub/Sub) | random_line_split |
progress.rs | use num_enum::IntoPrimitive;
use once_cell::sync::Lazy;
use std::sync::mpsc::Sender;
use std::{mem, pin::Pin, ptr};
use wchar::*;
use widestring::*;
use winapi::shared::basetsd;
use winapi::shared::minwindef as win;
use winapi::shared::windef::*;
use winapi::um::commctrl;
use winapi::um::errhandlingapi;
use winapi::um::libloaderapi;
use winapi::um::wingdi;
use winapi::um::winuser;
use wslscript_common::error::*;
use wslscript_common::font::Font;
use wslscript_common::wcstring;
use wslscript_common::win32;
pub struct ProgressWindow {
/// Maximum value for progress.
high_limit: usize,
/// Sender to signal for cancellation.
cancel_sender: Option<Sender<()>>,
/// Window handle.
hwnd: HWND,
/// Default font.
font: Font,
}
impl Default for ProgressWindow {
fn default() -> Self {
Self {
high_limit: 0,
cancel_sender: None,
hwnd: ptr::null_mut(),
font: Font::default(),
}
}
}
/// Progress window class name.
static WND_CLASS: Lazy<WideCString> = Lazy::new(|| wcstring("WSLScriptProgress"));
/// Window message for progress update.
pub const WM_PROGRESS: win::UINT = winuser::WM_USER + 1;
/// Child window identifiers.
#[derive(IntoPrimitive, PartialEq)]
#[repr(u16)]
enum Control {
ProgressBar = 100,
Message,
Title,
}
/// Minimum and initial main window size as a (width, height) tuple.
const MIN_WINDOW_SIZE: (i32, i32) = (300, 150);
impl ProgressWindow {
pub fn new(high_limit: usize, cancel_sender: Sender<()>) -> Result<Pin<Box<Self>>, Error> {
use winuser::*;
// register window class
if !Self::is_window_class_registered() {
Self::register_window_class()?;
}
let mut wnd = Pin::new(Box::new(Self::default()));
wnd.high_limit = high_limit;
wnd.cancel_sender = Some(cancel_sender);
let instance = unsafe { libloaderapi::GetModuleHandleW(ptr::null_mut()) };
let title = wchz!("WSL Script");
// create window
#[rustfmt::skip]
let hwnd = unsafe { CreateWindowExW(
WS_EX_TOOLWINDOW | WS_EX_TOPMOST, WND_CLASS.as_ptr(), title.as_ptr(),
WS_OVERLAPPEDWINDOW & !WS_MAXIMIZEBOX | WS_VISIBLE,
CW_USEDEFAULT, CW_USEDEFAULT, MIN_WINDOW_SIZE.0, MIN_WINDOW_SIZE.1,
ptr::null_mut(), ptr::null_mut(), instance,
// self as a `CREATESTRUCT`'s `lpCreateParams`
&*wnd as *const Self as win::LPVOID)
};
if hwnd.is_null() {
return Err(win32::last_error());
}
Ok(wnd)
}
/// Get handle to main window.
pub fn handle(&self) -> HWND {
self.hwnd
}
/// Run message loop.
pub fn run(&self) -> Result<(), Error> {
log::debug!("Starting message loop");
loop {
let mut msg: winuser::MSG = unsafe { mem::zeroed() };
match unsafe { winuser::GetMessageW(&mut msg, ptr::null_mut(), 0, 0) } {
1..=std::i32::MAX => unsafe {
winuser::TranslateMessage(&msg);
winuser::DispatchMessageW(&msg);
},
std::i32::MIN..=-1 => return Err(win32::last_error()),
0 => {
log::debug!("Received WM_QUIT");
return Ok(());
}
}
}
}
| /// Signal that progress should be cancelled.
pub fn cancel(&self) {
if let Some(tx) = &self.cancel_sender {
tx.send(()).unwrap_or_else(|_| {
log::error!("Failed to send cancel signal");
});
}
}
/// Close main window.
pub fn close(&self) {
unsafe { winuser::PostMessageW(self.hwnd, winuser::WM_CLOSE, 0, 0) };
}
/// Create child control windows.
fn create_window_controls(&mut self) -> Result<(), Error> {
use winuser::*;
let instance = unsafe { GetWindowLongPtrW(self.hwnd, GWLP_HINSTANCE) as win::HINSTANCE };
self.font = Font::new_caption(20)?;
// init common controls
let icex = commctrl::INITCOMMONCONTROLSEX {
dwSize: mem::size_of::<commctrl::INITCOMMONCONTROLSEX>() as u32,
dwICC: commctrl::ICC_PROGRESS_CLASS,
};
unsafe { commctrl::InitCommonControlsEx(&icex) };
// progress bar
#[rustfmt::skip]
let hwnd = unsafe { CreateWindowExW(
0, wcstring(commctrl::PROGRESS_CLASS).as_ptr(), ptr::null_mut(),
WS_CHILD | WS_VISIBLE | commctrl::PBS_MARQUEE,
0, 0, 0, 0, self.hwnd,
Control::ProgressBar as u16 as _, instance, ptr::null_mut(),
) };
unsafe { SendMessageW(hwnd, commctrl::PBM_SETRANGE32, 0, self.high_limit as _) };
unsafe { SendMessageW(hwnd, commctrl::PBM_SETMARQUEE, 1, 0) };
// static message area
#[rustfmt::skip]
let hwnd = unsafe { CreateWindowExW(
0, wchz!("STATIC").as_ptr(), ptr::null_mut(),
SS_CENTER | WS_CHILD | WS_VISIBLE,
0, 0, 0, 0, self.hwnd,
Control::Message as u16 as _, instance, ptr::null_mut(),
) };
Self::set_window_font(hwnd, &self.font);
// static title
#[rustfmt::skip]
let hwnd = unsafe { CreateWindowExW(
0, wchz!("STATIC").as_ptr(), ptr::null_mut(),
SS_CENTER | WS_CHILD | WS_VISIBLE,
0, 0, 0, 0, self.hwnd,
Control::Title as u16 as _, instance, ptr::null_mut(),
) };
Self::set_window_font(hwnd, &self.font);
unsafe { SetWindowTextW(hwnd, wchz!("Converting paths...").as_ptr()) };
Ok(())
}
/// Called when client was resized.
fn on_resize(&self, width: i32, _height: i32) {
self.move_control(Control::Title, 10, 10, width - 20, 20);
self.move_control(Control::ProgressBar, 10, 40, width - 20, 30);
self.move_control(Control::Message, 10, 80, width - 20, 20);
}
/// Move control relative to main window.
fn move_control(&self, control: Control, x: i32, y: i32, width: i32, height: i32) {
let hwnd = self.get_control_handle(control);
unsafe { winuser::MoveWindow(hwnd, x, y, width, height, win::TRUE) };
}
/// Get window handle of given control.
fn get_control_handle(&self, control: Control) -> HWND {
unsafe { winuser::GetDlgItem(self.hwnd, control as i32) }
}
/// Set font to given window.
fn set_window_font(hwnd: HWND, font: &Font) {
unsafe {
winuser::SendMessageW(hwnd, winuser::WM_SETFONT, font.handle as _, win::TRUE as _)
};
}
/// Update controls to display given progress.
fn update_progress(&mut self, current: usize, max: usize) {
use commctrl::*;
use winuser::*;
log::debug!("Progress update: {}/{}", current, max);
let msg = format!("{} / {}", current, max);
unsafe {
SetWindowTextW(
self.get_control_handle(Control::Message),
wcstring(msg).as_ptr(),
)
};
if self.is_marquee_progress() {
self.set_progress_to_range_mode();
}
let hwnd = self.get_control_handle(Control::ProgressBar);
unsafe { SendMessageW(hwnd, PBM_SETPOS, current, 0) };
// if done, close cancellation channel
if current == max {
self.cancel_sender.take();
}
}
/// Check whether progress bar is in marquee mode.
fn is_marquee_progress(&self) -> bool {
let style = unsafe {
winuser::GetWindowLongW(
self.get_control_handle(Control::ProgressBar),
winuser::GWL_STYLE,
)
} as u32;
style & commctrl::PBS_MARQUEE != 0
}
/// Set progress bar to range mode.
fn set_progress_to_range_mode(&self) {
use commctrl::*;
use winuser::*;
let hwnd = self.get_control_handle(Control::ProgressBar);
let mut style = unsafe { GetWindowLongW(hwnd, GWL_STYLE) } as u32;
style &= !PBS_MARQUEE;
style |= PBS_SMOOTH;
unsafe { SetWindowLongW(hwnd, GWL_STYLE, style as _) };
unsafe { SendMessageW(hwnd, PBM_SETMARQUEE, 0, 0) };
}
}
impl ProgressWindow {
/// Check whether window class is registered.
pub fn is_window_class_registered() -> bool {
unsafe {
let instance = libloaderapi::GetModuleHandleW(ptr::null_mut());
let mut wc: winuser::WNDCLASSEXW = mem::zeroed();
winuser::GetClassInfoExW(instance, WND_CLASS.as_ptr(), &mut wc) != 0
}
}
/// Register window class.
pub fn register_window_class() -> Result<(), Error> {
use winuser::*;
log::debug!("Registering {} window class", WND_CLASS.to_string_lossy());
let instance = unsafe { libloaderapi::GetModuleHandleW(ptr::null_mut()) };
let wc = WNDCLASSEXW {
cbSize: mem::size_of::<WNDCLASSEXW>() as u32,
style: CS_OWNDC | CS_HREDRAW | CS_VREDRAW,
hbrBackground: (COLOR_WINDOW + 1) as HBRUSH,
lpfnWndProc: Some(window_proc_wrapper::<ProgressWindow>),
hInstance: instance,
lpszClassName: WND_CLASS.as_ptr(),
hIcon: ptr::null_mut(),
hCursor: unsafe { LoadCursorW(ptr::null_mut(), IDC_ARROW) },
..unsafe { mem::zeroed() }
};
if 0 == unsafe { RegisterClassExW(&wc) } {
Err(win32::last_error())
} else {
Ok(())
}
}
/// Unregister window class.
pub fn unregister_window_class() {
log::debug!("Unregistering {} window class", WND_CLASS.to_string_lossy());
unsafe {
let instance = libloaderapi::GetModuleHandleW(ptr::null_mut());
winuser::UnregisterClassW(WND_CLASS.as_ptr(), instance);
}
}
}
trait WindowProc {
/// Window procedure callback.
///
/// If None is returned, underlying wrapper calls `DefWindowProcW`.
fn window_proc(
&mut self,
hwnd: HWND,
msg: win::UINT,
wparam: win::WPARAM,
lparam: win::LPARAM,
) -> Option<win::LRESULT>;
}
/// Window proc wrapper that manages the `&self` pointer to `ProgressWindow` object.
///
/// Must be `extern "system"` because the function is called by Windows.
extern "system" fn window_proc_wrapper<T: WindowProc>(
hwnd: HWND,
msg: win::UINT,
wparam: win::WPARAM,
lparam: win::LPARAM,
) -> win::LRESULT {
use winuser::*;
// get pointer to T from userdata
let mut ptr = unsafe { GetWindowLongPtrW(hwnd, GWLP_USERDATA) } as *mut T;
// not yet set, initialize from CREATESTRUCT
if ptr.is_null() && msg == WM_NCCREATE {
let cs = unsafe { &*(lparam as LPCREATESTRUCTW) };
ptr = cs.lpCreateParams as *mut T;
log::debug!("Initialize window pointer {:p}", ptr);
unsafe { errhandlingapi::SetLastError(0) };
if 0 == unsafe {
SetWindowLongPtrW(hwnd, GWLP_USERDATA, ptr as *const _ as basetsd::LONG_PTR)
} && unsafe { errhandlingapi::GetLastError() } != 0
{
return win::FALSE as win::LRESULT;
}
}
// call wrapped window proc
if !ptr.is_null() {
let this = unsafe { &mut *(ptr as *mut T) };
if let Some(result) = this.window_proc(hwnd, msg, wparam, lparam) {
return result;
}
}
unsafe { DefWindowProcW(hwnd, msg, wparam, lparam) }
}
impl WindowProc for ProgressWindow {
fn window_proc(
&mut self,
hwnd: HWND,
msg: win::UINT,
wparam: win::WPARAM,
lparam: win::LPARAM,
) -> Option<win::LRESULT> {
use winuser::*;
match msg {
// https://docs.microsoft.com/en-us/windows/win32/winmsg/wm-nccreate
WM_NCCREATE => {
// store main window handle
self.hwnd = hwnd;
// WM_NCCREATE must be passed to DefWindowProc
None
}
// https://docs.microsoft.com/en-us/windows/win32/winmsg/wm-create
WM_CREATE => match self.create_window_controls() {
Err(e) => {
log::error!("Failed to create window controls: {}", e);
Some(-1)
}
Ok(()) => Some(0),
},
// https://docs.microsoft.com/en-us/windows/win32/winmsg/wm-size
WM_SIZE => {
self.on_resize(
i32::from(win::LOWORD(lparam as u32)),
i32::from(win::HIWORD(lparam as u32)),
);
Some(0)
}
// https://docs.microsoft.com/en-us/windows/win32/winmsg/wm-getminmaxinfo
WM_GETMINMAXINFO => {
let mmi = unsafe { &mut *(lparam as LPMINMAXINFO) };
mmi.ptMinTrackSize.x = MIN_WINDOW_SIZE.0;
mmi.ptMinTrackSize.y = MIN_WINDOW_SIZE.1;
Some(0)
}
// https://docs.microsoft.com/en-us/windows/win32/controls/wm-ctlcolorstatic
WM_CTLCOLORSTATIC => {
Some(unsafe { wingdi::GetStockObject(COLOR_WINDOW + 1) } as win::LPARAM)
}
// https://docs.microsoft.com/en-us/windows/win32/winmsg/wm-close
WM_CLOSE => {
self.cancel();
unsafe { DestroyWindow(hwnd) };
Some(0)
}
// https://docs.microsoft.com/en-us/windows/win32/winmsg/wm-destroy
WM_DESTROY => {
unsafe { PostQuitMessage(0) };
Some(0)
}
WM_PROGRESS => {
self.update_progress(wparam, lparam as _);
Some(0)
}
_ => None,
}
}
} | random_line_split |