Struct simple_parallel::pool::Pool
pub struct Pool { // some fields omitted }
A thread pool.
This pool allows one to spawn several threads in one go, and then execute any number of "short-lifetime" jobs on those threads, without having to pay the thread spawning cost, or risk exhausting system resources.
The pool currently consists of some number of worker threads (dynamic, chosen at creation time) along with a single supervisor thread. The synchronisation overhead is currently very large.
"Short-lifetime"?
Jobs submitted to this pool can have any lifetime at all: the closures passed in (and the elements of any iterators used, etc.) can hold borrows pointing into arbitrary stack frames, even stack frames that don't outlive the pool itself. This differs from something like scoped_threadpool, where the jobs must outlive the pool.
This extra flexibility is achieved with careful unsafe code, by exposing an API that is a generalised version of crossbeam's Scope::spawn and the old std::thread::scoped: at the lowest level, a submitted job returns a JobHandle token that ensures the job is finished before any data the job might reference is invalidated (i.e. it manages the lifetimes). Higher-level functions will usually wrap or otherwise hide the handle.
However, this comes at a cost: for ease of implementation, Pool currently only exposes "batch" jobs like for_ and map, and these jobs take control of the whole pool. That is, one cannot easily submit arbitrary closures incrementally to execute on this thread pool, which is functionality that threadpool::ScopedPool offers.
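The scoped-borrowing guarantee described above is the same idea that the standard library later stabilised as std::thread::scope (Rust 1.63+). As a minimal, self-contained sketch of that guarantee, without using simple_parallel at all: the spawned jobs below mutably borrow an array in the enclosing stack frame, and the scope ensures they finish before that borrow ends.

```rust
use std::thread;

// Fill an array in parallel. The worker closures borrow `v` from this
// function's stack frame; `thread::scope` joins both workers before
// returning, so the borrows provably end before `v` is invalidated.
fn fill_parallel() -> [u32; 8] {
    let mut v = [0u32; 8];
    thread::scope(|scope| {
        // split the array and hand each half to a worker that mutates it in place
        let (left, right) = v.split_at_mut(4);
        scope.spawn(move || for x in left { *x = 3 });
        scope.spawn(move || for x in right { *x = 3 });
    }); // all borrows of `v` end here
    v
}

fn main() {
    assert_eq!(fill_parallel(), [3; 8]);
    println!("{:?}", fill_parallel());
}
```

Pool's map and for_ provide the same kind of guarantee via the JobHandle mechanism, as the crate's own example below shows.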
Example
extern crate crossbeam;
extern crate simple_parallel;

use simple_parallel::Pool;

// a function that takes some arbitrary pool and uses the pool to
// manipulate data in its own stack frame.
fn do_work(pool: &mut Pool) {
    let mut v = [0; 8];
    // set each element, in parallel
    pool.for_(&mut v, |element| *element = 3);

    let w = [2, 0, 1, 5, 0, 3, 0, 3];

    // add the two arrays, in parallel
    let z: Vec<_> = crossbeam::scope(|scope| {
        pool.map(scope, v.iter().zip(w.iter()), |(x, y)| *x + *y).collect()
    });

    assert_eq!(z, &[5, 3, 4, 8, 3, 6, 3, 6]);
}

let mut pool = Pool::new(4);
do_work(&mut pool);
Methods
impl Pool
fn new(n_threads: usize) -> Pool
Create a new thread pool with n_threads worker threads.
fn for_<Iter: IntoIterator, F>(&mut self, iter: Iter, f: F) where Iter::Item: Send, Iter: Send, F: Fn(Iter::Item) + Sync
Execute f on each element of iter.
This panics if f panics, although the precise time and number of elements consumed after the element that panics is not specified.
Examples
use simple_parallel::Pool;

let mut pool = Pool::new(4);

let mut v = [0; 8];
// set each element, in parallel
pool.for_(&mut v, |element| *element = 3);

assert_eq!(v, [3; 8]);
fn unordered_map<'pool, 'a, I: IntoIterator, F, T>(&'pool mut self, scope: &Scope<'a>, iter: I, f: F) -> UnorderedParMap<'pool, 'a, T> where I: 'a + Send, I::Item: Send + 'a, F: 'a + Sync + Send + Fn(I::Item) -> T, T: Send + 'a
Execute f on each element in iter in parallel across the pool's threads, with unspecified yield order.
This behaves like map, but does not make efforts to ensure that the elements are returned in the order of iter, hence this is cheaper.
The iterator yields (usize, T) tuples, where the usize is the index of the element in the original iterator.
Examples
extern crate crossbeam;
extern crate simple_parallel;

use simple_parallel::Pool;

let mut pool = Pool::new(4);

// adjust each element in parallel, and iterate over them as
// they are generated (or as close to that as possible)
crossbeam::scope(|scope| {
    for (index, output) in pool.unordered_map(scope, 0..8, |i| i + 10) {
        // each element is exactly 10 more than its original index
        assert_eq!(output, index as i32 + 10);
    }
})
fn map<'pool, 'a, I: IntoIterator, F, T>(&'pool mut self, scope: &Scope<'a>, iter: I, f: F) -> ParMap<'pool, 'a, T> where I: 'a + Send, I::Item: Send + 'a, F: 'a + Send + Sync + Fn(I::Item) -> T, T: Send + 'a
Execute f on iter in parallel across the pool's threads, returning an iterator that yields the results in the order of the elements of iter to which they correspond.
This is a drop-in replacement for iter.map(f) that runs in parallel, and consumes iter as the pool's threads complete their previous tasks.
See unordered_map if the output order is unimportant.
Examples
extern crate crossbeam;
extern crate simple_parallel;

use simple_parallel::Pool;

let mut pool = Pool::new(4);

// create a vector by adjusting 0..8, in parallel
let elements: Vec<_> = crossbeam::scope(|scope| {
    pool.map(scope, 0..8, |i| i + 10).collect()
});

assert_eq!(elements, &[10, 11, 12, 13, 14, 15, 16, 17]);
impl Pool
Low-level/internal functionality.
unsafe fn execute<'pool, 'f, A, GenFn, WorkerFn, MainFn>(&'pool mut self, scope: &Scope<'f>, data: A, gen_fn: GenFn, main_fn: MainFn) -> JobHandle<'pool, 'f> where A: 'f + Send, GenFn: 'f + FnMut(&mut A) -> WorkerFn + Send, WorkerFn: 'f + FnMut(WorkerId) + Send, MainFn: 'f + FnOnce(A) + Send
Run a job on the thread pool.
gen_fn is called self.n_threads times to create the functions to execute on the worker threads. Each of these is immediately called exactly once on a worker thread (that is, they are semantically FnOnce), and main_fn is also called, on the supervisor thread. It is expected that the workers and main_fn will manage any internal coordination required to distribute chunks of work.
The job must take pains to ensure that main_fn doesn't quit before the workers do.
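To make the protocol concrete, here is a hypothetical std-only model of it, not simple_parallel's actual internals: a gen_fn-style closure mints one job per worker, each job runs exactly once on its thread and pulls work from a shared queue, and the supervisor role collects results and, by construction, cannot return before every worker has finished. The names (run_batch, the Vec-backed queue, doubling as the "work") are all illustrative; std::thread::scope (Rust 1.63+) stands in for the pool's own thread management.

```rust
use std::sync::mpsc::{channel, Sender};
use std::sync::{Arc, Mutex};
use std::thread;

// Run a batch job across `n_threads` workers, doubling every item and
// summing the results on the supervisor side.
fn run_batch(n_threads: usize, items: Vec<u32>) -> u32 {
    // shared queue of pending work, pulled from by every worker
    let queue = Arc::new(Mutex::new(items));
    let (tx, rx) = channel();

    // "gen_fn": mint one job closure per worker; each job is run exactly
    // once on its worker thread (semantically FnOnce, as the docs note).
    let gen_fn = |_worker_id: usize, queue: Arc<Mutex<Vec<u32>>>, tx: Sender<u32>| {
        // a real pool would hand the job a WorkerId; unused in this sketch
        move || loop {
            let item = queue.lock().unwrap().pop();
            match item {
                Some(x) => tx.send(x * 2).unwrap(), // the "job": double it
                None => break,                      // queue drained; worker retires
            }
        }
    };

    thread::scope(|scope| {
        for id in 0..n_threads {
            scope.spawn(gen_fn(id, Arc::clone(&queue), tx.clone()));
        }
        // "main_fn": the supervisor side. Dropping its own Sender means the
        // Receiver only disconnects once every worker's clone is gone, so
        // this sum cannot complete before the workers finish sending.
        drop(tx);
        rx.iter().sum()
    })
}

fn main() {
    assert_eq!(run_batch(4, (1..=8).collect()), 72); // 2 * (1 + 2 + ... + 8)
}
```

The last rule in the docs, that main_fn must not quit before the workers do, is what the channel disconnection enforces here: the supervisor's result iterator only ends once all worker-held Senders have been dropped.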