ThreadSchedule 1.0.0
Modern C++ thread management library
threadschedule::ThreadPool Class Reference

Simple, general-purpose thread pool.

#include <thread_pool.hpp>

Classes

struct  Statistics

Public Types

using Task = std::function<void()>

Public Member Functions

 ThreadPool (size_t num_threads=std::thread::hardware_concurrency())
 ThreadPool (ThreadPool const &)=delete
auto operator= (ThreadPool const &) -> ThreadPool &=delete
template<typename F, typename... Args>
auto submit (F &&f, Args &&... args) -> std::future< std::invoke_result_t< F, Args... > >
 Submit a task to the thread pool.
template<typename Iterator>
auto submit_range (Iterator begin, Iterator end) -> std::vector< std::future< void > >
 Submit multiple tasks.
template<typename Iterator, typename F>
void parallel_for_each (Iterator begin, Iterator end, F &&func)
 Apply a function to a range of values in parallel.
auto size () const noexcept -> size_t
auto pending_tasks () const -> size_t
auto configure_threads (std::string const &name_prefix, SchedulingPolicy policy=SchedulingPolicy::OTHER, ThreadPriority priority=ThreadPriority::normal()) -> bool
 Configure thread properties.
auto set_affinity (ThreadAffinity const &affinity) -> bool
auto distribute_across_cpus () -> bool
void wait_for_tasks ()
void shutdown ()
auto get_statistics () const -> Statistics

Detailed Description

Simple, general-purpose thread pool.

This is a straightforward thread pool implementation suitable for:

  • Simple workloads with low task counts (< 1k tasks)
  • General application use (50k-500k tasks/second)
  • Simple task submission patterns
  • Lower memory overhead and complexity
  • Easier to understand and debug

For high-throughput scenarios (> 1k tasks), consider FastThreadPool or HighPerformancePool.

How task execution works
When you call submit(), the callable is wrapped in a std::packaged_task and pushed into a single shared std::queue under a mutex lock. One sleeping worker is then woken via condition_variable::notify_one(). The woken worker pops the front task from the queue and executes it. Workers block indefinitely on the condition_variable when the queue is empty (no polling timeout), so they consume zero CPU while idle.
Execution guarantees
  • Every successfully submitted task (submit() returned without throwing) is guaranteed to eventually execute.
  • submit() throws std::runtime_error if the pool is already shutting down. In that case the task is NOT enqueued.
  • Tasks are stored in a FIFO queue. Multiple workers pop concurrently, so submission order is roughly preserved but completion order is non-deterministic.
  • The returned std::future becomes ready once the task finishes. If the task threw an exception, future.get() rethrows it.
  • On shutdown(), the stop flag is set and all workers are woken. Each worker keeps draining queued tasks and exits only once the queue is empty, so every task enqueued before shutdown() is guaranteed to execute.
  • wait_for_tasks() blocks until the queue is empty AND no worker is currently executing a task.
Thread safety
submit() may be called from any thread concurrently. All task-queue access is serialized through queue_mutex_.
Wake-up behaviour
Workers block on a std::condition_variable (no polling timeout), so they consume no CPU while idle but wake instantly when a task is enqueued.
Internal counter note
Unlike FastThreadPool and HighPerformancePool, active_tasks_ and completed_tasks_ are incremented/decremented while queue_mutex_ is held. This means they are always consistent with the queue size, but every task completion acquires the mutex an extra time.
Exception handling
Exceptions thrown by tasks are caught inside the worker loop. They are stored in the std::future returned by submit(). The worker thread continues processing.
Lifetime
The destructor calls shutdown() and joins all worker threads. It can therefore block if tasks are still running.
Copyability / movability
Not copyable, not movable.

Definition at line 1105 of file thread_pool.hpp.

Member Typedef Documentation

◆ Task

using threadschedule::ThreadPool::Task = std::function<void()>

Definition at line 1108 of file thread_pool.hpp.

Constructor & Destructor Documentation

◆ ThreadPool()

threadschedule::ThreadPool::ThreadPool ( size_t num_threads = std::thread::hardware_concurrency())
inline explicit

Definition at line 1118 of file thread_pool.hpp.

◆ ~ThreadPool()

threadschedule::ThreadPool::~ThreadPool ( )
inline

Definition at line 1133 of file thread_pool.hpp.

Member Function Documentation

◆ configure_threads()

auto threadschedule::ThreadPool::configure_threads ( std::string const & name_prefix,
SchedulingPolicy policy = SchedulingPolicy::OTHER,
ThreadPriority priority = ThreadPriority::normal() ) -> bool
inline

Configure thread properties.

Definition at line 1218 of file thread_pool.hpp.

◆ distribute_across_cpus()

auto threadschedule::ThreadPool::distribute_across_cpus ( ) -> bool
inline

Definition at line 1256 of file thread_pool.hpp.

◆ get_statistics()

auto threadschedule::ThreadPool::get_statistics ( ) const -> Statistics
inline nodiscard

Definition at line 1304 of file thread_pool.hpp.

◆ parallel_for_each()

template<typename Iterator, typename F>
void threadschedule::ThreadPool::parallel_for_each ( Iterator begin,
Iterator end,
F && func )
inline

Apply a function to a range of values in parallel.

Definition at line 1187 of file thread_pool.hpp.

References submit().

◆ pending_tasks()

auto threadschedule::ThreadPool::pending_tasks ( ) const -> size_t
inline nodiscard

Definition at line 1209 of file thread_pool.hpp.

◆ set_affinity()

auto threadschedule::ThreadPool::set_affinity ( ThreadAffinity const & affinity) -> bool
inline

Definition at line 1241 of file thread_pool.hpp.

◆ shutdown()

void threadschedule::ThreadPool::shutdown ( )
inline

Definition at line 1282 of file thread_pool.hpp.

◆ size()

auto threadschedule::ThreadPool::size ( ) const -> size_t
inline nodiscard noexcept

Definition at line 1204 of file thread_pool.hpp.

◆ submit()

template<typename F, typename... Args>
auto threadschedule::ThreadPool::submit ( F && f,
Args &&... args ) -> std::future<std::invoke_result_t<F, Args...>>
inline

Submit a task to the thread pool.

Definition at line 1142 of file thread_pool.hpp.

Referenced by parallel_for_each(), and submit_range().

◆ submit_range()

template<typename Iterator>
auto threadschedule::ThreadPool::submit_range ( Iterator begin,
Iterator end ) -> std::vector<std::future<void>>
inline

Submit multiple tasks.

Definition at line 1170 of file thread_pool.hpp.

References submit().

◆ wait_for_tasks()

void threadschedule::ThreadPool::wait_for_tasks ( )
inline

Definition at line 1276 of file thread_pool.hpp.


The documentation for this class was generated from the following file: