ThreadSchedule 1.0.0
Modern C++ thread management library
threadschedule::FastThreadPool Class Reference

Single-queue thread pool with optimized locking for medium workloads. More...

#include <thread_pool.hpp>

Classes

struct  Statistics

Public Types

using Task = std::function<void()>

Public Member Functions

 FastThreadPool (size_t num_threads=std::thread::hardware_concurrency())
 FastThreadPool (FastThreadPool const &)=delete
auto operator= (FastThreadPool const &) -> FastThreadPool &=delete
template<typename F, typename... Args>
auto submit (F &&f, Args &&... args) -> std::future< std::invoke_result_t< F, Args... > >
 Optimized task submission with minimal locking.
template<typename Iterator>
auto submit_batch (Iterator begin, Iterator end) -> std::vector< std::future< void > >
 Efficient batch processing.
void shutdown ()
auto configure_threads (std::string const &name_prefix, SchedulingPolicy policy=SchedulingPolicy::OTHER, ThreadPriority priority=ThreadPriority::normal()) -> bool
auto set_affinity (ThreadAffinity const &affinity) -> bool
auto distribute_across_cpus () -> bool
auto size () const noexcept -> size_t
auto pending_tasks () const -> size_t
void wait_for_tasks ()
auto get_statistics () const -> Statistics

Detailed Description

Single-queue thread pool with optimized locking for medium workloads.

Alternative to HighPerformancePool for cases where work-stealing overhead is not justified. All tasks share one std::queue protected by a single mutex, which keeps per-task overhead low while still scaling to multiple workers.

Best for: medium workloads (100-10k tasks) with consistent task patterns, where work-stealing complexity is not needed but better performance than the basic ThreadPool is desired.

How task execution works
When you call submit(), the callable is wrapped in a std::packaged_task, pushed into the single shared task queue under a mutex lock, and one sleeping worker is woken via condition_variable::notify_one(). The woken worker pops the front element from the queue and executes it. If the queue is empty when a worker wakes up, it goes back to sleep with a 10 ms timeout before checking again.
Execution guarantees
  • Every successfully submitted task (submit() returned without throwing) is guaranteed to eventually execute, as long as the pool is not destroyed while shutdown() is draining remaining work.
  • submit() throws std::runtime_error if the pool is already shutting down. In that case the task is NOT enqueued and will NOT execute.
  • Tasks are stored in a FIFO queue, so they are picked up roughly in submission order. However, since multiple workers pop concurrently, the actual completion order is non-deterministic.
  • The returned std::future becomes ready once the task finishes. If the task threw an exception, future.get() rethrows it. The worker thread itself is not affected and continues processing further tasks.
  • On shutdown(), workers finish their current task, then drain all remaining queued tasks before exiting. Tasks submitted before shutdown() are guaranteed to execute.
Thread safety
submit() and submit_batch() may be called from any thread concurrently. shutdown() is internally guarded and safe to call more than once.
Polling / wake-up
Workers use condition_variable::wait_for with a 10 ms timeout, so an idle worker may take up to 10 ms to notice the stop flag after shutdown() is called.
Exception handling
Exceptions thrown by tasks are caught inside the worker loop. They are stored in the std::future returned by submit(). The worker thread continues processing.
Configuration return type
configure_threads() and set_affinity() return bool (not expected<void, std::error_code> as in HighPerformancePool). A return value of false means at least one worker could not be configured.
Lifetime
The destructor calls shutdown() and joins all worker threads; it can block if tasks are still running.
Copyability / movability
Not copyable, not movable.

Definition at line 747 of file thread_pool.hpp.

Member Typedef Documentation

◆ Task

using threadschedule::FastThreadPool::Task = std::function<void()>

Definition at line 750 of file thread_pool.hpp.

Constructor & Destructor Documentation

◆ FastThreadPool()

threadschedule::FastThreadPool::FastThreadPool ( size_t num_threads = std::thread::hardware_concurrency())
inline explicit

Definition at line 762 of file thread_pool.hpp.

◆ ~FastThreadPool()

threadschedule::FastThreadPool::~FastThreadPool ( )
inline

Definition at line 777 of file thread_pool.hpp.

Member Function Documentation

◆ configure_threads()

auto threadschedule::FastThreadPool::configure_threads ( std::string const & name_prefix,
SchedulingPolicy policy = SchedulingPolicy::OTHER,
ThreadPriority priority = ThreadPriority::normal() ) -> bool
inline

Definition at line 860 of file thread_pool.hpp.

◆ distribute_across_cpus()

auto threadschedule::FastThreadPool::distribute_across_cpus ( ) -> bool
inline

Definition at line 898 of file thread_pool.hpp.

◆ get_statistics()

auto threadschedule::FastThreadPool::get_statistics ( ) const -> Statistics
inline nodiscard

Definition at line 936 of file thread_pool.hpp.

◆ pending_tasks()

auto threadschedule::FastThreadPool::pending_tasks ( ) const -> size_t
inline nodiscard

Definition at line 923 of file thread_pool.hpp.

◆ set_affinity()

auto threadschedule::FastThreadPool::set_affinity ( ThreadAffinity const & affinity) -> bool
inline

Definition at line 883 of file thread_pool.hpp.

◆ shutdown()

void threadschedule::FastThreadPool::shutdown ( )
inline

Definition at line 838 of file thread_pool.hpp.

◆ size()

auto threadschedule::FastThreadPool::size ( ) const -> size_t
inline nodiscard noexcept

Definition at line 918 of file thread_pool.hpp.

◆ submit()

template<typename F, typename... Args>
auto threadschedule::FastThreadPool::submit ( F && f,
Args &&... args ) -> std::future<std::invoke_result_t<F, Args...>>
inline

Optimized task submission with minimal locking.

Definition at line 786 of file thread_pool.hpp.

◆ submit_batch()

template<typename Iterator>
auto threadschedule::FastThreadPool::submit_batch ( Iterator begin,
Iterator end ) -> std::vector<std::future<void>>
inline

Efficient batch processing.

Definition at line 812 of file thread_pool.hpp.

◆ wait_for_tasks()

void threadschedule::FastThreadPool::wait_for_tasks ( )
inline

Definition at line 929 of file thread_pool.hpp.


The documentation for this class was generated from the following file: