| Member | Description |
|--------|-------------|
| `HighPerformancePool(size_t num_threads = std::thread::hardware_concurrency())` | |
| `HighPerformancePool(HighPerformancePool const&) = delete` | |
| `auto operator=(HighPerformancePool const&) -> HighPerformancePool& = delete` | |
| `template<typename F, typename... Args> auto submit(F&& f, Args&&... args) -> std::future<std::invoke_result_t<F, Args...>>` | High-performance task submission (optimized hot path). |
| `template<typename Iterator> auto submit_batch(Iterator begin, Iterator end) -> std::vector<std::future<void>>` | Batch task submission for maximum throughput. |
| `template<typename Iterator, typename F> void parallel_for_each(Iterator begin, Iterator end, F&& func)` | Optimized parallel for_each with work distribution. |
| `auto size() const noexcept -> size_t` | |
| `auto pending_tasks() const -> size_t` | |
| `auto configure_threads(std::string const& name_prefix, SchedulingPolicy policy = SchedulingPolicy::OTHER, ThreadPriority priority = ThreadPriority::normal()) -> expected<void, std::error_code>` | Configure all worker threads. |
| `auto set_affinity(ThreadAffinity const& affinity) -> expected<void, std::error_code>` | |
| `auto distribute_across_cpus() -> expected<void, std::error_code>` | |
| `void wait_for_tasks()` | |
| `void shutdown()` | |
| `auto get_statistics() const -> Statistics` | Get detailed performance statistics. |
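The sketch below ties the members above together for basic use: construct a pool, submit a task, read its future, and shut down. Only the signatures in the table are taken from this page; the include path is an assumption (the page only states the class is defined in thread_pool.hpp), the class is used unqualified, and `expected<void, std::error_code>` is assumed to behave like `std::expected`.

```cpp
// Minimal usage sketch. The include path is an assumption; only the member
// signatures in the table above are taken from the documentation.
#include <cstdio>
#include <future>

#include "thread_pool.hpp"  // assumed include path (class is defined in thread_pool.hpp)

int main() {
    // Defaults to std::thread::hardware_concurrency() worker threads.
    HighPerformancePool pool;

    // submit() returns a std::future for the task's result (hot-path submission).
    std::future<int> sum = pool.submit([](int a, int b) { return a + b; }, 2, 3);
    std::printf("2 + 3 = %d\n", sum.get());

    // Optional: name the worker threads. expected<void, std::error_code> is
    // assumed to be truthy on success, like std::expected.
    if (auto ok = pool.configure_threads("worker"); !ok) {
        std::printf("configure_threads failed: %s\n", ok.error().message().c_str());
    }

    pool.wait_for_tasks();  // block until all queued tasks have finished
    pool.shutdown();        // stop the workers explicitly
    return 0;
}
```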
|
High-performance thread pool optimized for high-frequency task submission.
Optimized for workloads of 1k+ tasks at 10k+ tasks/second throughput:
- Work-stealing architecture with proper synchronization
- Per-thread queues with efficient load balancing
- Batch processing support for maximum throughput
- Optimized wake-up mechanisms
- Cache-friendly data structures with proper alignment
- Performance monitoring and statistics
Note: the work-stealing machinery adds overhead for small task counts (fewer than roughly 100 tasks), so the pool is best suited to high-throughput scenarios such as image processing and batch operations.
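A hedged sketch of that high-throughput path follows: batch submission, parallel iteration, and a statistics query. The include path, the assumption that `submit_batch` accepts a range of no-argument callables (inferred from its `std::future<void>` return type), and `process_image_tile` are illustrative assumptions; the members of `Statistics` are not listed on this page.

```cpp
// High-throughput sketch: batch submission plus parallel_for_each. The include
// path and the submit_batch value type are assumptions; process_image_tile is a
// hypothetical per-item work function.
#include <cstddef>
#include <functional>
#include <future>
#include <numeric>
#include <vector>

#include "thread_pool.hpp"  // assumed include path

void process_image_tile(std::size_t /*tile_index*/) { /* hypothetical per-tile work */ }

int main() {
    HighPerformancePool pool;

    // submit_batch(): enqueue many tasks in one call. The iterator's value type is
    // assumed to be a no-argument callable, since each task maps to std::future<void>.
    std::vector<std::function<void()>> jobs;
    for (std::size_t i = 0; i < 1000; ++i) {
        jobs.push_back([i] { process_image_tile(i); });
    }
    std::vector<std::future<void>> done = pool.submit_batch(jobs.begin(), jobs.end());
    for (auto& f : done) f.get();

    // parallel_for_each(): one callable distributed over a range of elements.
    std::vector<std::size_t> tiles(1000);
    std::iota(tiles.begin(), tiles.end(), std::size_t{0});
    pool.parallel_for_each(tiles.begin(), tiles.end(),
                           [](std::size_t i) { process_image_tile(i); });

    // get_statistics(): the Statistics members are not documented on this page.
    auto stats = pool.get_statistics();
    (void)stats;

    pool.shutdown();
    return 0;
}
```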
Definition at line 149 of file thread_pool.hpp.