ThreadSchedule 1.0.0
Modern C++ thread management library
topology.hpp File Reference

Hardware topology helpers (CPU count, NUMA nodes) and affinity builders. More...

#include "scheduler_policy.hpp"
#include <cctype>
#include <thread>
#include <vector>
#include <fstream>
#include <string>
#include <unistd.h>


Classes

struct  threadschedule::CpuTopology
 Snapshot of basic CPU/NUMA topology. More...

Functions

auto threadschedule::read_topology () -> CpuTopology
 Discover basic topology. Linux: reads /sys for NUMA nodes. Windows: single node, sequential CPU indices.
auto threadschedule::affinity_for_node (int node_index, int thread_index, int threads_per_node=1) -> ThreadAffinity
 Build a ThreadAffinity for the given NUMA node.
auto threadschedule::distribute_affinities_by_numa (size_t num_threads) -> std::vector< ThreadAffinity >
 Distribute thread affinities across NUMA nodes in round-robin order.

Detailed Description

Hardware topology helpers (CPU count, NUMA nodes) and affinity builders.

Exposes lightweight discovery of CPU/NUMA topology and convenience functions to construct NUMA-aware ThreadAffinity masks. On Linux, NUMA nodes are detected via sysfs (nodeX/cpulist). On Windows, nodes default to 1 and CPUs are assigned sequentially.

Definition in file topology.hpp.
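The sysfs files mentioned above (`nodeX/cpulist`) hold CPU ranges such as `0-3,8-11`. As an illustration of that format (a sketch only; `parse_cpulist` is a hypothetical helper, not this library's parser), expanding a cpulist string into CPU indices could look like:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical helper: expand a sysfs cpulist string such as "0-3,8-11"
// into individual CPU indices. The library's actual parsing may differ.
std::vector<int> parse_cpulist(const std::string& list)
{
    std::vector<int> cpus;
    std::stringstream ss(list);
    std::string token;
    while (std::getline(ss, token, ',')) {
        auto dash = token.find('-');
        if (dash == std::string::npos) {
            cpus.push_back(std::stoi(token)); // single CPU, e.g. "5"
        } else {
            // Range "lo-hi": expand inclusively.
            int lo = std::stoi(token.substr(0, dash));
            int hi = std::stoi(token.substr(dash + 1));
            for (int c = lo; c <= hi; ++c)
                cpus.push_back(c);
        }
    }
    return cpus;
}
```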

Function Documentation

◆ affinity_for_node()

auto threadschedule::affinity_for_node (int node_index, int thread_index, int threads_per_node = 1) -> ThreadAffinity
inline

Build a ThreadAffinity for the given NUMA node.

Calls read_topology() internally on every invocation (no caching).

Parameters
node_index	NUMA node index (wraps if out of range).
thread_index	Used to select CPU(s) within the node.
threads_per_node	Number of CPUs to include per thread (default 1).

Definition at line 151 of file topology.hpp.
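The parameter semantics above (node wrapping, per-node CPU selection) can be sketched with a self-contained stand-in. `pick_cpus` and the `node_cpus` table are illustrative assumptions, not the library's internals, which may select CPUs differently:

```cpp
#include <vector>

// Sketch of the index math described above: wrap node_index into the
// valid node range, then pick threads_per_node CPUs within that node
// based on thread_index. Not the library's actual implementation.
std::vector<int> pick_cpus(const std::vector<std::vector<int>>& node_cpus,
                           int node_index, int thread_index,
                           int threads_per_node = 1)
{
    // Wrap out-of-range node indices, mirroring the documented behavior.
    const auto& cpus = node_cpus[node_index % node_cpus.size()];
    std::vector<int> out;
    for (int i = 0; i < threads_per_node; ++i) {
        // Cycle within the node so a thread's CPUs stay on its node.
        out.push_back(cpus[(thread_index * threads_per_node + i) % cpus.size()]);
    }
    return out;
}
```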

◆ distribute_affinities_by_numa()

auto threadschedule::distribute_affinities_by_numa (size_t num_threads) -> std::vector<ThreadAffinity>
inline

Distribute thread affinities across NUMA nodes in round-robin order.

Returns one ThreadAffinity per thread, cycling through NUMA nodes so that consecutive threads are spread across different nodes.

Parameters
num_threads	Number of affinity masks to generate.
Returns
Vector of num_threads ThreadAffinity objects.

Definition at line 182 of file topology.hpp.
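The round-robin cycling described above can be illustrated with a minimal sketch. `round_robin_nodes` is a hypothetical helper that returns only node indices rather than full ThreadAffinity masks:

```cpp
#include <cstddef>
#include <vector>

// Sketch of the documented round-robin spread: consecutive threads
// land on different NUMA nodes by cycling through the node count.
std::vector<int> round_robin_nodes(std::size_t num_threads,
                                   std::size_t num_nodes)
{
    std::vector<int> nodes;
    nodes.reserve(num_threads);
    for (std::size_t t = 0; t < num_threads; ++t)
        nodes.push_back(static_cast<int>(t % num_nodes)); // cycle nodes
    return nodes;
}
```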

◆ read_topology()

auto threadschedule::read_topology () -> CpuTopology
inline

Discover basic topology. Linux: reads /sys for NUMA nodes. Windows: single node, sequential CPU indices.

Called frequently by chaos/affinity helpers. The result is not cached internally; if the performance of repeated calls matters, cache the returned CpuTopology yourself.

Definition at line 53 of file topology.hpp.
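One common caller-side caching pattern is a function-local static, which computes the snapshot once per process. This is a sketch of that pattern only; `CachedTopology` and `cached_topology` are stand-ins, not this library's `CpuTopology` or API:

```cpp
#include <thread>

// Stand-in for a topology snapshot; the real CpuTopology holds more.
struct CachedTopology {
    unsigned cpu_count;
};

// Caller-side memoization: the static local is initialized exactly once
// (thread-safely since C++11), so the discovery cost is paid only on
// the first call.
const CachedTopology& cached_topology()
{
    static const CachedTopology topo{std::thread::hardware_concurrency()};
    return topo;
}
```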