Scheduler#
Aidge introduces a well-defined consumer-producer (C-P) model for operator implementations, similar to transaction-level modeling (TLM) in electronic design. A generic, default implementation is provided as well. The C-P model can be specified as precise amounts of data or as an arbitrary data quantity (token), for each operator and dynamically at each execution step. The C-P model execution path is decoupled from the data execution path, which allows the graph execution to be statically scheduled without the actual operator implementations.
Thanks to Aidge’s C-P model, arbitrary complex cyclic and acyclic dataflow graphs can be statically scheduled. Generic sequential and parallel schedulers are available, and a custom scheduler can be built using static scheduling data (logical early and late execution steps and associated dependencies for each scheduled node).
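The early and late logical steps produced by static scheduling can be illustrated with a small self-contained sketch in plain Python (a toy dependency graph with hypothetical node names, not the Aidge API): nodes whose earliest and latest steps coincide form the critical path, while the others have scheduling slack.

```python
# ASAP/ALAP logical-step computation on a toy dependency graph,
# illustrating the early/late steps assigned by static scheduling
# (hypothetical nodes, not the Aidge API).
from graphlib import TopologicalSorter

# node -> set of predecessors it depends on
deps = {
    "a": set(),
    "b": set(),
    "c": {"a"},
    "out": {"c", "b"},
}
succs = {n: set() for n in deps}
for n, ps in deps.items():
    for p in ps:
        succs[p].add(n)

order = list(TopologicalSorter(deps).static_order())

# ASAP (early): a node runs one step after its latest predecessor
early = {}
for n in order:
    early[n] = max((early[p] + 1 for p in deps[n]), default=0)

# ALAP (late): a node may run up to one step before its earliest successor
last_step = max(early.values())
late = {}
for n in reversed(order):
    late[n] = min((late[s] - 1 for s in succs[n]), default=last_step)

# Nodes with early == late are on the critical path; the rest have slack
critical = sorted(n for n in order if early[n] == late[n])
```

Here "b" may run at step 0 or 1 (it has slack and could execute in parallel with "a" or "c"), while "a", "c" and "out" are on the critical path.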
Scheduler base class#
This is the base class for scheduling in Aidge. It can generate static scheduling for cyclic and acyclic graphs, including logical early and late execution steps and associated dependencies for each scheduled node.
- class aidge_core.Scheduler#
- __init__(self: aidge_core.aidge_core.Scheduler, graph_view: aidge_core.aidge_core.GraphView, reset_cp_model: bool = True) None #
- clear_scheduling(self: aidge_core.aidge_core.Scheduler) None #
- generate_scheduling(self: aidge_core.aidge_core.Scheduler) None #
- get_backward_scheduling(self: aidge_core.aidge_core.Scheduler) list[aidge_core.aidge_core.SchedulingElement] #
- get_forward_scheduling(self: aidge_core.aidge_core.Scheduler) list[aidge_core.aidge_core.SchedulingElement] #
- get_sequential_static_scheduling(self: aidge_core.aidge_core.Scheduler, step: int = 0, sorting: aidge_core.aidge_core.SchedulingPolicy = <SchedulingPolicy.Default: 0>) list[aidge_core.aidge_core.Node] #
- get_static_scheduling(self: aidge_core.aidge_core.Scheduler, step: int = 0) list[aidge_core.aidge_core.StaticSchedulingElement] #
- graph_view(*args, **kwargs)#
graph_view(self: aidge_core.aidge_core.Scheduler) -> aidge_core.aidge_core.GraphView
- reset_scheduling(self: aidge_core.aidge_core.Scheduler) None #
- save_factorized_static_scheduling_diagram(self: aidge_core.aidge_core.Scheduler, file_name: str, ignore_producers: bool = False, min_repeat: int = 2) None #
- save_scheduling_diagram(self: aidge_core.aidge_core.Scheduler, file_name: str, ignore_producers: bool = False) None #
- save_static_scheduling_diagram(self: aidge_core.aidge_core.Scheduler, file_name: str, ignore_producers: bool = False) None #
- tag_conditional_nodes(self: aidge_core.aidge_core.Scheduler) None #
-
class Scheduler#
Generate and manage the execution schedule order of nodes in a graph. It provides functionality for static scheduling, memory management, and visualization of the scheduling process.
Key features:
Static scheduling generation with early and late execution times
Memory layout generation for scheduled nodes
Input tensor connection to graph nodes
Scheduling visualization through diagram generation
See also
MemoryManager
Subclassed by Aidge::ParallelScheduler, Aidge::SequentialScheduler
Public Functions
-
Scheduler() = delete#
The default constructor is deleted: a Scheduler must be constructed from a GraphView.
-
virtual ~Scheduler()#
-
void tagConditionalNodes() const#
Add the schedule.cond attribute to conditional nodes. The schedule.cond attribute is a std::set<std::pair<NodePtr, size_t>>, where the first element is the Select node and the second element is the Select input index (starting from 0, ignoring the condition input).
-
void tagForkBranches() const#
Add the schedule.branch attribute to nodes belonging to fork branches.
-
inline std::vector<StaticSchedulingElement*> getStaticScheduling(std::size_t step = 0) const#
Get the static scheduling (after calling generateScheduling()).
- Returns:
Vector of StaticSchedulingElement pointers.
-
std::vector<std::shared_ptr<Node>> getSequentialStaticScheduling(std::size_t step = 0, SchedulingPolicy policy = SchedulingPolicy::Default) const#
Get the static scheduling sequential order of nodes.
- Parameters:
step – The step of the static schedule to retrieve (default is 0).
policy – Sorting mode.
- Returns:
Vector of shared pointers to Nodes in their scheduled order.
-
inline std::vector<SchedulingElement> getForwardScheduling() const#
Get the dynamic scheduling for the forward pass (after graph execution).
- Returns:
Vector of SchedulingElement.
-
inline std::vector<SchedulingElement> getBackwardScheduling() const#
Get the dynamic scheduling for the backward pass (after graph execution).
- Returns:
Vector of SchedulingElement.
-
inline std::shared_ptr<GraphView> graphView() const noexcept#
Get the GraphView associated with this Scheduler.
- Returns:
Shared pointer to the GraphView.
-
void generateScheduling()#
Generate full static scheduling of the GraphView. For each node, an earliest and latest possible execution logical step is specified. Nodes that may be scheduled at the same logical step have no data dependency and can be run in parallel.
-
void resetScheduling()#
Reset all scheduling and the associated nodes' consumer-producer states.
-
void clearScheduling()#
Clear only the dynamic scheduling obtained during execution.
-
void connectInputs(const std::vector<std::shared_ptr<Tensor>> &data)#
Connect input tensors to the data inputs of the GraphView. In case of multiple data input tensors, they are mapped to producers in the order given by the graph.
- Parameters:
data – data input tensors
-
void saveStaticSchedulingDiagram(const std::string &fileName, bool ignoreProducers = false) const#
Save the static scheduling diagram, with early and late relative order of execution for the nodes, to a file in Mermaid format.
- Parameters:
fileName – Name of the file to save the diagram (without extension).
ignoreProducers – If true, exclude Producer nodes from the diagram.
-
void saveFactorizedStaticSchedulingDiagram(const std::string &fileName, bool ignoreProducers = false, size_t minRepeat = 2) const#
Save the static scheduling diagram in factorized form, where scheduling patterns repeated at least minRepeat times are grouped, to a file in Mermaid format.
-
void saveSchedulingDiagram(const std::string &fileName, bool ignoreProducers = false) const#
Save the order of layer execution to a file in Mermaid format.
- Parameters:
fileName – Name of the generated file.
ignoreProducers – If true, exclude Producer nodes from the diagram.
-
class ExecTime#
Public Functions
-
void update(const std::vector<SchedulingElement> &scheduling)#
-
inline std::map<std::shared_ptr<Node>, NodeExecTime> get() const#
-
struct NodeExecTime#
-
struct PriorProducersConsumers#
Manages producer-consumer relationships for nodes.
Public Functions
-
PriorProducersConsumers()#
-
PriorProducersConsumers(const PriorProducersConsumers&)#
-
~PriorProducersConsumers() noexcept#
-
struct SchedulingElement#
Represents a Node with its actual execution times. Start and end times are stored for later display.
-
struct StaticSchedulingElement#
Represents a node in the static schedule.
Public Functions
Public Members
-
std::size_t early#
Earliest possible execution time
-
std::size_t late#
Latest possible execution time
-
std::vector<StaticSchedulingElement*> earlierThan#
Nodes that must be executed earlier
-
std::vector<StaticSchedulingElement*> laterThan#
Nodes that must be executed later
Sequential scheduler#
- class aidge_core.SequentialScheduler#
- __init__(self: aidge_core.aidge_core.SequentialScheduler, graph_view: aidge_core.aidge_core.GraphView, reset_cp_model: bool = True) None #
- backward(self: aidge_core.aidge_core.SequentialScheduler) None #
- clear_scheduling(self: aidge_core.aidge_core.Scheduler) None #
- forward(self: aidge_core.aidge_core.SequentialScheduler, forward_dims: bool = True, data: list[aidge_core.aidge_core.Tensor] = []) None #
- generate_memory(self: aidge_core.aidge_core.SequentialScheduler, inc_producers: bool = False, wrap_around_buffer: bool = False) Aidge::MemoryManager #
- generate_memory_auto_concat(self: aidge_core.aidge_core.SequentialScheduler, inc_producers: bool = False, wrap_around_buffer: bool = False) Aidge::MemoryManager #
- generate_scheduling(self: aidge_core.aidge_core.Scheduler) None #
- get_backward_scheduling(self: aidge_core.aidge_core.Scheduler) list[aidge_core.aidge_core.SchedulingElement] #
- get_forward_scheduling(self: aidge_core.aidge_core.Scheduler) list[aidge_core.aidge_core.SchedulingElement] #
- get_sequential_static_scheduling(self: aidge_core.aidge_core.SequentialScheduler, step: int = 0) list[aidge_core.aidge_core.Node] #
- get_static_scheduling(self: aidge_core.aidge_core.Scheduler, step: int = 0) list[aidge_core.aidge_core.StaticSchedulingElement] #
- graph_view(*args, **kwargs)#
graph_view(self: aidge_core.aidge_core.Scheduler) -> aidge_core.aidge_core.GraphView
- reset_scheduling(self: aidge_core.aidge_core.Scheduler) None #
- save_factorized_static_scheduling_diagram(self: aidge_core.aidge_core.Scheduler, file_name: str, ignore_producers: bool = False, min_repeat: int = 2) None #
- save_scheduling_diagram(self: aidge_core.aidge_core.Scheduler, file_name: str, ignore_producers: bool = False) None #
- save_static_scheduling_diagram(self: aidge_core.aidge_core.Scheduler, file_name: str, ignore_producers: bool = False) None #
- set_scheduling_policy(self: aidge_core.aidge_core.SequentialScheduler, policy: aidge_core.aidge_core.SchedulingPolicy) None #
- tag_conditional_nodes(self: aidge_core.aidge_core.Scheduler) None #
-
class SequentialScheduler : public Aidge::Scheduler#
Sequential scheduler, which executes the nodes one at a time following the static scheduling order.
Public Functions
-
~SequentialScheduler() = default#
-
inline void setSchedulingPolicy(SchedulingPolicy policy)#
Set the scheduling policy.
-
std::vector<std::shared_ptr<Node>> getSequentialStaticScheduling(std::size_t step = 0) const#
Get the static scheduling sequential order of nodes following the current scheduling policy.
- Parameters:
step – The step of the static schedule to retrieve (default is 0).
- Returns:
Vector of shared pointers to Nodes in their scheduled order.
-
MemoryManager generateMemory(bool incProducers = false, bool wrapAroundBuffer = false) const#
Generate the memory layout for the static scheduling following the current scheduling policy.
- Parameters:
incProducers – If true, include the producers in the memory layout.
wrapAroundBuffer – If true, allow wrapping in memory planes.
-
MemoryManager generateMemoryAutoConcat(bool incProducers = false, bool wrapAroundBuffer = false) const#
Generate the memory layout for the static scheduling following the current scheduling policy, with auto-concatenation: the Concat operator is replaced by direct allocation when possible.
- Parameters:
incProducers – If true, include the producers in the memory layout.
wrapAroundBuffer – If true, allow wrapping in memory planes.
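The memory-layout idea behind generateMemory can be sketched with a greedy first-fit allocator over buffer lifetimes derived from the static schedule: buffers whose lifetimes do not overlap may share the same offset. The buffer names, sizes and lifetimes below are hypothetical; this is not the Aidge MemoryManager implementation.

```python
# Greedy first-fit placement of buffers whose lifetimes come from the
# static schedule: buffers with disjoint lifetimes may share offsets
# (hypothetical data, not the Aidge MemoryManager).

# (name, size, first_step_used, last_step_used)
buffers = [
    ("conv_out", 64, 0, 1),
    ("pool_out", 32, 1, 2),
    ("fc_out", 16, 2, 3),
]

offsets = {}
allocated = []  # (offset, size, first, last)
for name, size, first, last in buffers:
    # Blocks whose lifetime overlaps the new buffer's lifetime
    live = sorted(a for a in allocated if not (a[3] < first or a[2] > last))
    offset = 0
    for off, sz, _f, _l in live:
        if offset + size <= off:
            break  # the buffer fits in the gap before this block
        offset = max(offset, off + sz)
    offsets[name] = offset
    allocated.append((offset, size, first, last))

# Peak memory of the plan (fc_out reuses conv_out's freed space)
peak = max(off + sz for off, sz, _f, _l in allocated)
```

A wrap-around buffer (wrapAroundBuffer = true) additionally allows a block to wrap past the end of a memory plane, which this sketch does not model.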
-
void forward(bool forwardDims = true, const std::vector<std::shared_ptr<Tensor>> &data = {})#
Run the provided Computational Graph with a batch of data.
-
void backward()#
Run backpropagation on the Computational Graph.
Executes the backward pass through the nodes in reverse order of the forward pass. Nodes will be skipped during the backward pass in two cases:
If they are not conditionally required (as determined by isConditionalNodeRequired)
If they have the ‘skipBackward’ attribute set to true
Parallel scheduler#
The parallel scheduler is implemented with a pool of threads (see class ThreadPool). Given a set of N threads, the algorithm works as follows:
First, add all nodes in the critical path for the current step in the thread pool queue (meaning logical early and late execution steps are equal to the current step);
If there are still some threads available, add nodes with the earliest execution step to the queue until all threads are busy;
Wait for all threads in the critical path to finish, then repeat.
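The dispatch loop above can be simulated without real threads, using hypothetical (name, early, late) tuples in place of scheduled Aidge nodes and a thread count in place of the actual ThreadPool (illustrative sketch only):

```python
# Simulation of the parallel dispatch policy: critical-path nodes
# first, then earliest-ready nodes until all threads are busy
# (hypothetical nodes, not the Aidge ParallelScheduler).
N_THREADS = 2

# (name, early, late): logical earliest/latest execution steps
nodes = [
    ("conv1", 0, 0), ("conv2", 0, 1),
    ("add", 1, 1), ("relu", 2, 2),
]

executed = []
remaining = list(nodes)
step = 0
while remaining:
    # 1. Critical-path nodes for this step (early == late == step)
    batch = [n for n in remaining if n[1] == n[2] == step]
    # 2. Fill idle threads with the earliest ready nodes
    fillers = sorted(
        (n for n in remaining if n not in batch and n[1] <= step),
        key=lambda n: n[1],
    )
    batch += fillers[:max(0, N_THREADS - len(batch))]
    # 3. "Wait" for the batch to finish, then advance to the next step
    executed.append([n[0] for n in batch])
    remaining = [n for n in remaining if n not in batch]
    step += 1
```

In this run, "conv2" has slack and rides along with the critical "conv1" at step 0, so the two threads stay busy.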
- class aidge_core.ParallelScheduler#
- __init__(self: aidge_core.aidge_core.ParallelScheduler, graph_view: aidge_core.aidge_core.GraphView, reset_cp_model: bool = True) None #
- clear_scheduling(self: aidge_core.aidge_core.Scheduler) None #
- forward(self: aidge_core.aidge_core.ParallelScheduler, forward_dims: bool = True, data: list[aidge_core.aidge_core.Tensor] = []) None #
- generate_scheduling(self: aidge_core.aidge_core.Scheduler) None #
- get_backward_scheduling(self: aidge_core.aidge_core.Scheduler) list[aidge_core.aidge_core.SchedulingElement] #
- get_forward_scheduling(self: aidge_core.aidge_core.Scheduler) list[aidge_core.aidge_core.SchedulingElement] #
- get_sequential_static_scheduling(self: aidge_core.aidge_core.Scheduler, step: int = 0, sorting: aidge_core.aidge_core.SchedulingPolicy = <SchedulingPolicy.Default: 0>) list[aidge_core.aidge_core.Node] #
- get_static_scheduling(self: aidge_core.aidge_core.Scheduler, step: int = 0) list[aidge_core.aidge_core.StaticSchedulingElement] #
- graph_view(*args, **kwargs)#
graph_view(self: aidge_core.aidge_core.Scheduler) -> aidge_core.aidge_core.GraphView
- reset_scheduling(self: aidge_core.aidge_core.Scheduler) None #
- save_factorized_static_scheduling_diagram(self: aidge_core.aidge_core.Scheduler, file_name: str, ignore_producers: bool = False, min_repeat: int = 2) None #
- save_scheduling_diagram(self: aidge_core.aidge_core.Scheduler, file_name: str, ignore_producers: bool = False) None #
- save_static_scheduling_diagram(self: aidge_core.aidge_core.Scheduler, file_name: str, ignore_producers: bool = False) None #
- tag_conditional_nodes(self: aidge_core.aidge_core.Scheduler) None #