Scheduler#

Aidge introduces a well-defined consumer-producer (C-P) model for operator implementations, similar to transaction-level modeling (TLM) in electronic design. A generic default implementation is provided as well. The C-P model can be specified either as precise amounts of data or as an arbitrary data quantity (token), for each operator and dynamically at each execution step. The C-P model execution path is decoupled from the data execution path, which allows the graph execution to be statically scheduled without the actual operator implementations.

Thanks to Aidge’s C-P model, arbitrarily complex cyclic and acyclic dataflow graphs can be statically scheduled. Generic sequential and parallel schedulers are available, and a custom scheduler can be built from the static scheduling data (the logical early and late execution steps and the associated dependencies of each scheduled node).
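As an illustration, a static scheduling can be generated and inspected before any operator implementation is attached. A minimal Python sketch, assuming `model` is an already-built aidge_core.GraphView:

    import aidge_core

    # `model` is assumed to be an existing GraphView (hypothetical here).
    scheduler = aidge_core.Scheduler(model)
    scheduler.generate_scheduling()

    # List the nodes in their static execution order.
    for node in scheduler.get_static_scheduling():
        print(node.name(), node.type())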

Scheduler base class#

This is the base class for scheduling in Aidge. It can generate static scheduling for cyclic and acyclic graphs, including logical early and late execution steps and associated dependencies for each scheduled node.

class aidge_core.Scheduler#
__init__(self: aidge_core.aidge_core.Scheduler, graph_view: aidge_core.aidge_core.GraphView) None#
generate_memory(self: aidge_core.aidge_core.Scheduler, inc_producers: bool = False, wrap_around_buffer: bool = False) Aidge::MemoryManager#
generate_scheduling(self: aidge_core.aidge_core.Scheduler) None#
get_static_scheduling(self: aidge_core.aidge_core.Scheduler, step: int = 0) List[aidge_core.aidge_core.Node]#
graph_view(self: aidge_core.aidge_core.Scheduler) aidge_core.aidge_core.GraphView#

resetScheduling(self: aidge_core.aidge_core.Scheduler) None#
save_scheduling_diagram(self: aidge_core.aidge_core.Scheduler, file_name: str) None#
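Once a static scheduling has been generated, a memory layout can be derived from it. A sketch reusing the `scheduler` object from the example above, assuming tensor dimensions have already been forwarded through the graph:

    # Derive a memory layout from the current static scheduling.
    # wrap_around_buffer=True allows wrap-around in memory planes,
    # which can reduce the peak memory footprint.
    mem_manager = scheduler.generate_memory(inc_producers=False,
                                            wrap_around_buffer=True)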
class Scheduler#

Subclassed by Aidge::ParallelScheduler, Aidge::SequentialScheduler

Public Functions

inline Scheduler(std::shared_ptr<GraphView> graphView, std::shared_ptr<Node> upperNode = nullptr)#
virtual ~Scheduler() noexcept#
std::vector<std::shared_ptr<Node>> getStaticScheduling(std::size_t step = 0) const#

Return a vector of Nodes, ordered by the order in which they are called by the scheduler.

Returns:

std::vector<std::shared_ptr<Node>>

inline std::shared_ptr<GraphView> graphView() const noexcept#
void generateScheduling()#

Generate the full static scheduling of the GraphView. For each node, the earliest and latest possible logical execution steps are computed. Nodes that may be scheduled at the same logical step have no data dependency between them and can be run in parallel.

void resetScheduling()#

Reset all scheduling and the consumer-producer state associated with the nodes.

MemoryManager generateMemory(bool incProducers = false, bool wrapAroundBuffer = false) const#

Generate the memory layout for the current static scheduling.

Parameters:
  • incProducers – If true, include the producers in the memory layout.

  • wrapAroundBuffer – If true, allow wrap-around in memory planes.

void connectInputs(const std::vector<std::shared_ptr<Aidge::Tensor>> &data)#

Place the given data tensors in the data input tensors of the graphView. In the case of multiple data input tensors, they are mapped to the inputs in the order given by the graph.

Parameters:

data – data input tensors

void saveStaticSchedulingDiagram(const std::string &fileName) const#

Save the static scheduling, with the early and late relative order of the nodes, to a Markdown file.

Parameters:

fileName – Name of the generated file.

void saveSchedulingDiagram(const std::string &fileName) const#

Save the execution order of the layers to a Markdown file.

Parameters:

fileName – Name of the generated file.

struct PriorProducersConsumers#

Public Functions

PriorProducersConsumers()#
PriorProducersConsumers(const PriorProducersConsumers&)#
~PriorProducersConsumers() noexcept#

Public Members

bool isPrior = false#
std::set<std::shared_ptr<Aidge::Node>> requiredProducers#
std::set<std::shared_ptr<Aidge::Node>> priorConsumers#

Sequential scheduler#

class aidge_core.SequentialScheduler#
__init__(self: aidge_core.aidge_core.SequentialScheduler, graph_view: aidge_core.aidge_core.GraphView) None#
backward(self: aidge_core.aidge_core.SequentialScheduler) None#
forward(self: aidge_core.aidge_core.SequentialScheduler, forward_dims: bool = True, data: List[aidge_core.aidge_core.Tensor] = []) None#
generate_memory(self: aidge_core.aidge_core.Scheduler, inc_producers: bool = False, wrap_around_buffer: bool = False) Aidge::MemoryManager#
generate_scheduling(self: aidge_core.aidge_core.Scheduler) None#
get_static_scheduling(self: aidge_core.aidge_core.Scheduler, step: int = 0) List[aidge_core.aidge_core.Node]#
graph_view(self: aidge_core.aidge_core.Scheduler) aidge_core.aidge_core.GraphView#

resetScheduling(self: aidge_core.aidge_core.Scheduler) None#
save_scheduling_diagram(self: aidge_core.aidge_core.Scheduler, file_name: str) None#
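A typical forward pass with the sequential scheduler might look as follows. A minimal sketch, assuming `model` is a GraphView with a backend (e.g. aidge_backend_cpu) and data type already set, and that aidge_core.Tensor accepts a NumPy array:

    import numpy as np
    import aidge_core

    # Hypothetical input matching the graph's expected shape.
    input_tensor = aidge_core.Tensor(
        np.random.rand(1, 3, 32, 32).astype(np.float32))

    scheduler = aidge_core.SequentialScheduler(model)
    # forward_dims=True propagates tensor dimensions through the graph
    # before execution; `data` feeds the graph's data inputs in order.
    scheduler.forward(forward_dims=True, data=[input_tensor])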
class SequentialScheduler : public Aidge::Scheduler#

Single-threaded sequential scheduler: nodes are executed one at a time, following the static scheduling and the selected scheduling policy.

Public Types

enum class SchedulingPolicy#

Values:

enumerator Default#
enumerator AsSoonAsPossible#
enumerator AsLateAsPossible#

Public Functions

inline SequentialScheduler(std::shared_ptr<GraphView> graphView, std::shared_ptr<Node> upperNode = nullptr)#
~SequentialScheduler() = default#
inline void setSchedulingPolicy(SchedulingPolicy policy)#
virtual void forward(bool forwardDims = true, const std::vector<std::shared_ptr<Aidge::Tensor>> &data = {})#

Run the provided Computational Graph with a batch of data.

void backward()#

Run the backward pass of the provided Computational Graph.

Parallel scheduler#

The parallel scheduler is implemented with a pool of threads (see class ThreadPool). Given a set of N threads, the algorithm works as follows (a simplified sketch is given after the list):

  • First, add to the thread pool queue all nodes on the critical path for the current step (that is, nodes whose logical early and late execution steps both equal the current step);

  • If threads remain available, add the nodes with the earliest execution steps to the queue until all threads are busy;

  • Wait for all critical-path nodes to finish, then repeat for the next step.
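The following Python sketch mimics this dispatch loop. It is illustrative only: the real scheduler is C++ code built on its own ThreadPool class and also tracks data dependencies between nodes, which this simplified version omits. `static_schedule` is assumed to be a list of (node, early, late) tuples produced by the static scheduling pass, and `run_node` a callable executing one node:

    from concurrent.futures import ThreadPoolExecutor, wait

    def parallel_run(static_schedule, run_node, n_threads=4):
        remaining = list(static_schedule)
        step = 0
        with ThreadPoolExecutor(max_workers=n_threads) as pool:
            while remaining:
                # Critical-path nodes: early and late steps both equal
                # the current step, so they must run now.
                critical = [s for s in remaining
                            if s[1] == step and s[2] == step]
                # Fill the remaining threads with ready nodes that have
                # slack, earliest execution step first.
                slack = sorted((s for s in remaining
                                if s not in critical and s[1] <= step),
                               key=lambda s: s[1])
                batch = critical + slack[:max(0, n_threads - len(critical))]
                futures = {s: pool.submit(run_node, s[0]) for s in batch}
                # Wait only for the critical path before moving on.
                wait([futures[s] for s in critical])
                remaining = [s for s in remaining if s not in batch]
                step += 1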

class aidge_core.ParallelScheduler#
__init__(self: aidge_core.aidge_core.ParallelScheduler, graph_view: aidge_core.aidge_core.GraphView) None#
forward(self: aidge_core.aidge_core.ParallelScheduler, forward_dims: bool = True, data: List[aidge_core.aidge_core.Tensor] = []) None#
generate_memory(self: aidge_core.aidge_core.Scheduler, inc_producers: bool = False, wrap_around_buffer: bool = False) Aidge::MemoryManager#
generate_scheduling(self: aidge_core.aidge_core.Scheduler) None#
get_static_scheduling(self: aidge_core.aidge_core.Scheduler, step: int = 0) List[aidge_core.aidge_core.Node]#
graph_view(self: aidge_core.aidge_core.Scheduler) aidge_core.aidge_core.GraphView#

resetScheduling(self: aidge_core.aidge_core.Scheduler) None#
save_scheduling_diagram(self: aidge_core.aidge_core.Scheduler, file_name: str) None#
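Usage mirrors the sequential scheduler; only the class changes. A sketch reusing `model` and `input_tensor` from the sequential example:

    scheduler = aidge_core.ParallelScheduler(model)
    scheduler.forward(forward_dims=True, data=[input_tensor])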
class ParallelScheduler : public Aidge::Scheduler#

Multi-threaded parallel scheduler with dynamic scheduling.

Public Functions

inline ParallelScheduler(std::shared_ptr<GraphView> graphView, std::shared_ptr<Node> upperNode = nullptr)#
~ParallelScheduler() = default#
virtual void forward(bool forwardDims = true, const std::vector<std::shared_ptr<Aidge::Tensor>> &data = {})#

Run the provided Computational Graph with a batch of data.