User Guide
🚧 Please be aware that this user guide is still an incomplete and only partially implemented specification of the Aidge framework. Please refer to the API section to get the current state of Aidge's API.
Workflow overview
AIDGE allows designing and deploying Deep Neural Networks (DNN) on embedded systems. The design and deployment stages are described in the sections below.
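To make the workflow concrete, here is a minimal sketch that loads an ONNX model, configures it for the CPU backend and runs one inference. It is based on the Python bindings used in the Aidge tutorials (aidge_core, aidge_backend_cpu, aidge_onnx); the exact module, function and enum names are assumptions that may differ between versions, so check them against the API section.

```python
import numpy as np

import aidge_core
import aidge_backend_cpu  # importing the backend registers the CPU kernel implementations
import aidge_onnx

# Load and store model: import a serialized ONNX graph into an Aidge GraphView
model = aidge_onnx.load_onnx("model.onnx")

# Select a data type and a backend for every node of the graph
model.set_datatype(aidge_core.dtype.float32)
model.set_backend("cpu")

# Provide data: wrap the input into an Aidge Tensor (input shape assumed here)
x = aidge_core.Tensor(np.random.rand(1, 3, 224, 224).astype(np.float32))

# Execute graph: build a schedule for the graph and run one forward pass
scheduler = aidge_core.SequentialScheduler(model)
scheduler.forward(data=[x])
```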
High-level functionalities
AIDGE offers functionalities that can be categorized according to the following diagram.
Load and store model: functions used to load or store a graph model from/to a serialized format.
Model graph: functions used to build the graph model, such as adding an operator (see the graph-building sketch after this list).
Transform model: functions used for manipulating the graph, such as graph duplication.
Provide data: functions used to provide the data needed to execute a graph. These functions must be runnable on device.
Generate graph: generate kernels and scheduling for a specific target, from an already optimized graph. There is no graph manipulation here: the graph is assumed to be already prepared (quantization, tiling, operator mapping…) for the intended target. For graph preparation and optimization, see Optimize graph.
Static analysis of the graph: functions for obtaining statistics on the graph, such as the number of parameters or operations, that can be computed without having to execute the graph (see the parameter-counting sketch after this list).
Execute graph: execute a graph, either using a backend library (a simple implementation change, with no generation or compilation involved) or a compiled operator implementation (in which case the graph must first go through the Compile graph function).
Learn model: training a model requires several functions of the workflow (Model graph, Provide data, Execute graph, Benchmark KPI).
Benchmark KPI: all kinds of benchmarking that require running the network on a target in order to perform a measurement: accuracy, execution time… Some of these functions must be runnable on device.
Model Hardware: functions to represent the hardware target.
Optimize graph: high-level functions to optimize the hardware mapping: quantization, pruning, tiling…
Learn on edge: high-level functions to condition the graph for edge learning, including continual learning and federated learning.
Ensure robustness: high-level functions to condition the graph for robustness.
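To illustrate the Model graph and Transform model categories, the sketch below assembles a small multilayer perceptron programmatically and then duplicates it. The operator factories (aidge_core.sequential, FC, ReLU) and their arguments are assumptions taken from the Aidge tutorials and may differ in your version of the Python bindings.

```python
import aidge_core

# Model graph: assemble operators into a GraphView, connected in sequence
model = aidge_core.sequential([
    aidge_core.FC(in_channels=32, out_channels=64, name="fc1"),
    aidge_core.ReLU(name="relu1"),
    aidge_core.FC(in_channels=64, out_channels=10, name="fc2"),
])

# Transform model: duplicate the graph, e.g. to keep a reference copy
# before applying further manipulations
model_copy = model.clone()
```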
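For the Static analysis of the graph category, the following sketch estimates the number of parameters by walking the graph and summing the sizes of the Producer (constant/weight) nodes, without executing the graph. The accessors used here (get_nodes, type, get_operator, get_output, dims) are assumptions about the Aidge Python API and may need adjusting.

```python
import numpy as np

import aidge_core
import aidge_onnx

model = aidge_onnx.load_onnx("model.onnx")

# Weights and biases are held by Producer nodes; sum the element counts
# of their output tensors to estimate the parameter count.
nb_params = 0
for node in model.get_nodes():
    if node.type() == "Producer":
        tensor = node.get_operator().get_output(0)
        nb_params += int(np.prod(tensor.dims()))

print(f"Number of parameters: {nb_params}")
```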