Export Code Structure
=====================

This guide provides an overview of the code structure of Aidge export modules.

Introduction
------------

In Aidge, an *export module* refers to any module that generates a standalone program capable of running a neural network on specific hardware targets. Several export modules are already available in the framework, such as:

- `Generic C++ Export `_
- `ARM Cortex-M Export `_

Throughout this guide, we will refer primarily to the **C++ Export**, since all export modules share a similar structure. We will walk through the complete export process using a small model as an example, explaining each step and mechanism involved.

.. image:: /source/_static/AidgeExportStructure/export_overview.png
   :alt: Export overview diagram

Graph Preparation
-----------------

Before generating the standalone export, the Aidge graph undergoes several transformations to fit the targeted export backend. One key step in this process is the **fusion of operators** into *MetaOperators*, groups of operators designed to match specific kernel implementations.

.. image:: /source/_static/AidgeExportStructure/export_fuse.png
   :alt: Graph fusion process

For example, as shown above, the convolution kernel supports both **padding** and **activation** (e.g., `ReLU`), so these operations are fused together into a single convolution operation. This transformation is achieved by applying a set of **regular expression-based recipes** to the graph. Each export module provides its own recipe set. For more details, refer to:

- `"Transform Graph" User Guide `_
- `Graph Matching Tutorial `_

Additional transformations, such as setting data formats or data types, are applied within the ``export()`` function defined in ``export.py``. Most helper functions used in this process are implemented in ``export_utils.py``. For a detailed walkthrough, see the `"Quantized LeNet C++ Export" tutorial `_.

Standalone Export Structure
---------------------------

The **Standalone Export** refers to the generated code that runs the exported model independently. Before exploring the internal structure of the export module itself, let's first review the structure of the generated export:

.. image:: /source/_static/AidgeExportStructure/standalone_export_structure.png
   :alt: Standalone export structure diagram

In the generated export:

- ``forward.cpp`` contains the function responsible for running inference.
- The generated folders include:

  - **kernels/** (red): Implementation code for each supported kernel.
  - **utils/** (green): Utility source files.
  - **layers/** (blue): Configuration files for each model layer. These define parameters such as kernel size, dilation, and memory offsets for outputs.
  - **parameters/** (yellow): Serialized model parameters, such as weights and biases.

*(Note: newer exports may support fallback mechanisms that allow partial reuse of existing export kernels.)*

Export Module Structure
-----------------------

The **Export Module** is the Aidge component that generates the **Standalone Export**.

.. image:: /source/_static/AidgeExportStructure/full_export_structure.png
   :alt: Full export structure diagram

Copied and Generated Files
^^^^^^^^^^^^^^^^^^^^^^^^^^

In the diagram above, the export module structure includes two main categories of files:

- **Copied files**: Files in ``kernels`` and ``static`` are copied directly into the generated export.
- **Generated files**: Configuration and parameter files are dynamically generated based on the model's layers.
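These generated files are produced by rendering Jinja templates with each layer's attributes. Below is a minimal sketch of how such a rendering step could look using the ``jinja2`` package; the function, template, and attribute names are illustrative assumptions, and the export modules ship their own generation helpers for this step.

.. code-block:: python

   # Minimal sketch of template-based file generation (illustrative only;
   # the export modules provide their own helpers for this step).
   from pathlib import Path

   from jinja2 import Environment, FileSystemLoader


   def generate_file(template_dir: str, template_name: str, output_path: str, **attributes) -> None:
       """Render a Jinja template with a layer's attributes and write the result to disk."""
       env = Environment(loader=FileSystemLoader(template_dir))
       content = env.get_template(template_name).render(**attributes)
       Path(output_path).parent.mkdir(parents=True, exist_ok=True)
       Path(output_path).write_text(content)


   # Hypothetical usage: render a convolution configuration header into layers/.
   generate_file(
       "templates/configuration",
       "conv_config.jinja",          # assumed template name
       "export/layers/conv1.h",
       name="conv1",
       kernel_dims=[3, 3],
       stride_dims=[1, 1],
       dilation_dims=[1, 1],
   )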
Each kernel type has two template files:

- A **configuration template** (in ``layers/``)
- A **forward template** (used for function calls in ``forward.cpp``)

Parameters are generated using a third template, ``parameters.jinja``.

Operators
^^^^^^^^^

The **operators/** folder is a crucial part of the export module. It contains Python scripts that connect Aidge's intermediate representation (IR) to the actual implementation used by the export.

.. image:: /source/_static/AidgeExportStructure/export_classes.png
   :alt: Export class hierarchy diagram

The main classes defining export behavior are:

- `ExportLib `_
- `ExportNode `_

Each export module defines its own ``ExportLib`` (in ``export_registry.py``), which maintains a registry mapping Aidge operators to their:

- `ImplSpec `_ (implementation specification)
- `ExportNode `_

**ImplSpec** defines the constraints and supported configurations for each kernel. For example, the C++ convolution kernel supports only the NHWC data format for inputs and weights.

**ExportNode** holds all information required to generate files for a specific operator, including:

- ``config_template`` - Path to the layer configuration template
- ``forward_template`` - Path to the kernel call template in ``forward.cpp``
- ``include_list`` - Required includes for ``forward.cpp``
- ``kernels_to_copy`` - List of kernel implementation files to include (one or more)

.. image:: /source/_static/AidgeExportStructure/conv_operator.png
   :alt: Example of Conv2D operator registration

Each operator's ``ExportNode`` is defined in the corresponding file under ``operators/`` (e.g., ``operators/Conv.py``). In the example above, the **Conv2D** operator's ``ImplSpec`` specifies that only NHWC input, weight, and output formats are supported. The ``attributes`` dictionary defines the variables available to Jinja templates when generating configuration and forward files.

Because the convolution implementation supports multiple combinations (e.g., padding + activation), several variants such as ``Conv2D``, ``PadConv``, and ``ConvAct`` are registered as subclasses of the same base operator.

.. image:: /source/_static/AidgeExportStructure/conv_inheritance.png
   :alt: Convolution operator inheritance

For instance, ``PadConvAct`` inherits from the previous convolution variants, extending or overriding parameters as needed. Base classes initialize defaults (e.g., padding = 0, activation = ``Linear``), and derived classes modify them.

Finally, the ``Producer.py`` file manages parameter exports (corresponding to producers in the Aidge graph).

Adding a New Kernel
-------------------

Now that the export structure has been described, let's go through the process of adding a new kernel to the export. The procedure is straightforward: simply follow these steps:

1. Create the **kernel implementation** and place the file in the ``kernels`` folder.
2. Create the **configuration template** file, defining all the parameters required to execute the kernel function.
3. Add the **forward template** file, which generates the kernel call inside the ``forward.cpp`` file.
4. Register the kernel in the ``ExportLib`` by creating a dedicated Python file within the operators folder (a sketch of such a file is shown after this list).
5. (Optional) If your kernel combines multiple operators, you can create a recipe to fuse them into a ``MetaOperator`` and register it as well. These recipes are typically defined in the ``export_utils.py`` file.

And that's it: your new operator is now supported in the export system!
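As an illustration of step 4, the sketch below shows what such a registration file could look like for a hypothetical ``Swish`` kernel added to the C++ export. The class and helper names (``ExportLibCpp``, ``ExportNodeCpp``, ``register``, ``ImplSpec``, ``IOSpec``, ``ROOT``) are assumptions based on the structure described above and may differ between Aidge versions; use an existing file such as ``operators/Conv.py`` as the authoritative reference.

.. code-block:: python

   # Hypothetical registration file, e.g. operators/Swish.py (names are
   # illustrative; check an existing operator file for the exact API).
   import aidge_core
   from aidge_core.export_utils import ExportNodeCpp   # assumed base class
   from aidge_export_cpp import ExportLibCpp, ROOT     # assumed registry and module root


   @ExportLibCpp.register(
       "Swish",  # Aidge operator type mapped to this export implementation
       aidge_core.ImplSpec(aidge_core.IOSpec(aidge_core.dtype.float32)),
   )
   class SwishCPP(ExportNodeCpp):
       def __init__(self, node, mem_info):
           super().__init__(node, mem_info)
           # Templates used to generate the layer configuration (layers/) and
           # the kernel call inside forward.cpp.
           self.config_template = str(ROOT / "templates" / "configuration" / "swish_config.jinja")
           self.forward_template = str(ROOT / "templates" / "kernel_forward" / "swish_forward.jinja")
           # Includes added to forward.cpp and implementation files copied
           # into the generated kernels/ folder.
           self.include_list = []
           self.kernels_to_copy = [str(ROOT / "kernels" / "swish.hpp")]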
For more detailed instructions on adding a kernel, refer to the tutorial: `"Add a Custom Operator to the CPP Export" `_.