--------------
Aidge workflow
--------------
Aidge provides a comprehensive workflow to streamline the development
and deployment of AI models for edge devices. Here is an overview of a
typical workflow and the key functionalities Aidge offers.

.. image:: /source/_static/aidge_workflow.png
   :scale: 60%

Aidge guides you through the process of preparing, optimizing, and
deploying your AI models:

1. **Define a model**

   Start by loading a (pre-trained) AI model from an ONNX file, or
   define your model directly with Aidge's API. Either way, your
   model will be accessible at any time via Aidge's graph intermediate
   representation.
2. **Visualize and analyze the model**

   Use Aidge's tools to `visualize `__ and `analyze `__ your model's
   architecture, helping you understand its structure and identify
   areas for improvement.
3. **Optimize the model (optional)**

   Train your model or apply powerful optimization techniques (such as
   `quantization `__, pruning, and graph simplification) to reduce model
   size, improve energy efficiency, and meet the specific constraints of
   your embedded hardware.
4. **Save the model**

   Save your optimized model back to an ONNX file at any point in the
   workflow.
5. **Export the model**

   Generate the source code of your optimized model, tailored to your
   embedded hardware platform and ready for deployment.
6. **Benchmark the model**

   Evaluate the `optimized model's performance `__, either at runtime
   within Aidge or when executing the generated export, to ensure it
   meets your performance targets on the target hardware.
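
The steps above can be sketched in Python. This is an illustrative sketch only: the module and function names (``aidge_onnx.load_onnx``, ``GraphView.save``, ``aidge_quantization.quantize_network``, ``aidge_onnx.export_onnx``) are assumptions based on typical Aidge usage and may differ between versions.

.. code-block:: python

   # Illustrative sketch of the Aidge workflow; API names are assumptions.
   import aidge_core
   import aidge_onnx

   # 1. Define a model: load a pre-trained network from an ONNX file
   #    into Aidge's graph intermediate representation.
   model = aidge_onnx.load_onnx("model.onnx")

   # 2. Visualize: dump the graph (e.g. as a Mermaid diagram) for inspection.
   model.save("model_graph")

   # 3. Optimize (optional): e.g. post-training quantization.
   #    (assumed entry point; calib_data is a hypothetical calibration set)
   # import aidge_quantization
   # aidge_quantization.quantize_network(model, nb_bits=8,
   #                                     input_dataset=calib_data)

   # 4. Save the (optimized) model back to ONNX.
   aidge_onnx.export_onnx(model, "model_optimized.onnx")

See the dedicated pages below for the exact API of each step.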
Each of these functionalities is encapsulated within `Aidge's modular structure `__.
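
For instance, code generation is handled by a dedicated export module. The sketch below assumes the ``aidge_export_cpp`` plugin and its ``export`` entry point, plus ``aidge_core.SequentialScheduler``; these names may differ in your version and are given only to illustrate the modular layout.

.. code-block:: python

   # Illustrative sketch; module and function names are assumptions.
   import aidge_core
   import aidge_export_cpp  # assumed hardware-specific export plugin

   # Schedule the graph, then generate standalone C++ sources
   # ready to compile for the target platform.
   scheduler = aidge_core.SequentialScheduler(model)
   aidge_export_cpp.export("export_cpp", model, scheduler)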

.. toctree::
   :hidden:
   :maxdepth: 2

   aidge_architecture.rst
   aidge_runtime_vs_export.rst
   aidge_IR.rst
   aidge_core.rst
   aidge_model_vis.rst
   aidge_export.rst
   aidge_export_structure.rst