Aidge workflow

Aidge provides a comprehensive workflow to streamline the development and deployment of AI models for edge devices. Here is an overview of a typical workflow and the key functionalities Aidge offers.

[Figure: Aidge workflow overview]

Aidge guides you through the process of preparing, optimizing, and deploying your AI models:

  1. Define a model

Start by loading a (pre-trained) AI model from an ONNX file, or define your model using Aidge’s API. Either way, your model will be accessible at any time via Aidge’s graph intermediate representation.
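A minimal sketch of both options in Python (the aidge_onnx.load_onnx call and the aidge_core.sequential builder follow the Aidge tutorials; the exact operator arguments shown are assumptions that may vary between versions):

```python
import aidge_core
import aidge_onnx

# Option 1: load a pre-trained network from an ONNX file.
model = aidge_onnx.load_onnx("model.onnx")

# Option 2: define a small network directly with Aidge's graph-building API
# (operator names and arguments are indicative, adapt them to your version).
model = aidge_core.sequential([
    aidge_core.FC(in_channels=784, out_channels=64, name="fc1"),
    aidge_core.ReLU(name="relu1"),
    aidge_core.FC(in_channels=64, out_channels=10, name="fc2"),
])
```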

  2. Visualize and analyze the model

Utilize Aidge’s tools to visualize and analyze your model’s architecture, helping you understand its structure and identify areas for improvement.
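For example, a short sketch of how the graph can be inspected (the GraphView.save call writing a Mermaid description and the node accessors follow the Aidge tutorials; file naming may differ between versions):

```python
# Save the graph as a Mermaid description (my_model.mmd) that can be rendered
# with any Mermaid viewer to inspect the topology layer by layer.
model.save("my_model")

# Print a quick textual summary of the nodes contained in the graph.
for node in model.get_nodes():
    print(node.name(), node.type())
```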

  3. Optimize the model (optional)

Train your model or apply powerful optimization techniques (such as quantization, pruning, and simplification) to reduce model size, improve energy efficiency, and meet the specific constraints of embedded hardware.
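As an illustration, a hedged sketch of common graph-simplification recipes (the recipe names follow the Aidge tutorials; the quantization call is left commented out because its exact signature should be checked against the aidge_quantization documentation):

```python
import aidge_core

# Graph-simplification recipes applied in place on the GraphView.
aidge_core.remove_flatten(model)   # drop Flatten nodes made redundant by FC layers
aidge_core.fuse_batchnorm(model)   # fold BatchNorm parameters into the preceding Conv/FC

# Post-training quantization is provided by the optional aidge_quantization module
# (illustrative only, check the module documentation for the exact API):
# import aidge_quantization
# aidge_quantization.quantize_network(model, nb_bits=8, input_dataset=calibration_samples)
```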

  4. Save the model

Save your optimized model to an ONNX file whenever you need to.
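A one-line sketch, assuming the aidge_onnx exporter (the function name follows the Aidge tutorials and may differ between versions):

```python
import aidge_onnx

# Write the current state of the graph back to an ONNX file.
aidge_onnx.export_onnx(model, "model_optimized.onnx")
```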

  5. Export the model

Generate the source code of your optimized model tailored for your embedded hardware platform, making it ready for deployment.
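For instance, a hedged sketch of a C++ export using the optional aidge_export_cpp module (the export signature and the dtype/backend setters are assumptions based on the Aidge tutorials; other export modules target other platforms):

```python
import aidge_core
import aidge_backend_cpu   # CPU reference implementation of the operators
import aidge_export_cpp

# Configure the graph for a CPU run and build a static schedule of its nodes.
model.set_datatype(aidge_core.dtype.float32)
model.set_backend("cpu")
scheduler = aidge_core.SequentialScheduler(model)

# Generate a standalone C++ project for the model in the "export_cpp" folder.
aidge_export_cpp.export("export_cpp", model, scheduler)
```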

  6. Benchmark the model

Evaluate the optimized model’s runtime performance to ensure it meets your performance targets on the target hardware.
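For example, a rough latency measurement on the CPU reference backend (the input shape is a placeholder and the Tensor/scheduler calls follow the Aidge tutorials; adapt them to your model and backend):

```python
import time
import numpy as np
import aidge_core
import aidge_backend_cpu

# Wrap a random input in an Aidge tensor (placeholder shape, adapt to your network).
input_tensor = aidge_core.Tensor(np.random.rand(1, 3, 32, 32).astype(np.float32))

model.set_datatype(aidge_core.dtype.float32)
model.set_backend("cpu")
scheduler = aidge_core.SequentialScheduler(model)

# Average the latency over a batch of forward passes.
start = time.perf_counter()
for _ in range(100):
    scheduler.forward(data=[input_tensor])
print(f"average latency: {(time.perf_counter() - start) / 100 * 1e3:.2f} ms")
```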

Each of these functionalities is encapsulated within Aidge’s modular structure.