Aidge Export TensorRT API
=========================

MAKE Options
------------

The export provides a Makefile with several options for using the export on your machine. You can generate either a C++ export or a Python export. Additionally, you can compile the export and/or the Python library using Docker if your host machine lacks the necessary packages.

The available commands are summarized in the following table:

.. list-table::
   :widths: 50 50
   :header-rows: 1

   * - Command
     - Description
   * - ``make / make help``
     - Display the different options available
   * - ``make build_cpp``
     - Compile the export on the host for C++ apps (generates an executable in ``build/bin``)
   * - ``make build_lib_python``
     - Compile the export on the host for Python apps (generates a Python lib in ``build/lib``)
   * - ``make build_image_docker``
     - Generate the Docker image of the TensorRT compiler
   * - ``make build_cpp_docker``
     - Compile the export in a container for C++ apps (generates an executable in ``build/bin``)
   * - ``make test_cpp_docker``
     - Test the executable for C++ apps in a container
   * - ``make build_lib_python_docker``
     - Compile the export in a container for Python apps (generates a Python lib in ``build/lib``)
   * - ``make test_lib_python_docker``
     - Test the lib for Python apps in a container
   * - ``make clean``
     - Clean up the build and bin folders

Graph functions
---------------

.. cpp:function:: device(id)

   :param id: (int) Set the ID of the device.

.. cpp:function:: load(filepath)

   Load a graph from a file, either a ``.onnx`` file or a ``.trt`` engine.

   :param filepath: (str) The path to the file containing the graph.

.. cpp:function:: save(filepath)

   Save the current graph as a ``.trt`` engine.

   :param filepath: (str) The path to save the graph to.

.. cpp:function:: calibrate(calibration_folder_path="./calibration_folder/", cache_file_path="./calibration_cache", batch_size=1)

   Calibrate the graph using the calibration data found inside the calibration folder.
   This folder should include a ``.info`` file containing the dimensions of the calibration data, along with the data stored in a ``.batch`` file.

   :param calibration_folder_path: (str) The path to the calibration folder. Default is ``"./calibration_folder/"``.
   :param cache_file_path: (str) The path to the calibration cache file. Default is ``"./calibration_cache"``.
   :param batch_size: (int) The batch size for calibration. Default is 1.

.. cpp:function:: initialize()

   Initialize the graph.

.. cpp:function:: profile(nb_iterations, mode=ExecutionMode_T.ASYNC)

   Profile the graph's execution by printing the average TensorRT processing time per stimulus.

   :param nb_iterations: (int) The number of iterations to run.
   :param mode: (ExecutionMode_T) The execution mode. Default is ``ExecutionMode_T.ASYNC``.

.. cpp:function:: run_sync(inputs)

   Run the graph synchronously.

   :param inputs: (list) A list of inputs.
   :return: (list) A list of outputs.

Export function
---------------

.. autofunction:: aidge_export_tensorrt.export

Plugin helper
-------------

The TensorRT export allows you to define plugins which are automatically used when loading the ONNX file. The export defines a helper command to generate the template of a plugin, which you can then fill in.

**Usage example:**

.. code-block:: shell

   python -m aidge_export_tensorrt.generate_plugin -n "test" -f "myExport"

This will create the plugin ``test`` in the folder ``myExport``:

.. code-block:: text

   myExport
   +--- plugins
   |    +--- test
   |    |    +--- test_plugin.hpp
   |    |    +--- test_plugin.cu
   +--- ...
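Putting the graph functions above together, a typical Python workflow might look like the following sketch. This is a hedged illustration, not a verified snippet: it assumes the library built by ``make build_lib_python`` is importable as ``aidge_export_tensorrt`` and exposes a ``Graph`` object offering the documented functions; the exact import path, class name, input shape, and file names are assumptions to adapt to your build.

.. code-block:: python

   # Hypothetical usage sketch of the documented graph functions.
   # Assumptions: import path, Graph class, model file names, input shape.
   import numpy as np
   import aidge_export_tensorrt as trt_export  # assumed import path

   graph = trt_export.Graph()                  # assumed class name
   graph.device(0)                             # run on GPU 0
   graph.load("model.onnx")                    # or a serialized "model.trt" engine

   # Optional INT8 calibration, using the defaults documented above:
   # a calibration folder with a .info file and .batch data.
   graph.calibrate(
       calibration_folder_path="./calibration_folder/",
       cache_file_path="./calibration_cache",
       batch_size=1,
   )

   graph.initialize()                          # build/initialize the engine

   dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)  # shape is an example
   outputs = graph.run_sync([dummy_input])     # list in, list out

   graph.save("model.trt")                     # serialize the optimized engine

Saving the engine after ``initialize()`` lets subsequent runs call ``load("model.trt")`` directly and skip the ONNX parsing and engine-build step.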