Aidge Export TensorRT API#
MAKE Options#
The export provides a Makefile with several options to utilize the export on your machine. You can generate either a C++ export or a Python export. Additionally, you have the option to compile the export and/or the Python library using Docker if your host machine lacks the necessary packages.
The available commands are summarized in the following table:
| Command | Description |
|---|---|
| | Display the different options available |
| | Compile the export on host for C++ apps (generates an executable in build/bin) |
| | Compile the export on host for Python apps (generates a Python lib in build/lib) |
| | Generate the Docker image of the TensorRT compiler |
| | Compile the export in a container for C++ apps (generates an executable in build/bin) |
| | Test the executable for C++ apps in a container |
| | Compile the export in a container for Python apps (generates a Python lib in build/lib) |
| | Test the lib for Python apps in a container |
| | Clean up the build and bin folders |
Graph functions#
-
device(id)#
Set the device on which the graph will be executed.
- Parameters:
id – (int) The ID of the device (GPU) to use.
-
load(filepath)#
Load a graph from a file, either a .onnx file or a .trt engine.
- Parameters:
filepath – (str) The path to the file containing the graph.
-
save(filepath)#
Save the current graph as a .trt engine.
- Parameters:
filepath – (str) The path to save the graph to.
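The typical use of load and save is to parse an ONNX model once, then cache the built TensorRT engine for fast reloads. The sketch below illustrates this, assuming the export's Python lib has been built and is importable as `aidge_export_tensorrt` with a `Graph` class (both names are assumptions; adapt them to the lib generated in build/lib):

```python
# Hypothetical sketch: build an engine from ONNX, cache it, reload it later.
try:
    from aidge_export_tensorrt import Graph  # assumed module/class names
except ImportError:
    Graph = None  # Python lib not built; sketch only

if Graph is not None:
    graph = Graph()
    graph.device(0)            # run on GPU 0
    graph.load("model.onnx")   # parse the ONNX model
    graph.initialize()
    graph.save("model.trt")    # serialize the engine for later runs
    # On subsequent runs, skip ONNX parsing entirely:
    graph.load("model.trt")
```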
-
calibrate(calibration_folder_path = './calibration_folder/', cache_file_path = './calibration_cache', batch_size = 1)#
Calibrate the graph using the calibration data found inside the calibration folder. This folder should include a .info file containing the dimensions of the calibration data, along with the data stored in a .batch file.
- Parameters:
calibration_folder_path – (str) The path to the calibration folder. Default is “./calibration_folder/”.
cache_file_path – (str) The path to the calibration cache file. Default is “./calibration_cache”.
batch_size – (int) The batch size for calibration. Default is 1.
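Calibration is typically run once after loading the ONNX model and before initializing; the resulting cache file is reused on subsequent runs. A hedged sketch (the `Graph` class and import path are assumptions, and the folder layout follows the description above):

```python
# Hypothetical INT8 calibration sketch; the calibration folder holds a
# .info file (data dimensions) and the samples as .batch files.
try:
    from aidge_export_tensorrt import Graph  # assumed module/class names
except ImportError:
    Graph = None  # Python lib not built; sketch only

if Graph is not None:
    graph = Graph()
    graph.device(0)
    graph.load("model.onnx")
    graph.calibrate(
        calibration_folder_path="./calibration_folder/",
        cache_file_path="./calibration_cache",  # reused on later runs
        batch_size=8,
    )
    graph.initialize()
```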
-
initialize()#
Initialize the graph.
-
profile(nb_iterations, mode = ExecutionMode_T.ASYNC)#
Profile the graph’s execution, printing the average TensorRT processing time per stimulus.
- Parameters:
nb_iterations – (int) The number of iterations to run.
mode – (ExecutionMode_T) The execution mode. Default is ExecutionMode_T.ASYNC.
-
run_sync(inputs)#
Run the graph synchronously.
- Parameters:
inputs – (list) A list of input arrays, one per graph input.
- Returns:
(list) A list of outputs.
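Putting the functions above together, a minimal end-to-end inference sketch might look as follows. Apart from the documented methods, the names (`Graph`, the import path, the input shape) are assumptions for illustration:

```python
# Hypothetical end-to-end inference sketch: load, initialize, run, profile.
try:
    import numpy as np
    from aidge_export_tensorrt import Graph  # assumed module/class names
except ImportError:
    Graph = None  # Python lib (or numpy) not available; sketch only

if Graph is not None:
    graph = Graph()
    graph.device(0)
    graph.load("model.onnx")
    graph.initialize()

    # One entry per graph input; dtype and shape must match the model.
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    outputs = graph.run_sync([x])
    print(outputs[0].shape)

    # Average per-stimulus time over 100 iterations (async mode by default).
    graph.profile(100)
```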
Export function#
Plugin helper#
The TensorRT export allows you to define plugins which will be automatically used when loading the ONNX file.
The export defines a helper command to generate the template of a plugin, which you can then fill in.
Usage example:
python -m aidge_export_tensorrt.generate_plugin -n "test" -f "myExport"
This will create the plugin test in the folder myExport:
myExport
+--- plugins
| +--- test
| | +--- test_plugin.hpp
| | +--- test_plugin.cu
*--- ...