Aidge ONNX tutorial#


In this tutorial, we will see how to extensively use the Aidge ONNX module.

The following points will be covered:

  • How to load an ONNX;

  • How to add the support for an ONNX operator;

  • How to support an unsupported operator for export purposes;

  • How to add an implementation to an unknown operator;

  • How to export a GraphView to ONNX.

Install requirements#

Ensure that the Aidge modules are properly installed in the current environment. If it is the case, the following setup steps can be skipped.
Note: When running this notebook on Binder, all required components are pre-installed.
[ ]:
%pip install aidge-core \
    aidge-backend-cpu \
    aidge-onnx \
    aidge-model-explorer

Setting up the notebook#

Import required modules#

[ ]:
import aidge_core
import aidge_onnx
import aidge_backend_cpu  # Required for Producer implementation
import numpy as np  # Required to load data

Retrieve the ONNX model#

In order to run this tutorial, we will use a simple ONNX model composed of a single Swish operator. This operator is not part of the ONNX standard and is often decomposed into multiple standard operators.

If you don’t have git-lfs, you can generate the model with the following code snippet:

[ ]:
import onnx
from onnx import TensorProto, helper


def generate_swish(filename):

    onnx_inputs = [helper.make_tensor_value_info("data", TensorProto.FLOAT, [1, 10])]
    onnx_outputs = [
        helper.make_tensor_value_info("swish_out", TensorProto.FLOAT, [1, 10])
    ]
    onnx_nodes = [
        helper.make_node(
            name="Swish0",
            op_type="Swish",
            inputs=["data"],
            outputs=["swish_out"],
        )
    ]
    onnx_nodes[-1].attribute.append(helper.make_attribute("beta", [1.0] * 10))

    # Create the graph (GraphProto)
    onnx_graph = onnx.helper.make_graph(
        nodes=onnx_nodes,
        initializer=[],
        name=filename,
        inputs=onnx_inputs,
        outputs=onnx_outputs,
    )
    # Create the model (ModelProto)
    onnx_model = onnx.helper.make_model(
        onnx_graph, producer_name="aidge_onnx", producer_version="0.2.0"
    )

    onnx.save(onnx_model, filename)


file_name = "test_swish.onnx"

generate_swish(file_name)

Importing an ONNX#

Importing an ONNX model with Aidge is done with the function aidge_onnx.load_onnx().

[ ]:
graph = aidge_onnx.load_onnx(file_name)

The Swish operator is not supported and is thus loaded as a GenericOperator. This mechanism allows loading a graph containing unsupported operators into the framework.

The aidge_onnx library provides a coverage report tool to check how well the loaded graph is supported:

[ ]:
if not aidge_onnx.has_native_coverage(graph):
    print("The graph is not fully supported by Aidge!\n")
aidge_onnx.native_coverage_report(graph)

However, this does not mean we cannot work with this graph!

Working with Generic Operators#

Indeed, using the Python API, we can work with a GenericOperator.

For this, we will begin by retrieving the operator:

[ ]:
swish_node = graph.get_nodes().pop()  # get_nodes() returns a set
swish_op = swish_node.get_operator()  # Retrieving Operator from Node

Computing output dimensions#

In order to generate a scheduling, we need to specify how the operator will modify the data. This is required so that Aidge can propagate input/output dimensions.

Generating a scheduling is necessary to generate an export or to run inference within the framework.

We can set a function to compute the dimensions using the set_forward_dims() method.

In our case, the Swish function does not modify the dimensions so we can just send the same dimensions as the input. For this, we will create an identity lambda function.

[ ]:
swish_op.set_forward_dims(lambda x: x)

Providing an implementation#

To perform inference, we need to provide an implementation; specifically, we must define the forward function.

The Swish function is defined as: \(swish(x)={{x}\over{1+e^{-\beta x}}}\).

Thus, we can create a simple implementation using the Numpy library:

[ ]:
class SwishImpl(aidge_core.OperatorImpl):
    def forward(self):
        data_input = np.array(self.get_operator().get_input(0))
        beta = np.array(
            self.get_operator().attr.get_attr("beta")
        )  # Attribute name is the same as the one in the ONNX
        output = data_input / (1 + np.exp(-data_input * beta))
        self.get_operator().set_output(
            0, aidge_core.Tensor(output)
        )  # setting operator output

This implementation can then be set using:

[ ]:
# Provide an implementation for the cpu backend only:
# aidge_core.register_GenericOperatorOp(["cpu", "Swish"], SwishImpl)

# Provide a generic implementation, usable for any backend:
swish_op.set_impl(SwishImpl(swish_op))

Once this is done, we can run an inference.

Let’s first create an input:

[ ]:
numpy_tensor = np.random.randn(1, 10).astype(np.float32)
in_tensor = aidge_core.Tensor(numpy_tensor)
print(f"Random input:\n{numpy_tensor}")

Then we can create a scheduler and run the inference:

[ ]:
graph.compile("cpu", aidge_core.dtype.float32, dims=[[1, 10]])

scheduler = aidge_core.SequentialScheduler(graph)
scheduler.forward(data=[in_tensor])

for outNode in graph.get_output_nodes():
    output_aidge = np.array(outNode.get_operator().get_output(0))
    print("Aidge prediction = ", output_aidge)
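As a sanity check, the printed prediction can be compared against a plain NumPy reference of the same formula. Below is a standalone sketch using β = 1, matching the beta attribute in the generated ONNX:

```python
import numpy as np


def swish_ref(x, beta=1.0):
    # Reference Swish: x / (1 + exp(-beta * x))
    return x / (1.0 + np.exp(-beta * x))


x = np.array([[-2.0, -1.0, 0.0, 1.0, 2.0]], dtype=np.float32)
print(swish_ref(x))  # swish(0) is exactly 0
```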

Updating ONNX import#

We have seen how to handle a GenericOperator in order to generate an export or run inference. However, this is not the only approach available when dealing with an operator that is not natively supported.

As mentioned earlier, the Swish function can be decomposed into atomic operations (here Mul, Pow, Add, and Div). In this section, we will explore how to interact with the aidge_onnx library to add support for new operators. This section will also showcase the use of MetaNodes.

Creating a MetaNode#

The first step is to reproduce the Swish operation using a MetaOperator.

To achieve this, we need to create a Producer node for each constant: e, 1, and -beta. Then, we define each operation: Mul, Pow, Add, and Div. Finally, we create a GraphView, which will be embedded within a MetaOperator.

Note: The Swish computation graph begins with a branch split. To ensure a single input to the graph, we use the Identity operator.

[ ]:
from math import exp


def gen_swish_metaop(nb_chan, name):

    # Declaring constant values
    e_prod = aidge_core.Producer(
        aidge_core.Tensor(np.array([exp(1)] * nb_chan, dtype=np.float32)), "exp"
    )
    one_prod = aidge_core.Producer(
        aidge_core.Tensor(np.array([1] * nb_chan, dtype=np.float32)), "one"
    )
    beta = 0.1
    beta_prod = aidge_core.Producer(
        aidge_core.Tensor(np.array([-beta] * nb_chan, dtype=np.float32)), "beta"
    )

    # Declaring operators
    mul_op = aidge_core.Mul(name=f"{name}_MUL")
    pow_op = aidge_core.Pow(name=f"{name}_POW")
    add_op = aidge_core.Add(name=f"{name}_ADD")
    div_op = aidge_core.Div(name=f"{name}_DIV")
    input_op = aidge_core.Identity(f"{name}_Input")

    # Declaring Connectors
    x = aidge_core.Connector(input_op)
    b = aidge_core.Connector(beta_prod)
    e = aidge_core.Connector(e_prod)
    o = aidge_core.Connector(one_prod)

    # Graph creation using functional declaration
    y = div_op(x, add_op(pow_op(e, mul_op(x, b)), o))
    swish_micro_graph = aidge_core.generate_graph([y])

    # Embedding GraphView in a MetaOperator
    swish_node = aidge_core.meta_operator("Swish", swish_micro_graph, name=name)
    return swish_node


# Testing Swish metaop
swish_node = gen_swish_metaop(10, "Test")

We can then visualize the MicroGraph of the MetaOperator Swish using Aidge Model Explorer:

[ ]:
import aidge_model_explorer

aidge_model_explorer.visualize(
    swish_node.get_operator().get_micro_graph(), "swish_micro", embed=True
)

We have successfully created a function which can create a MetaOperator for the Swish function! The next step is to register this function so that it is called by the ONNX import library.

Registering new node import#

Registering a new node to the ONNX import library can be easily done using the decorator function @aidge_onnx.node_import.auto_register_import.

This decorator registers the function to the dictionary of import functions aidge_onnx.node_converter.ONNX_NODE_CONVERTER_. Note that the key you should use is the ONNX name of the operator in lowercase.

[ ]:
NB_CHAN = 10  # TODO: Find a way to infer nb channel later

from onnx import NodeProto, ValueInfoProto


@aidge_onnx.node_import.auto_register_import("swish")
def import_swish(
    onnx_node: NodeProto,
    input_nodes: list[tuple[aidge_core.Node, int]],
    opset: int,
    inputs_tensor_info: list[ValueInfoProto | None],
):
    node_name = onnx_node.output[0]
    return gen_swish_metaop(NB_CHAN, node_name)

Once this is done, you can use aidge_onnx.node_import.supported_operators() and check that Swish is part of the supported operators:

[ ]:
if "swish" in aidge_onnx.node_import.supported_operators():
    print("Swish has been well registered to aidge_onnx import.")
else:
    assert False, "Something went wrong: registration failed."

Since Swish is now supported, it is possible to load the ONNX file again:

[ ]:
supported_graph = aidge_onnx.load_onnx(file_name)

Since we have decomposed the Swish operation into atomic operators supported by Aidge, we do not need to provide a custom implementation. Instead, we can simply use the aidge_backend_cpu implementation to run inference:

[ ]:
data_input = aidge_core.Producer(
    aidge_core.Tensor(np.arange(NB_CHAN, dtype=np.float32) + 1.0), "data"
)

data_input.add_child(supported_graph)
supported_graph.add(data_input)

data_input.get_operator().set_datatype(aidge_core.dtype.float32)

data_input.get_operator().set_backend("cpu")

supported_graph.set_datatype(aidge_core.dtype.float32)
supported_graph.set_backend("cpu")

# Create SCHEDULER
scheduler = aidge_core.SequentialScheduler(supported_graph)

# Run inference !
scheduler.forward()

for outNode in supported_graph.get_output_nodes():
    output_aidge = np.array(outNode.get_operator().get_output(0))
    print("MetaOperator output:")
    print(output_aidge)

x = np.arange(NB_CHAN, dtype=np.float32) + 1.0

beta = 0.1
print("Reference output:")

print(x / (1.0 + np.exp(-beta * x)))
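Note that the micro-graph computes e^(-βx) as Pow(e, Mul(x, -β)) rather than with an Exp node; a quick NumPy sketch confirms that this decomposition matches the closed-form Swish (β = 0.1, as in gen_swish_metaop):

```python
from math import exp

import numpy as np

beta = 0.1
x = np.arange(10, dtype=np.float32) + 1.0

# Micro-graph decomposition: Div(x, Add(Pow(e, Mul(x, -beta)), 1))
decomposed = x / (np.float32(exp(1)) ** (x * np.float32(-beta)) + 1.0)
closed_form = x / (1.0 + np.exp(-beta * x))

print(np.allclose(decomposed, closed_form))  # True
```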

Export ONNX#

In this section, we will see how to export a graph to ONNX.

Let’s consider this simple graph:

[ ]:
g = aidge_core.sequential([aidge_core.ReLU()])

In order to export it to ONNX, you simply have to call aidge_onnx.export_onnx():

[ ]:
g.forward_dims(
    [[1, 3, 24, 24]]
)  # <- Mandatory forward dims since we don't provide manually inputs and outputs dims
aidge_onnx.export_onnx(
    g,
    "test_save.onnx",
    inputs_dims=None,
    outputs_dims=None,
    enable_custom_op=False,
    opset=None,
    ir_version=None,
)

Export of generic operator#

When loading a graph containing unsupported operators, Aidge loads them as GenericOperators, as we have seen in the previous section.

Since a GenericOperator is simply a data class that stores the information from the ONNX, Aidge can export a GenericOperator back to ONNX!

Let’s export an operator that is not supported by Aidge but is defined in the ONNX standard: Trilu.

This operator is simple: it takes one input, produces one output, and has an optional attribute.

Let’s build a graph that corresponds to that operator:

[ ]:
trilu_node = aidge_core.GenericOperator("Trilu", 1, 0, 1, name="Trilu0")
trilu_node.get_operator().attr.add_attr("upper", 1)
g = aidge_core.sequential([trilu_node])

Since we cannot forward the dimensions of this graph (it contains a GenericOperator), we can set the input and output dimensions manually to force the export!

[ ]:
aidge_onnx.export_onnx(
    g,
    "aidge_trilu.onnx",
    inputs_dims={"Trilu0": [[1, 10]]},
    outputs_dims={"Trilu0": [[1, 10]]},
    enable_custom_op=False,
    opset=None,
    ir_version=None,
)

The ONNX is valid, meaning that we exported our generic operator into an ONNX that is compatible with the ONNX standard!

Extending the ONNX standard#

If you are working with operators that do not exist in the ONNX standard, you can export them by creating a new ONNX domain.

[ ]:
g = aidge_core.sequential(
    [aidge_core.GenericOperator("MyAwesomeOperator", 1, 0, 1, name="name")]
)

If we try to export this operator without enabling custom operators, the Aidge export function will fail:

[ ]:
try:
    aidge_onnx.export_onnx(
        g,
        "test_save.onnx",
        inputs_dims={"name": [[1, 10]]},
        outputs_dims={"name": [[1, 10]]},
        enable_custom_op=False,  # Need to enable custom op!
        opset=None,
        ir_version=None,
    )
except RuntimeError:
    print(
        "Aidge failed to export this graph as it is not compatible with the ONNX standard!\n"
        "MyAwesomeOperator does not exist in the default ONNX domain."
    )

Setting enable_custom_op to True allows exporting this operator by extending ONNX with the _AIDGE_DOMAIN:

[ ]:
aidge_onnx.export_onnx(
    g,
    "test_save.onnx",
    inputs_dims={"name": [[1, 10]]},
    outputs_dims={"name": [[1, 10]]},
    enable_custom_op=True,  # Need to enable custom op!
    opset=None,
    ir_version=None,
)