Recipes#

Recipes are relatively generic, built-in functionalities for manipulating a compute graph in Aidge. Some are built with Aidge's graph matching engine; do not hesitate to have a look at their source code to understand how they work and to build similar functionalities!

🚧 The list of recipes is still growing!

Adapt to backend#

Adapt a graph to the available kernels of a backend. The following transformations can be performed at the inputs and/or the outputs of operators:

  • Cast: change of data type;

  • Transpose: change of data format.

aidge_core.adapt_to_backend(graph_view: aidge_core.aidge_core.GraphView) None#

Adapt the graph to a specific backend.

Parameters:

graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

void Aidge::adaptToBackend(std::shared_ptr<GraphView> graph)#

Adapt a graph to the available kernels of a backend.

Parameters:

graph – Graph to manipulate
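The principle behind this adaptation can be sketched in plain Python (a toy model, not Aidge's actual API; the `SUPPORTED` table and `adapt` helper are hypothetical): a Cast node is inserted wherever a producer's output data type is not among the types supported by the consumer's kernel.

```python
# Toy illustration of backend adaptation: insert an explicit Cast
# node on every edge whose data type the consumer kernel cannot handle.
SUPPORTED = {"Conv": {"float32"}, "FC": {"float32", "float16"}}

def adapt(edges):
    """edges: (producer output dtype, consumer operator type) pairs."""
    out = []
    for dtype, op in edges:
        if dtype not in SUPPORTED[op]:
            out.append(("Cast", dtype, "float32"))  # explicit data type change
        out.append((op,))
    return out

print(adapt([("float16", "Conv"), ("float32", "FC")]))
# [('Cast', 'float16', 'float32'), ('Conv',), ('FC',)]
```

The same reasoning applies to data format: a Transpose would be inserted wherever the producer's layout does not match what the consumer's kernel expects.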

Constant folding#

Fold constant operators (like ONNX Simplifier).

bool Aidge::constantFolding(std::shared_ptr<GraphView> graph, bool constantShape = false)#

Retrieve the parts of the graph that can be pre-computed and replace them by a Producer.

Parameters:
  • graph – Graph in which to fold the constants

  • constantShape – If true, Shape operators are considered to be constant

Returns:

bool True if the graph has been modified
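The underlying idea can be illustrated with a toy expression tree in plain Python (not Aidge's API; `fold_constants` is a hypothetical helper): any node whose inputs are all constants is pre-computed, and the sub-graph is replaced by a single constant producer holding the result.

```python
# Toy illustration of constant folding on a nested ('op', lhs, rhs) tree:
# sub-trees made only of constants are pre-computed bottom-up.

def fold_constants(node):
    if not isinstance(node, tuple):          # leaf: constant or named input
        return node
    op, lhs, rhs = node
    lhs, rhs = fold_constants(lhs), fold_constants(rhs)
    if isinstance(lhs, (int, float)) and isinstance(rhs, (int, float)):
        return {"add": lhs + rhs, "mul": lhs * rhs}[op]  # pre-compute now
    return (op, lhs, rhs)                    # depends on a runtime input

# ("x" * (2 + 3)) -> the constant sub-tree (2 + 3) is folded to 5
print(fold_constants(("mul", "x", ("add", 2, 3))))  # ('mul', 'x', 5)
```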

Convert Conv to MatMul#

Convert Conv operators to Unfold (im2col operation) + MatMul + Reshape.

size_t Aidge::convToMatMul(std::shared_ptr<GraphView> graph)#

Replace Conv layers with MatMul operators.

Parameters:

graph – Graph to manipulate

Returns:

size_t Number of replacements
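The Unfold + MatMul + Reshape decomposition can be checked on a tiny example in plain Python (a toy illustration, not Aidge's API): im2col turns each receptive field into a flattened patch, so the convolution becomes one matrix multiplication with the flattened kernel.

```python
# Toy check that Conv == Unfold (im2col) + MatMul + Reshape, on a
# 3x3 single-channel input and one 2x2 kernel, stride 1, no padding.
x = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
w = [[1, 0],
     [0, -1]]

# Direct 2D convolution (cross-correlation), output is 2x2
def conv2d(x, w):
    return [[sum(w[i][j] * x[r + i][c + j] for i in range(2) for j in range(2))
             for c in range(2)] for r in range(2)]

# Unfold (im2col): one flattened 2x2 patch per output position
def unfold(x):
    return [[x[r + i][c + j] for i in range(2) for j in range(2)]
            for r in range(2) for c in range(2)]

# MatMul of the reshaped kernel [1, 4] with each patch, then Reshape to 2x2
w_flat = [w[i][j] for i in range(2) for j in range(2)]
flat = [sum(a * b for a, b in zip(w_flat, patch)) for patch in unfold(x)]
out = [flat[0:2], flat[2:4]]

assert out == conv2d(x, w)  # both paths compute the same convolution
```

The diagrams below show the same decomposition applied by the recipe on a three-convolution graph, with the actual tensor shapes involved.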

Input graph:

        %%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
flowchart TB

Producer_3("conv2_w\n<sub><em>(Producer#3)</em></sub>"):::producerCls
Conv_1("conv2\n<sub><em>(Conv#1)</em></sub>")
Conv_0("conv1\n<sub><em>(Conv#0)</em></sub>")
Producer_2("conv1_b\n<sub><em>(Producer#2)</em></sub>"):::producerCls
Producer_1("conv1_w\n<sub><em>(Producer#1)</em></sub>"):::producerCls
Producer_4("conv3_w\n<sub><em>(Producer#4)</em></sub>"):::producerCls
Conv_2("conv3\n<sub><em>(Conv#2)</em></sub>")
Producer_5("conv3_b\n<sub><em>(Producer#5)</em></sub>"):::producerCls
Producer_0("dataProvider\n<sub><em>(Producer#0)</em></sub>"):::producerCls_rootCls
Producer_3-->|"0 [7, 4, 3, 3]&rarr;1"|Conv_1
Conv_1-->|"0 [2, 7, 9, 20]&rarr;0"|Conv_2
Conv_0-->|"0 [2, 4, 11, 22]&rarr;0"|Conv_1
Producer_2-->|"0 [4]&rarr;2"|Conv_0
Producer_1-->|"0 [4, 3, 3, 3]&rarr;1"|Conv_0
Producer_4-->|"0 [10, 7, 1, 1]&rarr;1"|Conv_2
Producer_5-->|"0 [10]&rarr;2"|Conv_2
Producer_0-->|"0 [2, 3, 13, 24]&rarr;0"|Conv_0
input0((in#0)):::inputCls--->|"&rarr;2"|Conv_1
Conv_2--->|"0 [2, 10, 5, 10]&rarr;"|output0((out#0)):::outputCls
classDef inputCls fill:#afa
classDef outputCls fill:#ffa
classDef externalCls fill:#ccc
classDef producerCls fill:#ccf
classDef genericCls fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls stroke-width:5px
classDef rootCls stroke:#f00
classDef producerCls_rootCls stroke:#f00,fill:#ccf
classDef genericCls_rootCls stroke:#f00,fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls_rootCls stroke:#f00,stroke-width:5px
    

Output graph:

        %%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
flowchart TB

Producer_0("dataProvider\n<sub><em>(Producer#0)</em></sub>"):::producerCls_rootCls
MatMul_2("conv3_matmul\n<sub><em>(MatMul#2)</em></sub>")
Producer_7("conv3_reshape_shape_prod\n<sub><em>(Producer#7)</em></sub>"):::producerCls
Reshape_2("conv3_reshape\n<sub><em>(Reshape#2)</em></sub>")
Add_1("conv3_add\n<sub><em>(Add#1)</em></sub>")
Producer_8("conv3_b_reshape_0\n<sub><em>(Producer#8)</em></sub>"):::producerCls
Producer_1("conv1_w_reshape_0\n<sub><em>(Producer#1)</em></sub>"):::producerCls
Unfold_2("conv3_unfold\n<sub><em>(Unfold#2)</em></sub>")
Producer_3("conv1_b_reshape_0\n<sub><em>(Producer#3)</em></sub>"):::producerCls
Unfold_0("conv1_unfold\n<sub><em>(Unfold#0)</em></sub>")
MatMul_0("conv1_matmul\n<sub><em>(MatMul#0)</em></sub>")
Producer_2("conv1_reshape_shape_prod\n<sub><em>(Producer#2)</em></sub>"):::producerCls
Reshape_0("conv1_reshape\n<sub><em>(Reshape#0)</em></sub>")
Add_0("conv1_add\n<sub><em>(Add#0)</em></sub>")
Unfold_1("conv2_unfold\n<sub><em>(Unfold#1)</em></sub>")
MatMul_1("conv2_matmul\n<sub><em>(MatMul#1)</em></sub>")
Producer_5("conv2_reshape_shape_prod\n<sub><em>(Producer#5)</em></sub>"):::producerCls
Reshape_1("conv2_reshape\n<sub><em>(Reshape#1)</em></sub>")
Producer_4("conv2_w_reshape_0\n<sub><em>(Producer#4)</em></sub>"):::producerCls
Producer_6("conv3_w_reshape_0\n<sub><em>(Producer#6)</em></sub>"):::producerCls
Producer_0-->|"0 [2, 3, 13, 24]&rarr;0"|Unfold_0
MatMul_2-->|"0 [2, 10, 50]&rarr;0"|Reshape_2
Producer_7-->|"0 [4]&rarr;1"|Reshape_2
Reshape_2-->|"0 [2, 10, 5, 10]&rarr;0"|Add_1
Producer_8-->|"0 [1, 10, 1, 1]&rarr;1"|Add_1
Producer_1-->|"0 [4, 27]&rarr;0"|MatMul_0
Unfold_2-->|"0 [2, 7, 50]&rarr;1"|MatMul_2
Producer_3-->|"0 [1, 4, 1, 1]&rarr;1"|Add_0
Unfold_0-->|"0 [2, 27, 242]&rarr;1"|MatMul_0
MatMul_0-->|"0 [2, 4, 242]&rarr;0"|Reshape_0
Producer_2-->|"0 [4]&rarr;1"|Reshape_0
Reshape_0-->|"0 [2, 4, 11, 22]&rarr;0"|Add_0
Add_0-->|"0 [2, 4, 11, 22]&rarr;0"|Unfold_1
Unfold_1-->|"0 [2, 36, 180]&rarr;1"|MatMul_1
MatMul_1-->|"0 [2, 7, 180]&rarr;0"|Reshape_1
Producer_5-->|"0 [4]&rarr;1"|Reshape_1
Reshape_1-->|"0 [2, 7, 9, 20]&rarr;0"|Unfold_2
Producer_4-->|"0 [7, 36]&rarr;0"|MatMul_1
Producer_6-->|"0 [10, 7]&rarr;0"|MatMul_2
Add_1--->|"0 [2, 10, 5, 10]&rarr;"|output0((out#0)):::outputCls
classDef inputCls fill:#afa
classDef outputCls fill:#ffa
classDef externalCls fill:#ccc
classDef producerCls fill:#ccf
classDef genericCls fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls stroke-width:5px
classDef rootCls stroke:#f00
classDef producerCls_rootCls stroke:#f00,fill:#ccf
classDef genericCls_rootCls stroke:#f00,fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls_rootCls stroke:#f00,stroke-width:5px
    

Expand meta operators#

Expand meta operators, replacing them with their inner graph (flatten the graph).

aidge_core.expand_metaops(graph_view: aidge_core.aidge_core.GraphView, recursive: bool = False) None#

Flatten the graph by replacing the meta operators by their micro graph.

Parameters:
  • graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

  • recursive (bool) – If true, recursively replace meta operators until there are no more meta operators in the graph.

void Aidge::expandMetaOps(std::shared_ptr<GraphView> graph, bool recursive = false)#

Flatten the graph by replacing the meta operators by their micro graph.

Parameters:
  • graph – Graph on which we want to apply the recipe

  • recursive – If true, recursively replace meta operators until there are no more meta operators in the graph.
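The effect of the recursive flag can be sketched in plain Python (a toy model, not Aidge's API; the `MICRO` table and operator names are hypothetical): expansion splices each meta node's micro graph in its place, and with recursion it repeats until no meta operator remains.

```python
# Toy illustration of meta-operator expansion: each meta node is
# replaced by its micro graph; recursive=True repeats until fixpoint.
MICRO = {
    "PaddedConv": ["Pad", "Conv"],        # a meta op made of two ops
    "ConvBlock": ["PaddedConv", "ReLU"],  # a meta op containing a meta op
}

def expand(graph, recursive=False):
    out = []
    for node in graph:
        out.extend(MICRO.get(node, [node]))  # splice the micro graph in place
    if recursive and any(n in MICRO for n in out):
        return expand(out, recursive=True)
    return out

print(expand(["ConvBlock", "FC"]))                  # ['PaddedConv', 'ReLU', 'FC']
print(expand(["ConvBlock", "FC"], recursive=True))  # ['Pad', 'Conv', 'ReLU', 'FC']
```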

Explicit Cast Move#

Insert Cast and Move operators where needed, thus removing all implicit data type conversions and implicit data movements between backends.

void Aidge::explicitCastMove(std::shared_ptr<GraphView> graphView)#

Add Cast and Move operators where needed to ensure no conversion needs to be done at the Operator level.

Explicit Transpose#

Insert Transpose operators where needed to ensure no transposition needs to be done at the Operator level (thus removing all implicit data format conversion).

void Aidge::explicitTranspose(std::shared_ptr<GraphView> graphView)#

Add Transpose operators where needed to ensure no transposition needs to be done at the Operator level.

Fuse BatchNorm#

Fuse batch normalization with the preceding Conv or FC operator, if possible.

aidge_core.fuse_batchnorm(graph_view: aidge_core.aidge_core.GraphView) None#

Recipe to fuse a BatchNorm operator with the preceding Conv or FC operator.

Parameters:

graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

void Aidge::fuseBatchNorm(std::shared_ptr<GraphView> graphView)#

Fuse BatchNorm with Conv or FC Nodes. Ref: https://nenadmarkus.com/p/fusing-batchnorm-and-conv/.

Parameters:

graphView – Graph view to use graph matching on, in order to apply transformations.
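The folding itself is pure algebra, shown here per channel with scalars (a toy sketch with made-up values, not Aidge code): with s = gamma / sqrt(var + eps), BatchNorm applied to w*x + b equals a single affine layer with weight w*s and bias (b - mean)*s + beta, following the reference linked above.

```python
# Toy scalar check of BatchNorm folding into a preceding Conv/FC layer.
import math

w, b = 2.0, 1.0                   # Conv/FC weight and bias (one channel)
gamma, beta = 1.5, 0.5            # BatchNorm scale and shift
mean, var, eps = 0.3, 4.0, 1e-5   # BatchNorm running statistics

s = gamma / math.sqrt(var + eps)  # common per-channel scaling factor
w_fused = w * s                   # folded weight
b_fused = (b - mean) * s + beta   # folded bias

for x in (-1.0, 0.0, 2.5):
    bn_out = gamma * ((w * x + b) - mean) / math.sqrt(var + eps) + beta
    assert abs(bn_out - (w_fused * x + b_fused)) < 1e-12
```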

Fuse MatMul and Add to FC#

Fuse a MatMul operator, optionally followed by an Add operator, into a single FC operator.

aidge_core.matmul_to_fc(graph_view: aidge_core.aidge_core.GraphView) None#

Recipe to fuse MatMul and Add operators into an aidge_core.FC operator.

Parameters:

graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

void Aidge::matMulToFC(std::shared_ptr<GraphView> graphView)#

Merge MatMul and Add Nodes into an FC Node.

Parameters:

graphView – Graph view to use graph matching on, in order to apply transformations.
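The fusion is valid because the two operators together compute exactly what FC computes, as this plain-Python toy check illustrates (not Aidge code; `matmul` and `fc` are hypothetical helpers):

```python
# Toy check: MatMul followed by a bias Add == a single FC operator.
def matmul(x, w):                     # [m, k] x [k, n] -> [m, n]
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*w)]
            for row in x]

def fc(x, w, bias):                   # the fused operator
    return [[v + bb for v, bb in zip(row, bias)] for row in matmul(x, w)]

x = [[1, 2], [3, 4]]
w = [[1, 0], [0, 1]]
bias = [10, 20]

separate = [[v + bb for v, bb in zip(row, bias)] for row in matmul(x, w)]
assert separate == fc(x, w, bias)
print(fc(x, w, bias))  # [[11, 22], [13, 24]]
```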

Fuse to meta operator#

Fuse each sub-graph matching a query into a Meta Operator.

aidge_core.fuse_to_metaops(*args, **kwargs)#

Overloaded function.

  1. fuse_to_metaops(gm: aidge_core.aidge_core.SinglePassGraphMatching, query: str, type: str = '') -> int

    Fuse each sub-graph matching a query into a Meta Operator.

    Parameters:
      • gm (aidge_core.SinglePassGraphMatching) – SinglePassGraphMatching containing the graph to manipulate

      • query (str) – Sub-graph matching query

      • type (str, optional) – Type name of the resulting meta operators

    Returns:

    int Number of sub-graphs actually fused into a Meta Operator

  2. fuse_to_metaops(graph_view: aidge_core.aidge_core.GraphView, query: str, type: str = '') -> int

    Fuse each sub-graph matching a query into a Meta Operator.

    Parameters:
      • graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

      • query (str) – Sub-graph matching query

      • type (str, optional) – Type name of the resulting meta operators

    Returns:

    int Number of sub-graphs actually fused into a Meta Operator

size_t Aidge::fuseToMetaOps(SinglePassGraphMatching &gm, const std::string &query, const std::string &type = "")#

Fuse each sub-graph matching a query into a Meta Operator.

Parameters:
  • gm – SinglePassGraphMatching containing the graph to manipulate

  • query – Sub-graph matching query

  • type – Type name of the resulting meta operators

Returns:

size_t Number of replacements
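The effect of the fusion can be sketched on a linear chain of nodes in plain Python (a toy stand-in for the real graph-matching engine; the operator names and `fuse` helper are hypothetical): every occurrence of the matched pattern is replaced by one meta node of the requested type.

```python
# Toy illustration of fuse_to_metaops on a linear operator chain:
# every occurrence of `pattern` collapses into one meta node.
def fuse(ops, pattern, meta_type):
    out, i = [], 0
    while i < len(ops):
        if ops[i:i + len(pattern)] == pattern:
            out.append(meta_type)        # replace the matched sub-graph
            i += len(pattern)
        else:
            out.append(ops[i])
            i += 1
    return out

print(fuse(["Pad", "Conv", "ReLU", "Pad", "Conv"], ["Pad", "Conv"], "PaddedConv"))
# ['PaddedConv', 'ReLU', 'PaddedConv']
```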

MatMul tiling#

Tile any MatMul operator into several fixed-size matrix multiplications. For instance, for a MatMul of size 80x80 and a tiling of 16x16, this will tile the MatMul operator into 25 (5 by 5) MatMul operators of size 16x16, with Slice operators inserted at the inputs and Concat operators inserted at the outputs.

This is especially useful when matrix multiplication must be mapped to hardware with a fixed maximum operand size, such as a TPU (Tensor Processing Unit) or an MMA (Matrix Multiplication Accelerator). This recipe can be combined with the convToMatMul recipe in order to convert convolutions to matrix multiplications beforehand, and with the constantFolding recipe to fold sliced constant tensors.

void Aidge::matMulTiling(NodePtr matMul, const std::vector<DimSize_t> &maxDims)#

Tile any MatMul operator into several fixed-size matrix multiplications. For instance, for a MatMul of size 80x80 and a tiling of 16x16, this will tile the MatMul operator into 25 (5 by 5) MatMul operators of size 16x16, with Slice operators inserted at the inputs and Concat operators inserted at the outputs.

This is especially useful when matrix multiplication must be mapped to hardware with a fixed maximum operand size, such as a TPU (Tensor Processing Unit) or an MMA (Matrix Multiplication Accelerator). This recipe can be combined with the convToMatMul recipe in order to convert convolutions to matrix multiplications beforehand, and with the constantFolding recipe to fold sliced constant tensors.

Parameters:
  • matMul – MatMul operator to be tiled.

  • maxDims – Maximum output dimensions of the tiled MatMul operators.
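The Slice / MatMul / Concat equivalence can be checked in plain Python (a toy sketch, not Aidge code), tiling only the row dimension for brevity; tiling both dimensions of the 80x80 example with 16x16 tiles gives the 5 by 5 = 25 MatMuls described above.

```python
# Toy check that row-tiling a MatMul preserves its result exactly:
# Slice the rows of `a`, run one small MatMul per slice, Concat the outputs.
def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

M = K = N = 8          # full problem size (a small stand-in for 80)
T = 2                  # tile size along the output rows (stand-in for 16)
a = [[(i * K + j) % 5 for j in range(K)] for i in range(M)]
b = [[(i * N + j) % 7 for j in range(N)] for i in range(K)]

tiles = [matmul(a[r:r + T], b) for r in range(0, M, T)]   # one MatMul per slice
tiled = [row for tile in tiles for row in tile]           # Concat the outputs

assert tiled == matmul(a, b)   # tiling changes the graph, not the result
print(len(tiles), "tiled MatMuls replace one", f"{M}x{K} @ {K}x{N}", "MatMul")
```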

Initial graph:

        %%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
flowchart TB

MatMul_0("matmul1<br/><sub><em>(MatMul#0)</em></sub>"):::rootCls
Producer_1("w1<br/><sub><em>(Producer#1)</em></sub>"):::producerCls
Producer_0("dataProvider<br/><sub><em>(Producer#0)</em></sub>"):::producerCls
MatMul_0--->|"0 [2, 3, 80, 80]&rarr;"|output0((out#0)):::outputCls
Producer_1-->|"0 [2, 3, 80, 80]&rarr;1"|MatMul_0
Producer_0-->|"0 [2, 3, 80, 80]&rarr;0"|MatMul_0
classDef inputCls fill:#afa
classDef outputCls fill:#ffa
classDef externalCls fill:#ccc
classDef producerCls fill:#ccf
classDef genericCls fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls stroke-width:5px
classDef rootCls stroke:#f00
classDef producerCls_rootCls stroke:#f00,fill:#ccf
classDef genericCls_rootCls stroke:#f00,fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls_rootCls stroke:#f00,stroke-width:5px
    

Graph generated by a single step of the matMulTiling recipe (after the very first matrix multiplication split):

        %%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
flowchart TB

Producer_7(<em>Producer#7</em>):::producerCls
MatMul_1(<em>MatMul#1</em>)
Concat_0(<em>Concat#0</em>)
Producer_1(<em>Producer#1</em>):::producerCls
Producer_2(<em>Producer#2</em>):::producerCls
Producer_3(<em>Producer#3</em>):::producerCls
Producer_4(<em>Producer#4</em>):::producerCls
Producer_5(<em>Producer#5</em>):::producerCls
Producer_6(<em>Producer#6</em>):::producerCls
Identity_0(<em>Identity#0</em>):::rootCls
Slice_0(<em>Slice#0</em>)
Producer_0(<em>Producer#0</em>):::producerCls
MatMul_0(<em>MatMul#0</em>)
Identity_1(<em>Identity#1</em>)
Slice_1(<em>Slice#1</em>)
Producer_7-->|"0 [2]&rarr;4"|Slice_1
MatMul_1-->|"0 [2, 3, 64, 80]&rarr;1"|Concat_0
Producer_1-->|"0 [2]&rarr;2"|Slice_0
Producer_2-->|"0 [2]&rarr;3"|Slice_0
Producer_3-->|"0 [2]&rarr;4"|Slice_0
Producer_4-->|"0 [2]&rarr;1"|Slice_1
Producer_5-->|"0 [2]&rarr;2"|Slice_1
Producer_6-->|"0 [2]&rarr;3"|Slice_1
Identity_0-->|"0 [2, 3, 80, 80]&rarr;0"|Slice_0
Identity_0-->|"0 [2, 3, 80, 80]&rarr;0"|Slice_1
Slice_0-->|"0 [2, 3, 16, 80]&rarr;0"|MatMul_0
Producer_0-->|"0 [2]&rarr;1"|Slice_0
MatMul_0-->|"0 [2, 3, 16, 80]&rarr;0"|Concat_0
Identity_1-->|"0 [2, 3, 80, 80]&rarr;1"|MatMul_1
Identity_1-->|"0 [2, 3, 80, 80]&rarr;1"|MatMul_0
Slice_1-->|"0 [2, 3, 64, 80]&rarr;0"|MatMul_1
input0((in#0)):::inputCls--->|"&rarr;0[2, 3, 80, 80]"|Identity_0
input1((in#1)):::inputCls--->|"&rarr;0[2, 3, 80, 80]"|Identity_1
Concat_0--->|"0 [2, 3, 80, 80]&rarr;"|output0((out#0)):::outputCls
classDef inputCls fill:#afa
classDef outputCls fill:#ffa
classDef externalCls fill:#ccc
classDef producerCls fill:#ccf
classDef genericCls fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls stroke-width:5px
classDef rootCls stroke:#f00
classDef producerCls_rootCls stroke:#f00,fill:#ccf
classDef genericCls_rootCls stroke:#f00,fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls_rootCls stroke:#f00,stroke-width:5px
    

Remove Dropout#

Remove Dropout operators.

size_t Aidge::removeDropout(std::shared_ptr<GraphView> graphView)#

Remove Dropout Node.

Parameters:

graphView – Graph view to use graph matching on, in order to apply transformations.

Returns:

size_t Number of Dropout nodes removed

Remove Flatten#

Remove Flatten operators.

aidge_core.remove_flatten(graph_view: aidge_core.aidge_core.GraphView) None#

Recipe to remove a Flatten operator if it is followed by a FC or a MatMul operator. The recipe can remove multiple Flatten operators if they appear one after the other.

Parameters:

graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe.

void Aidge::removeFlatten(std::shared_ptr<GraphView> graphView)#

Remove Flatten operators before FC Nodes.

Parameters:

graphView – Graph view to use graph matching on, in order to apply transformations.
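Why this is safe, and why chained Flatten operators can all go, is easy to see on a linear chain in plain Python (a toy sketch, not Aidge code; `remove_flatten` is a hypothetical helper): FC flattens its input implicitly, so any run of Flatten nodes feeding an FC or MatMul is redundant.

```python
# Toy illustration of the removeFlatten recipe on a linear operator chain:
# drop every Flatten whose next non-Flatten successor is an FC or MatMul.
def remove_flatten(ops):
    out = []
    for i, op in enumerate(ops):
        nxt = next((o for o in ops[i + 1:] if o != "Flatten"), None)
        if op == "Flatten" and nxt in ("FC", "MatMul"):
            continue                      # redundant: FC flattens implicitly
        out.append(op)
    return out

print(remove_flatten(["Conv", "Flatten", "Flatten", "FC"]))  # ['Conv', 'FC']
```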