Recipes#

Recipes are relatively generic, built-in functionalities for manipulating a compute graph in Aidge. Some are built with Aidge’s graph matching engine; do not hesitate to have a look at their source code to understand how they work and to build similar functionalities!

⚠️ The recipe list is auto-generated for easier maintenance but still lacks proper categorization and naming conventions.

Graph Recipes#

adapt_fc_params_format#

aidge_core.adapt_fc_params_format(graph_view: aidge_core.aidge_core.GraphView, constant_fold: bool = True) None#

Adapt the format of the parameters of an FC layer to be compatible with the input format, i.e. if the input is in NHWC format, the weights will be adapted to NHWC as well.

Parameters:
  • graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

  • constant_fold (bool, optional) – If true, the adapted Producer will be constant folded

void Aidge::adaptFCParamsFormat(std::shared_ptr<GraphView> graph, bool constantFold = true)#

Adapt the format of the parameters of an FC layer to be compatible with the input format, i.e. if the input is in NHWC format, the weights will be adapted to NHWC as well.

Parameters:
  • graph – Graph to adapt

  • constantFold – If true, the adapted Producer will be constant folded
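The weight adaptation can be pictured with plain numpy (this is an illustration of the underlying permutation, not the Aidge API; shapes are arbitrary example values): when an FC layer consumes a flattened feature map, switching the input from NCHW to NHWC order requires permuting the weight columns the same way.

```python
import numpy as np

# Hypothetical sizes for illustration only.
C, H, W, out_features = 3, 2, 2, 4
x_nchw = np.arange(C * H * W, dtype=np.float32).reshape(C, H, W)
weight = np.random.default_rng(0).standard_normal((out_features, C * H * W)).astype(np.float32)

# Reference: FC applied to the NCHW-flattened input.
y_ref = weight @ x_nchw.reshape(-1)

# If the input arrives flattened in NHWC order instead, the weight
# columns must be permuted accordingly -- the essence of this recipe.
perm = np.arange(C * H * W).reshape(C, H, W).transpose(1, 2, 0).reshape(-1)
weight_nhwc = weight[:, perm]
x_nhwc = x_nchw.transpose(1, 2, 0)

y_nhwc = weight_nhwc @ x_nhwc.reshape(-1)
assert np.allclose(y_ref, y_nhwc)  # same FC output in both layouts
```

With `constant_fold=True`, the permuted weight Producer is additionally folded into a constant so the permutation has no runtime cost.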

adapt_to_backend#

aidge_core.adapt_to_backend(graph_view: aidge_core.aidge_core.GraphView) None#

Adapt the graph to a specific backend.

Parameters:

graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

void Aidge::adaptToBackend(std::shared_ptr<GraphView> graph)#

Adapt a graph to the available kernels of a backend.

Parameters:

graph – Graph to manipulate

constant_folding#

aidge_core.constant_folding(graph_view: aidge_core.aidge_core.GraphView, constant_shape: bool = False) bool#

Retrieve the parts of the graph that can be pre-computed and replace them with Producers.

Parameters:
  • graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

  • constant_shape (bool, optional) – If true, Shape operators are considered constant, default=False

Returns:

True if the graph has been modified

Return type:

bool

bool Aidge::constantFolding(std::shared_ptr<GraphView> graph, bool constantShape = false)#

Retrieve the parts of the graph that can be pre-computed and replace them with Producers.

Parameters:
  • graph – Graph to fold the constant

  • constant_shape – If true, Shape operators are considered to be constant

Returns:

bool True if the graph has been modified
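The folding idea can be sketched on a toy expression graph in plain Python (this is a conceptual illustration, not the Aidge data structures): any node whose inputs are all constant is evaluated once and replaced by a constant "Producer".

```python
import operator

# Toy graph: name -> ("const", value) | ("input",) | (op, *input_names)
graph = {
    "a": ("const", 2.0),
    "b": ("const", 3.0),
    "c": ("mul", "a", "b"),   # foldable: both inputs are constant
    "x": ("input",),
    "y": ("add", "x", "c"),   # not foldable: depends on a runtime input
}

OPS = {"mul": operator.mul, "add": operator.add}

def fold_constants(g):
    changed = True
    while changed:
        changed = False
        for name, node in list(g.items()):
            if node[0] in OPS and all(g[i][0] == "const" for i in node[1:]):
                value = OPS[node[0]](*(g[i][1] for i in node[1:]))
                g[name] = ("const", value)   # replace the subgraph by a Producer
                changed = True
    return g

fold_constants(graph)
print(graph["c"])   # ('const', 6.0): pre-computed at fold time
print(graph["y"])   # still symbolic: depends on the runtime input "x"
```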

constant_shape_folding#

aidge_core.constant_shape_folding(graph_view: aidge_core.aidge_core.GraphView, dims: collections.abc.Sequence[collections.abc.Sequence[SupportsInt]] = []) bool#

Retrieve the parts of the graph that can be pre-computed by setting Shape operators as constant, and replace them with Producers.

Parameters:
  • graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

  • dims (Sequence[Sequence[int]], optional) – Dimensions to use when folding the Shape operators, default=[]

Returns:

True if the graph has been modified

Return type:

bool

bool Aidge::constantShapeFolding(std::shared_ptr<GraphView> graph, const std::vector<std::vector<DimSize_t>> &dims = {})#

Retrieve the parts of the graph that can be pre-computed by setting Shape operators as constant, and replace them with Producers.

Parameters:

graph – Graph to fold the constant

Returns:

bool True if the graph has been modified

conv_to_matMul#

aidge_core.conv_to_matMul(graph: aidge_core.aidge_core.GraphView) int#

Convert Conv operators to Unfold (im2col operation) + MatMul + Reshape.

Input graph:

        %%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
flowchart TB

      Producer_3("conv2_w\n<sub><em>(Producer#3)</em></sub>"):::producerCls
      Conv_1("conv2\n<sub><em>(Conv#1)</em></sub>")
      Conv_0("conv1\n<sub><em>(Conv#0)</em></sub>")
      Producer_2("conv1_b\n<sub><em>(Producer#2)</em></sub>"):::producerCls
      Producer_1("conv1_w\n<sub><em>(Producer#1)</em></sub>"):::producerCls
      Producer_4("conv3_w\n<sub><em>(Producer#4)</em></sub>"):::producerCls
      Conv_2("conv3\n<sub><em>(Conv#2)</em></sub>")
      Producer_5("conv3_b\n<sub><em>(Producer#5)</em></sub>"):::producerCls
      Producer_0("dataProvider\n<sub><em>(Producer#0)</em></sub>"):::producerCls_rootCls
      Producer_3-->|"0 [7, 4, 3, 3]&rarr;1"|Conv_1
      Conv_1-->|"0 [2, 7, 9, 20]&rarr;0"|Conv_2
      Conv_0-->|"0 [2, 4, 11, 22]&rarr;0"|Conv_1
      Producer_2-->|"0 [4]&rarr;2"|Conv_0
      Producer_1-->|"0 [4, 3, 3, 3]&rarr;1"|Conv_0
      Producer_4-->|"0 [10, 7, 1, 1]&rarr;1"|Conv_2
      Producer_5-->|"0 [10]&rarr;2"|Conv_2
      Producer_0-->|"0 [2, 3, 13, 24]&rarr;0"|Conv_0
      input0((in#0)):::inputCls--->|"&rarr;2"|Conv_1
      Conv_2--->|"0 [2, 10, 5, 10]&rarr;"|output0((out#0)):::outputCls
      classDef inputCls fill:#afa
      classDef outputCls fill:#ffa
      classDef externalCls fill:#ccc
      classDef producerCls fill:#ccf
      classDef genericCls fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
      classDef metaCls stroke-width:5px
      classDef rootCls stroke:#f00
      classDef producerCls_rootCls stroke:#f00,fill:#ccf
      classDef genericCls_rootCls stroke:#f00,fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
      classDef metaCls_rootCls stroke:#f00,stroke-width:5px
    

Output graph:

        %%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
flowchart TB

      Producer_0("dataProvider\n<sub><em>(Producer#0)</em></sub>"):::producerCls_rootCls
      MatMul_2("conv3_matmul\n<sub><em>(MatMul#2)</em></sub>")
      Producer_7("conv3_reshape_shape_prod\n<sub><em>(Producer#7)</em></sub>"):::producerCls
      Reshape_2("conv3_reshape\n<sub><em>(Reshape#2)</em></sub>")
      Add_1("conv3_add\n<sub><em>(Add#1)</em></sub>")
      Producer_8("conv3_b_reshape_0\n<sub><em>(Producer#8)</em></sub>"):::producerCls
      Producer_1("conv1_w_reshape_0\n<sub><em>(Producer#1)</em></sub>"):::producerCls
      Unfold_2("conv3_unfold\n<sub><em>(Unfold#2)</em></sub>")
      Producer_3("conv1_b_reshape_0\n<sub><em>(Producer#3)</em></sub>"):::producerCls
      Unfold_0("conv1_unfold\n<sub><em>(Unfold#0)</em></sub>")
      MatMul_0("conv1_matmul\n<sub><em>(MatMul#0)</em></sub>")
      Producer_2("conv1_reshape_shape_prod\n<sub><em>(Producer#2)</em></sub>"):::producerCls
      Reshape_0("conv1_reshape\n<sub><em>(Reshape#0)</em></sub>")
      Add_0("conv1_add\n<sub><em>(Add#0)</em></sub>")
      Unfold_1("conv2_unfold\n<sub><em>(Unfold#1)</em></sub>")
      MatMul_1("conv2_matmul\n<sub><em>(MatMul#1)</em></sub>")
      Producer_5("conv2_reshape_shape_prod\n<sub><em>(Producer#5)</em></sub>"):::producerCls
      Reshape_1("conv2_reshape\n<sub><em>(Reshape#1)</em></sub>")
      Producer_4("conv2_w_reshape_0\n<sub><em>(Producer#4)</em></sub>"):::producerCls
      Producer_6("conv3_w_reshape_0\n<sub><em>(Producer#6)</em></sub>"):::producerCls
      Producer_0-->|"0 [2, 3, 13, 24]&rarr;0"|Unfold_0
      MatMul_2-->|"0 [2, 10, 50]&rarr;0"|Reshape_2
      Producer_7-->|"0 [4]&rarr;1"|Reshape_2
      Reshape_2-->|"0 [2, 10, 5, 10]&rarr;0"|Add_1
      Producer_8-->|"0 [1, 10, 1, 1]&rarr;1"|Add_1
      Producer_1-->|"0 [4, 27]&rarr;0"|MatMul_0
      Unfold_2-->|"0 [2, 7, 50]&rarr;1"|MatMul_2
      Producer_3-->|"0 [1, 4, 1, 1]&rarr;1"|Add_0
      Unfold_0-->|"0 [2, 27, 242]&rarr;1"|MatMul_0
      MatMul_0-->|"0 [2, 4, 242]&rarr;0"|Reshape_0
      Producer_2-->|"0 [4]&rarr;1"|Reshape_0
      Reshape_0-->|"0 [2, 4, 11, 22]&rarr;0"|Add_0
      Add_0-->|"0 [2, 4, 11, 22]&rarr;0"|Unfold_1
      Unfold_1-->|"0 [2, 36, 180]&rarr;1"|MatMul_1
      MatMul_1-->|"0 [2, 7, 180]&rarr;0"|Reshape_1
      Producer_5-->|"0 [4]&rarr;1"|Reshape_1
      Reshape_1-->|"0 [2, 7, 9, 20]&rarr;0"|Unfold_2
      Producer_4-->|"0 [7, 36]&rarr;0"|MatMul_1
      Producer_6-->|"0 [10, 7]&rarr;0"|MatMul_2
      Add_1--->|"0 [2, 10, 5, 10]&rarr;"|output0((out#0)):::outputCls
      classDef inputCls fill:#afa
      classDef outputCls fill:#ffa
      classDef externalCls fill:#ccc
      classDef producerCls fill:#ccf
      classDef genericCls fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
      classDef metaCls stroke-width:5px
      classDef rootCls stroke:#f00
      classDef producerCls_rootCls stroke:#f00,fill:#ccf
      classDef genericCls_rootCls stroke:#f00,fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
      classDef metaCls_rootCls stroke:#f00,stroke-width:5px
    
Parameters:

graph (aidge_core.GraphView) – Graph to manipulate

size_t Aidge::convToMatMul(std::shared_ptr<GraphView> graph)#

Replace Conv layers with MatMul operations.

Parameters:

graph – Graph to manipulate

Returns:

size_t Number of replacements
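The Unfold (im2col) + MatMul + Reshape equivalence can be checked with numpy, using the shapes of Conv#0 from the example graph above (3x3 kernel, no padding). This is a conceptual sketch, not the Aidge implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 13, 24)).astype(np.float32)   # N,C,H,W input
w = rng.standard_normal((4, 3, 3, 3)).astype(np.float32)     # O,C,kH,kW weights
N, C, H, W = x.shape
O, _, kH, kW = w.shape
oH, oW = H - kH + 1, W - kW + 1

# Unfold (im2col): gather every kHxkW patch into a column.
cols = np.empty((N, C * kH * kW, oH * oW), dtype=np.float32)
for i in range(oH):
    for j in range(oW):
        cols[:, :, i * oW + j] = x[:, :, i:i + kH, j:j + kW].reshape(N, -1)

# MatMul with flattened weights, then Reshape back to N,O,oH,oW.
y_matmul = (w.reshape(O, -1) @ cols).reshape(N, O, oH, oW)

# Reference: direct convolution via explicit patch products.
y_ref = np.zeros((N, O, oH, oW), dtype=np.float32)
for i in range(oH):
    for j in range(oW):
        patch = x[:, :, i:i + kH, j:j + kW].reshape(N, -1)
        y_ref[:, :, i, j] = patch @ w.reshape(O, -1).T

assert np.allclose(y_matmul, y_ref, atol=1e-4)
print(y_matmul.shape)  # (2, 4, 11, 22), matching the Reshape output above
```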

expand_metaops#

aidge_core.expand_metaops(graph_view: aidge_core.aidge_core.GraphView, recursive: bool = False, name_format: str = '{0}', unique_name: bool = False) None#

Flatten the graph by replacing the meta operators with their micro graphs.

Parameters:
  • graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

  • recursive (bool) – If true, recursively replace meta operators until there are no more meta operators in the graph.

  • name_format (str) – The formatting string to be used with fmt::format() for naming the nodes from the meta-op (inner nodes) in the expanded graph. The usable positional arguments are the following: {0} inner node name, {1} inner node type, {2} meta-node name, {3} meta-node type. Default is {0} (inner node name).

  • unique_name (bool) – If True, ensure that the names of the expanded nodes are unique in the expanded graph.

void Aidge::expandMetaOps(std::shared_ptr<GraphView> graph, bool recursive = false, const std::string &nameFormat = "{0}", bool uniqueName = false)#

Flatten the graph by replacing the meta operators with their micro graphs.

The usable positional arguments are the following: {0} inner node name, {1} inner node type, {2} meta-node name, {3} meta-node type. Default is {0} (inner node name).

Parameters:
  • recursive – If true, recursively replace meta operators until there are no more meta operators in the graph.

  • nameFormat – The formatting string to be used with fmt::format() for naming the nodes from the meta-op (inner nodes) in the expanded graph.

  • uniqueName – If true, ensure that the names of the expanded nodes are unique in the expanded graph.
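The `name_format` string uses fmt-style positional arguments, which Python's `str.format` happens to accept as well, so the naming effect can be previewed without Aidge (the node names below are hypothetical examples):

```python
# Positional arguments, in order: {0} inner node name, {1} inner node type,
# {2} meta-node name, {3} meta-node type.
inner_name, inner_type = "conv1", "Conv"       # hypothetical inner node
meta_name, meta_type = "block1", "ConvBlock"   # hypothetical meta-node

print("{0}".format(inner_name, inner_type, meta_name, meta_type))      # conv1
print("{2}_{0}".format(inner_name, inner_type, meta_name, meta_type))  # block1_conv1
print("{2}.{1}".format(inner_name, inner_type, meta_name, meta_type))  # block1.Conv
```

A format like `{2}_{0}` is useful to keep expanded node names traceable back to their meta-operator.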

explicit_cast_move#

aidge_core.explicit_cast_move(graph_view: aidge_core.aidge_core.GraphView) None#

Insert Cast and Move operators where needed (thus removing all implicit data type conversion and backend change data movement).

Parameters:

graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

void Aidge::explicitCastMove(std::shared_ptr<GraphView> graphView)#

Add Cast and Move operators where needed to ensure no conversion needs to be done at the Operator level.

explicit_transpose#

aidge_core.explicit_transpose(graph_view: aidge_core.aidge_core.GraphView) None#

Insert Transpose operators where needed to ensure no transposition needs to be done at the Operator level (thus removing all implicit data format conversion).

Parameters:

graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

void Aidge::explicitTranspose(std::shared_ptr<GraphView> graphView)#

Add Transpose operators where needed to ensure no transposition needs to be done at the Operator level.

fibnlr#

aidge_core.fibnlr(graph_view: aidge_core.aidge_core.GraphView, provider: aidge_core.aidge_core.DataProvider, params: aidge_core.aidge_core.fibnlr_Parameters) set[aidge_core.aidge_core.Node]#

Fast Input-Based Non-Linearity pruning. The model must be compiled with its dimensions forwarded!

std::unordered_set<NodePtr> Aidge::fibnlr::fibnlr(const std::shared_ptr<GraphView> graph, Aidge::DataProvider &provider, const Parameters &params)#

fibnlr_compute#

aidge_core.fibnlr_compute(graph_view: aidge_core.aidge_core.GraphView, provider: aidge_core.aidge_core.DataProvider, params: aidge_core.aidge_core.fibnlr_Parameters) list[float]#

Computes NNPR value by running inferences over a dataset.

std::vector<double> Aidge::fibnlr::compute(const std::shared_ptr<GraphView> graph, Aidge::DataProvider &provider, const Parameters &params)#

STEP 2 - Compute NNPR value by running inferences over a dataset.

Returns:

A vector of the convergence values after each iteration.

fibnlr_compute_NNPR#

aidge_core.fibnlr_compute_NNPR(graph: aidge_core.aidge_core.GraphView, params: aidge_core.aidge_core.fibnlr_Parameters) dict[aidge_core.aidge_core.Node, float]#

Computes the Normalized Negative/Positive Ratio (NNPR) for each node. Useful if you wish to recompute NNPR using a different normalization method without having to recompute NPR.

size_t Aidge::fibnlr::computeNNPR(std::shared_ptr<GraphView> graphView, const std::string &type, bool incProducers = false)#

Fast Input-Based Non-Linearity Pruning is a two-step recipe.

This function is the 1st step. It computes a metric called Normalized Negative per Positive ratio. For now, this metric is only called for ReLU.

Parameters:
  • graphView – Graph view to use graph matching on, in order to apply transformations.

  • type – Type of the nodes to remove

  • incProducers – If true, also remove the producers attached to the removed nodes

Returns:

A set of ReLU nodes

std::unordered_map<NodePtr, double> Aidge::fibnlr::computeNNPR(const std::shared_ptr<GraphView> graph, const Parameters &params)#

Computes the Normalized Negative/Positive Ratio (NNPR) for each node. Useful if you wish to recompute NNPR using a different normalization method without having to recompute NPR.
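The kind of statistic being collected can be illustrated with a hypothetical negative-per-positive count at a ReLU input (the exact metric and its normalization are internal to Aidge's fibnlr implementation; the `npr` function below is our own illustrative stand-in):

```python
import numpy as np

def npr(pre_activation):
    """Illustrative negative-per-positive ratio of a ReLU's input values."""
    neg = np.count_nonzero(pre_activation < 0)
    pos = np.count_nonzero(pre_activation > 0)
    return neg / max(pos, 1)

rng = np.random.default_rng(0)
mostly_positive = rng.standard_normal(10_000) + 3.0   # ReLU ~ identity here
balanced = rng.standard_normal(10_000)                # ReLU actually matters

# A ReLU that almost never sees negative inputs behaves like an identity,
# so a low ratio flags a pruning candidate.
print(npr(mostly_positive) < npr(balanced))  # True
```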

fibnlr_find_groups#

aidge_core.fibnlr_find_groups(graph: aidge_core.aidge_core.GraphView) set[aidge_core.aidge_core.Group]#

Returns a set of all groups found in a graph (using the Group attributes).

std::unordered_set<Group> Aidge::fibnlr::findGroups(const std::shared_ptr<GraphView> graph)#

Returns a set of all groups found in a graph (using the Group attributes)

fibnlr_find_node_by_group#

aidge_core.fibnlr_find_node_by_group(graph: aidge_core.aidge_core.GraphView, group: aidge_core.aidge_core.Group) set[aidge_core.aidge_core.Node]#

Returns a set of nodes belonging to the given group.

std::unordered_set<NodePtr> Aidge::fibnlr::findNodeByGroup(const std::shared_ptr<GraphView> graph, const Group group)#

Returns a set of nodes belonging to the given group

fibnlr_prepare#

aidge_core.fibnlr_prepare(graph_view: aidge_core.aidge_core.GraphView, params: aidge_core.aidge_core.fibnlr_Parameters) set[aidge_core.aidge_core.Node]#

Prepares non-linearity nodes by adding a hook to compute NPR value.

std::unordered_set<NodePtr> Aidge::fibnlr::prepare(const std::shared_ptr<GraphView> graph, const Parameters &params)#

STEP 1 - Prepare non-linearity nodes by adding a hook to compute the NNPR value.

Returns:

Set of prepared nodes

fibnlr_prune#

aidge_core.fibnlr_prune(graph_view: aidge_core.aidge_core.GraphView, params: aidge_core.aidge_core.fibnlr_Parameters) set[aidge_core.aidge_core.Node]#

Prunes relevant non-linearity nodes and replaces them with an identity node.

std::unordered_set<NodePtr> Aidge::fibnlr::prune(const std::shared_ptr<GraphView> graph, const Parameters &params)#

fold_constantOfShape#

aidge_core.fold_constantOfShape(graph_view: aidge_core.aidge_core.GraphView) int#

Fuses a constant => Generic | ConstantOfShape pattern and transforms it into a Producer.

Parameters:

graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe.

std::size_t Aidge::foldConstantOfShape(std::shared_ptr<GraphView> view)#

Compute the output of any ConstantOfShape with a constant input and replaces it with a Producer.

Replace “constant->GenericOperator|ConstantOfShape” with “Producer”.

Note

Currently, this function only matches the query “Producer->ConstantOfShape”.

Parameters:

view – GraphView to transform.

Returns:

std::size_t Number of replacements.

fuse_batchnorm#

aidge_core.fuse_batchnorm(graph_view: aidge_core.aidge_core.GraphView) None#

Recipe to fuse BatchNorm with Conv or FC operators.

Parameters:

graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

void Aidge::fuseBatchNorm(std::shared_ptr<GraphView> graphView)#

Fuse BatchNorm with Conv or FC Nodes. Ref: https://nenadmarkus.com/p/fusing-batchnorm-and-conv/.

Parameters:

graphView – Graph view to use graph matching on, in order to apply transformations.

void Aidge::fuseBatchNorm(std::shared_ptr<Node> conv, std::shared_ptr<Node> batchnorm)#

Fuse BatchNorm with Conv or FC Nodes. Ref: https://nenadmarkus.com/p/fusing-batchnorm-and-conv/.

Parameters:

  • conv – Conv or FC node to fuse the BatchNorm into

  • batchnorm – BatchNorm node to fuse
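The fusion math from the reference above can be verified with numpy on an FC layer (a sketch of the algebra, not the Aidge implementation; all variable names are ours): folding the per-channel BatchNorm affine into the layer's weights and bias leaves the output unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
# FC layer with 8 inputs and 4 output features, followed by BatchNorm.
W = rng.standard_normal((4, 8))
b = rng.standard_normal(4)
gamma, beta = rng.standard_normal(4), rng.standard_normal(4)
mean, var, eps = rng.standard_normal(4), rng.random(4) + 0.1, 1e-5

x = rng.standard_normal((5, 8))
y_ref = gamma * ((x @ W.T + b) - mean) / np.sqrt(var + eps) + beta

# Fused parameters: fold the per-channel scale and shift into W and b.
scale = gamma / np.sqrt(var + eps)
W_fused = W * scale[:, None]
b_fused = (b - mean) * scale + beta
y_fused = x @ W_fused.T + b_fused

assert np.allclose(y_ref, y_fused)  # BatchNorm absorbed into the FC layer
```

The same scaling applies per output channel of a Conv kernel, which is why the recipe covers both Conv and FC.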

init_producer#

aidge_core.init_producer(graph: aidge_core.aidge_core.GraphView, pattern: str, filler: collections.abc.Callable[[aidge_core.aidge_core.Tensor], None]) int#
size_t Aidge::initProducer(std::shared_ptr<GraphView> graph, const std::string &pattern, std::function<void(std::shared_ptr<Tensor>)> filler)#

inject_ber_to_layer#

aidge_core.inject_ber_to_layer(graph_view: aidge_core.aidge_core.GraphView, bit_error_rate: SupportsFloat, layer_name: str) None#

Inject bitflips to specific layer in the network.

Parameters:
  • graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

  • bit_error_rate (float) – bit error rate

  • layer_name (str) – layer name to inject noise into

void Aidge::injectBerToLayer(std::shared_ptr<GraphView> graph, float bitErrorRate, std::string layer_name)#

Inject bitflips into a specific layer of the network.

Parameters:
  • graph – Graph to manipulate

  • bitErrorRate – Bit error rate

  • layer_name – Name of the layer to inject noise into

inject_ber_weights#

aidge_core.inject_ber_weights(graph_view: aidge_core.aidge_core.GraphView, bit_error_rate: SupportsFloat) None#

Inject a given bit error rate (BER) into the weights.

void Aidge::injectBerToWeights(std::shared_ptr<GraphView> graph, float bitErrorRate)#

inject_bitflip#

aidge_core.inject_bitflip(graph_view: aidge_core.aidge_core.GraphView, nb_bits: SupportsInt) None#

Inject faults in the network using the N-bit flip model.

This function updates the given GraphView by inserting fault nodes between weight producers and their consuming nodes. Fault nodes randomly flip bits in the weights during the forward pass, simulating single event upsets (SEUs) based on the total number of bits specified by nb_bits.

Parameters:
  • graph_view (aidge_core.GraphView) – The computational graph to be modified.

  • nb_bits (int) – Total number of bits to flip in the network weights.

Returns:

None. The graph is modified in place.

void Aidge::injectNBitflipToWeights(std::shared_ptr<GraphView> graph, std::size_t nAffectedBits)#

Inject faults in the network using the N-bit flip model.

This function updates the given GraphView by inserting fault nodes between weight producers and their consuming nodes. Fault nodes randomly flip bits in the weights during the forward pass, simulating single event upsets (SEUs) based on the total number of bits specified by nb_bits.

Parameters:
  • graph – Graph to manipulate

  • nAffectedBits – The total number of bits to flip
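A single bit flip in an IEEE-754 float32 weight can be pictured by viewing its raw bits with numpy (a sketch of the fault model only; Aidge's fault nodes choose the affected weights and bit positions at random during the forward pass):

```python
import numpy as np

w = np.array([1.0, -0.5, 2.0], dtype=np.float32)
bits = w.view(np.uint32).copy()   # reinterpret the floats as raw 32-bit words

bits[0] ^= np.uint32(1 << 31)     # flip the sign bit of w[0]
bits[2] ^= np.uint32(1 << 23)     # flip the lowest exponent bit of w[2]

faulty = bits.view(np.float32)
# The sign flip negates w[0] (1.0 -> -1.0); the exponent flip doubles
# w[2] (2.0 -> 4.0): a single upset can drastically change a weight.
print(faulty)
```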

inject_fixed_bitflip#

aidge_core.inject_fixed_bitflip(graph_view: aidge_core.aidge_core.GraphView, nb_bits: SupportsInt) None#

Inject faults in the network using a fixed N-bit flip model.

This function updates the given GraphView by inserting fault nodes that apply a fixed fault pattern during the forward pass. The affected weight indices and bit positions are determined at node creation based on random sampling. This precomputed fault pattern remains unchanged during execution.

Parameters:
  • graph_view (aidge_core.GraphView) – The computational graph to be modified.

  • nb_bits (int) – Total number of bits to flip in the network weights.

Returns:

None. The graph is modified in place.

void Aidge::injectFixedNBitflipToWeights(std::shared_ptr<GraphView> graph, std::size_t nAffectedBits)#

Inject faults in the network using a fixed N-bit flip model.

This function updates the given GraphView by inserting fault nodes that apply a fixed fault pattern during the forward pass. The affected weight indices and bit positions are determined at node creation based on random sampling. This precomputed fault pattern remains unchanged during execution.

Parameters:
  • graph – Graph to manipulate

  • nAffectedBits – The total number of bits to flip

matmul_to_fc#

aidge_core.matmul_to_fc(graph_view: aidge_core.aidge_core.GraphView) None#

Recipe to fuse MatMul and Add operators into an aidge_core.FC operator.

Parameters:

graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

void Aidge::matMulToFC(std::shared_ptr<GraphView> graphView)#

Merge MatMul and Add Nodes into an FC Node.

Parameters:

graphView – Graph view to use graph matching on, in order to apply transformations.

void Aidge::matMulToFC(std::shared_ptr<Node> matmul, std::shared_ptr<Node> add = nullptr)#

Merge MatMul and Add Nodes into an FC Node.

Parameters:

  • matmul – MatMul node to merge

  • add – Optional Add node to merge (nullptr if absent)

producers#

aidge_core.producers(graphview: aidge_core.aidge_core.GraphView, constant: bool = False) list[aidge_core.aidge_core.Tensor]#
std::set<std::shared_ptr<Tensor>> Aidge::producers(std::shared_ptr<GraphView> graphview, bool constant = false)#

Getter for every Tensor held by a Producer operator in a GraphView.

Parameters:
  • graphview – GraphView instance where Producers should be searched.

  • constant – If true, Producer with attribute constant=true are also included in the returned set, default=false

Returns:

std::set<std::shared_ptr<Tensor>>

remove_dropout#

aidge_core.remove_dropout(graph_view: aidge_core.aidge_core.GraphView) int#

Recipe to remove dropout operators.

Parameters:

graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

Returns:

Number of removed operators.

Return type:

int

size_t Aidge::removeDropout(std::shared_ptr<GraphView> graphView)#

Remove Dropout Node.

Parameters:

graphView – Graph view to use graph matching on, in order to apply transformations.

Returns:

size_t Number of Dropout nodes removed

remove_flatten#

aidge_core.remove_flatten(graph_view: aidge_core.aidge_core.GraphView) None#

Recipe to remove a Flatten operator if it is followed by an FC or a MatMul. The recipe can remove multiple Flatten operators if they are one after the other.

Parameters:

graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe.

void Aidge::removeFlatten(std::shared_ptr<GraphView> graphView)#

Remove Flatten before an FC Node.

Parameters:

graphView – Graph view to use graph matching on, in order to apply transformations.

void Aidge::removeFlatten(std::shared_ptr<Node> flatten)#

Remove Flatten before an FC Node.

Parameters:

flatten – Flatten node to remove.

remove_identity#

aidge_core.remove_identity(graph_view: aidge_core.aidge_core.GraphView) int#

Recipe to remove identity operators.

Parameters:

graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

Returns:

Number of removed operators.

Return type:

int

size_t Aidge::removeIdentity(std::shared_ptr<GraphView> graph)#

Remove all identity nodes

Parameters:

graph – Graph to manipulate

Returns:

size_t Number of identity nodes removed

remove_node#

aidge_core.remove_node(graph_view: aidge_core.aidge_core.GraphView, type: str, incProducers: bool = False) int#

Recipe to remove operators of a given type.

Parameters:
  • graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

  • type (str) – Type of the operators to remove

  • incProducers (bool) – If true, also remove attached Producers

Returns:

Number of removed operators.

Return type:

int

size_t Aidge::removeNode(std::shared_ptr<GraphView> graphView, const std::string &type, bool incProducers = false)#

Remove all nodes of a given type.

Parameters:
  • graphView – Graph view to use graph matching on, in order to apply transformations.

  • type – Type of the nodes to remove

  • incProducers – If true, also remove the producers attached to the removed nodes

Returns:

size_t Number of nodes removed

set_number_of_steps#

aidge_core.set_number_of_steps(graph_view: aidge_core.aidge_core.GraphView, n_steps: SupportsInt) None#

Set the maximum number of steps for all Leaky meta-operators in the graph.

This recipe traverses the graph. For every Leaky meta-operator found, it updates all internal Memorize operators so their end_step equals the provided n_steps value.

Parameters:
  • graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe

  • n_steps (int) – Number of time steps to set

void Aidge::setNumberOfSteps(std::shared_ptr<GraphView> graph, unsigned int nSteps)#

Set the maximum number of steps for all Leaky meta-operators in the graph.

This recipe traverses the graph. For every Leaky meta-operator found, it updates all internal Memorize operators so their endStep equals the provided nSteps value.

Parameters:
  • graph – The Graph to update

  • nSteps – The number of steps that will be set

Node Recipes#

apply_weight_interleaving#

aidge_core.apply_weight_interleaving(node: aidge_core.aidge_core.Node) None#

Replace the weight Producer linked to the given node with a weight Producer using interleaving and the NHWC format. This recipe is specific to the ARM Cortex-M export for low-bit integer support.

Parameters:

node – Node whose linked weights will receive interleaving

void Aidge::applyWeightInterleaving(std::shared_ptr<Node> node)#

The node passed contains an operator whose input at index 1 is expected to be weights of type Int4, Int3, Int2 or binary. This recipe only performs memory transformations on the weight tensor: first, it permutes the dimensions to match the NHWC data format; second, it compacts the last dimension of the weights (the channel dimension) into 8 bits.

Parameters:

node – Node
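The compaction step can be sketched with numpy by packing pairs of signed 4-bit weights into single bytes (an illustration of the idea only; the actual interleaving layout is specific to the Cortex-M backend):

```python
import numpy as np

# Int4 weight values, stored one per int8 before compaction.
w = np.array([3, -2, 7, -8], dtype=np.int8)

lo = (w[0::2] & 0x0F).astype(np.uint8)   # low nibbles (two's complement)
hi = (w[1::2] & 0x0F).astype(np.uint8)   # high nibbles
packed = (hi << 4) | lo                  # two weights per byte

print(packed)         # [227 135]
print(packed.nbytes)  # 2 bytes instead of 4: half the storage
```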

expand_metaop#

aidge_core.expand_metaop(node: aidge_core.aidge_core.Node, name_format: str = '{0}', unique_name: bool = False) bool#

Flatten the graph by replacing the meta operators with their micro graphs.

Parameters:
  • node (aidge_core.Node) – Node to expand

  • name_format (str) – The formatting string to be used with fmt::format() for naming the nodes from the meta-op (inner nodes) in the expanded graph. The usable positional arguments are the following: {0} inner node name, {1} inner node type, {2} meta-node name, {3} meta-node type. Default is {0} (inner node name).

  • unique_name (bool) – If True, ensure that the names of the expanded nodes are unique in the expanded graph.

bool Aidge::expandMetaOp(std::shared_ptr<Node> node, const std::string &nameFormat = "{0}", bool uniqueName = false)#

Replace a single meta operator by its micro graph.

The usable positional arguments are the following: {0} inner node name, {1} inner node type, {2} meta-node name, {3} meta-node type. Default is {0} (inner node name).

Parameters:
  • nameFormat – The formatting string to be used with fmt::format() for naming the nodes from the meta-op (inner nodes) in the expanded graph.

  • uniqueName – If true, ensure that the names of the expanded nodes are unique in the expanded graph.

Returns:

true if node is indeed a meta operator and could be expanded.

get_connected_graph_view#

aidge_core.get_connected_graph_view(arg0: aidge_core.aidge_core.Node) aidge_core.aidge_core.GraphView#

Create a GraphView containing all nodes with a path to given Node.

Parameters:

node (Node) – Initial node to construct the graph.

Returns:

GraphView GraphView containing all nodes with a path to node.

Return type:

GraphView

std::shared_ptr<GraphView> Aidge::getConnectedGraphView(std::shared_ptr<Node> node)#

Create a GraphView containing all nodes with a path to given Node.

Parameters:

node – Initial node to construct the graph.

Returns:

GraphView GraphView containing all nodes with a path to node.

get_conv_horizontal_tiling#

aidge_core.get_conv_horizontal_tiling(node: aidge_core.aidge_core.Node, axis: SupportsInt, nb_slices: SupportsInt) set[aidge_core.aidge_core.Node]#
std::set<std::shared_ptr<Node>> Aidge::getConvHorizontalTiling(const std::shared_ptr<Node> &node, const DimIdx_t axis, const std::size_t nbSlices)#

matMul_tiling#

aidge_core.matMul_tiling(node: aidge_core.aidge_core.Node, max_dims: collections.abc.Sequence[SupportsInt]) None#

Tile any aidge_core.MatMul operator to several fixed size matrix multiplications. For instance, for a MatMul of size 80x80 and a tiling of 16x16, this will tile the MatMul operator to 25 (5 by 5) MatMul operators of size 16x16, with Slice operators inserted at the inputs and Concat operators inserted at the outputs.

This is especially useful when matrix multiplication must be mapped to fixed maximum size hardware TPU (Tensor Processing Unit) or MMA (Matrix Multiplication Accelerator). This recipe can be combined with the aidge_core.conv_to_matMul recipe in order to convert convolutions to matrix multiplication beforehand, and aidge_core.constant_folding recipe to fold sliced constant tensors.

Initial graph:

        %%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
flowchart TB

      MatMul_0("matmul1<br/><sub><em>(MatMul#0)</em></sub>"):::rootCls
      Producer_1("w1<br/><sub><em>(Producer#1)</em></sub>"):::producerCls
      Producer_0("dataProvider<br/><sub><em>(Producer#0)</em></sub>"):::producerCls
      MatMul_0--->|"0 [2, 3, 80, 80]&rarr;"|output0((out#0)):::outputCls
      Producer_1-->|"0 [2, 3, 80, 80]&rarr;1"|MatMul_0
      Producer_0-->|"0 [2, 3, 80, 80]&rarr;0"|MatMul_0
      classDef inputCls fill:#afa
      classDef outputCls fill:#ffa
      classDef externalCls fill:#ccc
      classDef producerCls fill:#ccf
      classDef genericCls fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
      classDef metaCls stroke-width:5px
      classDef rootCls stroke:#f00
      classDef producerCls_rootCls stroke:#f00,fill:#ccf
      classDef genericCls_rootCls stroke:#f00,fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
      classDef metaCls_rootCls stroke:#f00,stroke-width:5px
    

Graph generated by a single step of the matMulTiling recipe (after the very first matrix multiplication split):

        %%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
flowchart TB

      Producer_7(<em>Producer#7</em>):::producerCls
      MatMul_1(<em>MatMul#1</em>)
      Concat_0(<em>Concat#0</em>)
      Producer_1(<em>Producer#1</em>):::producerCls
      Producer_2(<em>Producer#2</em>):::producerCls
      Producer_3(<em>Producer#3</em>):::producerCls
      Producer_4(<em>Producer#4</em>):::producerCls
      Producer_5(<em>Producer#5</em>):::producerCls
      Producer_6(<em>Producer#6</em>):::producerCls
      Identity_0(<em>Identity#0</em>):::rootCls
      Slice_0(<em>Slice#0</em>)
      Producer_0(<em>Producer#0</em>):::producerCls
      MatMul_0(<em>MatMul#0</em>)
      Identity_1(<em>Identity#1</em>)
      Slice_1(<em>Slice#1</em>)
      Producer_7-->|"0 [2]&rarr;4"|Slice_1
      MatMul_1-->|"0 [2, 3, 64, 80]&rarr;1"|Concat_0
      Producer_1-->|"0 [2]&rarr;2"|Slice_0
      Producer_2-->|"0 [2]&rarr;3"|Slice_0
      Producer_3-->|"0 [2]&rarr;4"|Slice_0
      Producer_4-->|"0 [2]&rarr;1"|Slice_1
      Producer_5-->|"0 [2]&rarr;2"|Slice_1
      Producer_6-->|"0 [2]&rarr;3"|Slice_1
      Identity_0-->|"0 [2, 3, 80, 80]&rarr;0"|Slice_0
      Identity_0-->|"0 [2, 3, 80, 80]&rarr;0"|Slice_1
      Slice_0-->|"0 [2, 3, 16, 80]&rarr;0"|MatMul_0
      Producer_0-->|"0 [2]&rarr;1"|Slice_0
      MatMul_0-->|"0 [2, 3, 16, 80]&rarr;0"|Concat_0
      Identity_1-->|"0 [2, 3, 80, 80]&rarr;1"|MatMul_1
      Identity_1-->|"0 [2, 3, 80, 80]&rarr;1"|MatMul_0
      Slice_1-->|"0 [2, 3, 64, 80]&rarr;0"|MatMul_1
      input0((in#0)):::inputCls--->|"&rarr;0[2, 3, 80, 80]"|Identity_0
      input1((in#1)):::inputCls--->|"&rarr;0[2, 3, 80, 80]"|Identity_1
      Concat_0--->|"0 [2, 3, 80, 80]&rarr;"|output0((out#0)):::outputCls
      classDef inputCls fill:#afa
      classDef outputCls fill:#ffa
      classDef externalCls fill:#ccc
      classDef producerCls fill:#ccf
      classDef genericCls fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
      classDef metaCls stroke-width:5px
      classDef rootCls stroke:#f00
      classDef producerCls_rootCls stroke:#f00,fill:#ccf
      classDef genericCls_rootCls stroke:#f00,fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
      classDef metaCls_rootCls stroke:#f00,stroke-width:5px
    
Parameters:
  • node (aidge_core.Node) – Operator to be tiled

  • max_dims (List) – Maximum output dimensions of the tiled MatMul operators

void Aidge::matMulTiling(NodePtr matMul, const std::vector<DimSize_t> &maxDims)#

Tile any MatMul operator to several fixed size matrix multiplications. For instance, for a MatMul of size 80x80 and a tiling of 16x16, this will tile the MatMul operator to 25 (5 by 5) MatMul operators of size 16x16, with Slice operators inserted at the inputs and Concat operators inserted at the outputs.

This is especially useful when matrix multiplication must be mapped to fixed maximum size hardware TPU (Tensor Processing Unit) or MMA (Matrix Multiplication Accelerator). This recipe can be combined with the Aidge::convToMatMul recipe in order to convert convolutions to matrix multiplication beforehand, and the Aidge::constantFolding recipe to fold sliced constant tensors.

Parameters:
  • matMul – MatMul operator to be tiled.

  • maxDims – Maximum output dimensions of the tiled MatMul operators.
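The tiling can be verified numerically with numpy on the 80x80 example from the description (a sketch of block matrix multiplication, not the Aidge implementation): slicing both operands into 16x16 tiles, multiplying and accumulating per block, then concatenating reproduces the untiled product.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 80))
B = rng.standard_normal((80, 80))
T = 16   # tile size

tiles = []
for i in range(0, 80, T):                       # Slice rows of A
    row = []
    for j in range(0, 80, T):                   # Slice columns of B
        acc = np.zeros((T, T))
        for k in range(0, 80, T):               # accumulate 16x16 MatMuls
            acc += A[i:i + T, k:k + T] @ B[k:k + T, j:j + T]
        row.append(acc)
    tiles.append(np.concatenate(row, axis=1))   # Concat along columns
C = np.concatenate(tiles, axis=0)               # Concat along rows

assert np.allclose(C, A @ B)                    # identical to the untiled MatMul
print(len(tiles) * len(row))                    # 25 output tiles (5 by 5)
```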

to_generic_op#

aidge_core.to_generic_op(node: aidge_core.aidge_core.Node) None#

Transform to a Generic Operator.

Parameters:

node – Node whose Operator will be turned into a Generic Operator

void Aidge::toGenericOp(std::shared_ptr<Node> node)#

Create a GenericOp from an Operator and replace it.

Parameters:

node – Node whose Operator will be changed into a generic Operator