Recipes#
Recipes are relatively generic, built-in functionalities for manipulating a compute graph in Aidge. Some are built with Aidge’s graph matching engine; do not hesitate to look at their source code to understand how they work and to build similar functionalities!
⚠️ The recipe list is auto-generated for easier maintenance but still lacks proper categorization and naming conventions.
Graph Recipes#
adapt_fc_params_format#
- aidge_core.adapt_fc_params_format(graph_view: aidge_core.aidge_core.GraphView, constant_fold: bool = True) None#
Adapt the format of the parameters of a FC layer to be compatible with the input format. i.e. if the input is in NHWC format, the weights will be adapted to NHWC format.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
constant_fold (bool, optional) – If true, the adapted Producer will be constant folded
adapt_to_backend#
- aidge_core.adapt_to_backend(graph_view: aidge_core.aidge_core.GraphView) None#
Adapt the graph to the available kernels of a specific backend.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
constant_folding#
- aidge_core.constant_folding(graph_view: aidge_core.aidge_core.GraphView, constant_shape: bool = False) bool#
Retrieve parts of the graph that can be pre-computed and replace them with a Producer.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
constant_shape (bool, optional) – If true, Shape operators are considered constant, default=False
- Returns:
True if the graph has been modified
- Return type:
bool
constant_shape_folding#
- aidge_core.constant_shape_folding(graph_view: aidge_core.aidge_core.GraphView, dims: collections.abc.Sequence[collections.abc.Sequence[SupportsInt]] = []) bool#
Retrieve parts of the graph that can be pre-computed by setting Shape as constant, and replace them with a Producer.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
- Returns:
True if the graph has been modified
- Return type:
bool
conv_to_matMul#
- aidge_core.conv_to_matMul(graph: aidge_core.aidge_core.GraphView) int#
Convert Conv operators to Unfold (im2col operation) + MatMul + Reshape.
Input graph:
[Diagram: input graph of three Conv layers (conv1, conv2, conv3) with their weight and bias Producers.]
Output graph:
[Diagram: output graph in which each Conv has been replaced by Unfold → MatMul → Reshape → Add, with reshaped weight and bias Producers.]
- Parameters:
graph (aidge_core.GraphView) – Graph to manipulate
- Returns:
Number of replacements
- Return type:
int
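The im2col equivalence behind this recipe can be sketched in plain NumPy (a hypothetical illustration, not Aidge code): unfolding every k×k patch of the input into a column turns the convolution into a single matrix multiplication followed by a reshape.

```python
import numpy as np

def im2col(x, k):
    # x: (C, H, W); unfold k x k patches into columns -> (C*k*k, Ho*Wo)
    C, H, W = x.shape
    Ho, Wo = H - k + 1, W - k + 1
    cols = np.empty((C * k * k, Ho * Wo))
    col = 0
    for i in range(Ho):
        for j in range(Wo):
            cols[:, col] = x[:, i:i + k, j:j + k].ravel()
            col += 1
    return cols

def naive_conv(x, w):
    # Direct convolution, stride 1, no padding; w: (F, C, k, k)
    F, C, k, _ = w.shape
    Ho, Wo = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((F, Ho, Wo))
    for f in range(F):
        for i in range(Ho):
            for j in range(Wo):
                out[f, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[f])
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 5, 5))
w = rng.standard_normal((4, 3, 3, 3))
# Unfold + MatMul + Reshape: (F, C*k*k) @ (C*k*k, Ho*Wo) -> (F, Ho, Wo)
y = (w.reshape(4, -1) @ im2col(x, 3)).reshape(4, 3, 3)
```

The bias addition of a real Conv corresponds to the extra Add node visible in the output graph above.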
expand_metaops#
- aidge_core.expand_metaops(graph_view: aidge_core.aidge_core.GraphView, recursive: bool = False, name_format: str = '{0}', unique_name: bool = False) None#
Flatten the graph by replacing the meta operators by their micro graph.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
recursive (bool) – If true, recursively replace meta operators until there is no more meta operator in the graph.
name_format (str) – The formatting string to be used with fmt::format() for naming the nodes from the meta-op (inner nodes) in the expanded graph. The usable positional arguments are the following: {0} inner node name, {1} inner node type, {2} meta-node name, {3} meta-node type. Default is {0} (inner node name).
unique_name (bool) – If True, ensure that the expanded nodes name are unique in the expanded graph.
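The name_format placeholders can be previewed with Python’s str.format, whose "{0}"-style positional arguments behave like fmt::format’s here (the node and type names below are hypothetical):

```python
# Positional arguments, as documented: {0} inner node name, {1} inner node
# type, {2} meta-node name, {3} meta-node type.
inner_name, inner_type = "conv1_w", "Producer"
meta_name, meta_type = "block1", "PaddedConv"

default = "{0}".format(inner_name, inner_type, meta_name, meta_type)
prefixed = "{2}_{0}".format(inner_name, inner_type, meta_name, meta_type)
```

With the default format, expanded nodes keep their inner names; a format such as "{2}_{0}" prefixes them with the meta-node name, which helps keep names unique without setting unique_name.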
explicit_cast_move#
- aidge_core.explicit_cast_move(graph_view: aidge_core.aidge_core.GraphView) None#
Insert Cast and Move operators where needed (thus removing all implicit data type conversion and backend change data movement).
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
explicit_transpose#
- aidge_core.explicit_transpose(graph_view: aidge_core.aidge_core.GraphView) None#
Insert Transpose operators where needed to ensure no transposition needs to be done at the Operator level (thus removing all implicit data format conversion).
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
fibnlr#
- aidge_core.fibnlr(graph_view: aidge_core.aidge_core.GraphView, provider: aidge_core.aidge_core.DataProvider, params: aidge_core.aidge_core.fibnlr_Parameters) set[aidge_core.aidge_core.Node]#
Fast Input-Based Non-Linearity pruning. The model must be compiled with dims forwarded!
fibnlr_compute#
- aidge_core.fibnlr_compute(graph_view: aidge_core.aidge_core.GraphView, provider: aidge_core.aidge_core.DataProvider, params: aidge_core.aidge_core.fibnlr_Parameters) list[float]#
Computes the NNPR value by running inferences over a dataset (step 2 of FIBNLR).
- Returns:
A vector of convergence values, one per iteration.
fibnlr_compute_NNPR#
- aidge_core.fibnlr_compute_NNPR(graph: aidge_core.aidge_core.GraphView, params: aidge_core.aidge_core.fibnlr_Parameters) dict[aidge_core.aidge_core.Node, float]#
Computes the Normalized Negative/Positive Ratio (NNPR) for each node. Useful if you wish to recompute NNPR using a different normalization method without having to recompute NPR.
Fast Input-Based Non-Linearity Pruning is a two-step recipe. This function computes a metric called the Normalized Negative per Positive Ratio. For now, this metric is only computed for ReLU.
- Returns:
A mapping from each node to its NNPR value
fibnlr_find_groups#
- aidge_core.fibnlr_find_groups(graph: aidge_core.aidge_core.GraphView) set[aidge_core.aidge_core.Group]#
Returns a set of all groups found in a graph (using the Group attributes).
fibnlr_find_node_by_group#
- aidge_core.fibnlr_find_node_by_group(graph: aidge_core.aidge_core.GraphView, group: aidge_core.aidge_core.Group) set[aidge_core.aidge_core.Node]#
Returns a set of nodes belonging to the given group.
fibnlr_prepare#
- aidge_core.fibnlr_prepare(graph_view: aidge_core.aidge_core.GraphView, params: aidge_core.aidge_core.fibnlr_Parameters) set[aidge_core.aidge_core.Node]#
Prepares non-linearity nodes by adding a hook to compute the NPR value (step 1 of FIBNLR).
- Returns:
Set of prepared nodes
fibnlr_prune#
- aidge_core.fibnlr_prune(graph_view: aidge_core.aidge_core.GraphView, params: aidge_core.aidge_core.fibnlr_Parameters) set[aidge_core.aidge_core.Node]#
Prunes relevant non-linearity nodes and replaces them with an identity node.
fold_constantOfShape#
- aidge_core.fold_constantOfShape(graph_view: aidge_core.aidge_core.GraphView) int#
Computes the output of any ConstantOfShape whose input is constant and replaces it with a Producer.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe.
Note
Currently, this function only matches the query “Producer->ConstantOfShape”.
- Returns:
Number of replacements
- Return type:
int
fuse_batchnorm#
- aidge_core.fuse_batchnorm(graph_view: aidge_core.aidge_core.GraphView) None#
Recipe to fuse BatchNorm operators into the preceding Conv or FC operator. Ref: https://nenadmarkus.com/p/fusing-batchnorm-and-conv/.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
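The fusion relies on the algebra described in the referenced article: the BatchNorm scale and shift are folded into the layer’s weights and bias, after which the BatchNorm node can be dropped. A minimal NumPy sketch for the FC case (hypothetical names, not Aidge code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))                 # batch of inputs
W = rng.standard_normal((4, 8))                 # FC weights (out, in)
b = rng.standard_normal(4)                      # FC bias
gamma, beta = rng.standard_normal(4), rng.standard_normal(4)
mean, var, eps = rng.standard_normal(4), rng.random(4) + 0.1, 1e-5

# Reference: FC followed by BatchNorm (inference mode)
y_ref = gamma * ((x @ W.T + b) - mean) / np.sqrt(var + eps) + beta

# Folded: scale the weights and shift the bias, then drop the BatchNorm
scale = gamma / np.sqrt(var + eps)
W_fused = W * scale[:, None]
b_fused = (b - mean) * scale + beta
y_fused = x @ W_fused.T + b_fused
```

The Conv case is identical, with the scale applied per output channel of the kernel tensor.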
init_producer#
- aidge_core.init_producer(graph: aidge_core.aidge_core.GraphView, pattern: str, filler: collections.abc.Callable[[aidge_core.aidge_core.Tensor], None]) int#
inject_ber_to_layer#
- aidge_core.inject_ber_to_layer(graph_view: aidge_core.aidge_core.GraphView, bit_error_rate: SupportsFloat, layer_name: str) None#
Inject bitflips into a specific layer of the network.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
bit_error_rate (float) – Bit error rate
layer_name (str) – Name of the layer to inject noise into
inject_ber_weights#
- aidge_core.inject_ber_weights(graph_view: aidge_core.aidge_core.GraphView, bit_error_rate: SupportsFloat) None#
Inject BER to weights.
inject_bitflip#
- aidge_core.inject_bitflip(graph_view: aidge_core.aidge_core.GraphView, nb_bits: SupportsInt) None#
Inject faults in the network using the N-bit flip model.
This function updates the given GraphView by inserting fault nodes between weight producers and their consuming nodes. Fault nodes randomly flip bits in the weights during the forward pass, simulating single event upsets (SEUs) based on the total number of bits specified by nb_bits.
- Parameters:
graph_view (aidge_core.GraphView) – The computational graph to be modified.
nb_bits (int) – Total number of bits to flip in the network weights.
- Returns:
None. The graph is modified in place.
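The N-bit flip model can be sketched outside Aidge by viewing a float32 weight tensor as raw 32-bit words and XOR-ing randomly chosen bits (a hedged illustration of the fault model only; Aidge inserts dedicated fault nodes rather than mutating tensors like this):

```python
import numpy as np

def flip_bits(weights, nb_bits, seed):
    # Flip nb_bits randomly chosen bits across the float32 tensor by
    # reinterpreting it as uint32 words and XOR-ing single-bit masks.
    rng = np.random.default_rng(seed)
    faulty = weights.copy()
    bits = faulty.view(np.uint32)
    idx = rng.integers(0, bits.size, size=nb_bits)
    pos = rng.integers(0, 32, size=nb_bits, dtype=np.uint32)
    bits[idx] ^= np.uint32(1) << pos
    return faulty

w = np.random.default_rng(0).standard_normal(16).astype(np.float32)
faulty = flip_bits(w, 3, seed=42)
# Re-applying the identical fault pattern cancels the XOR and restores w
restored = flip_bits(faulty, 3, seed=42)
```

Comparing the uint32 views (rather than the floats) sidesteps NaN comparisons, since a flipped exponent bit can turn a weight into NaN.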
inject_fixed_bitflip#
- aidge_core.inject_fixed_bitflip(graph_view: aidge_core.aidge_core.GraphView, nb_bits: SupportsInt) None#
Inject faults in the network using a fixed N-bit flip model.
This function updates the given GraphView by inserting fault nodes that apply a fixed fault pattern during the forward pass. The affected weight indices and bit positions are determined at node creation based on random sampling. This precomputed fault pattern remains unchanged during execution.
- Parameters:
graph_view (aidge_core.GraphView) – The computational graph to be modified.
nb_bits (int) – Total number of bits to flip in the network weights.
- Returns:
None. The graph is modified in place.
matmul_to_fc#
- aidge_core.matmul_to_fc(graph_view: aidge_core.aidge_core.GraphView) None#
Recipe to fuse MatMul and Add operators into an aidge_core.FC operator.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
producers#
- aidge_core.producers(graphview: aidge_core.aidge_core.GraphView, constant: bool = False) list[aidge_core.aidge_core.Tensor]#
remove_dropout#
- aidge_core.remove_dropout(graph_view: aidge_core.aidge_core.GraphView) int#
Recipe to remove dropout operators.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
- Returns:
Number of removed operators.
- Return type:
int
remove_flatten#
- aidge_core.remove_flatten(graph_view: aidge_core.aidge_core.GraphView) None#
Recipe to remove a Flatten operator if it is followed by an FC or a MatMul. The recipe can remove multiple Flatten operators if they are one after the other.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe.
remove_identity#
- aidge_core.remove_identity(graph_view: aidge_core.aidge_core.GraphView) int#
Recipe to remove identity operators.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
- Returns:
Number of removed operators.
- Return type:
int
remove_node#
- aidge_core.remove_node(graph_view: aidge_core.aidge_core.GraphView, type: str, incProducers: bool = False) int#
Recipe to remove operators of a given type.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
type (str) – Type of the operators to remove
incProducers (bool) – If true, also remove attached Producers
- Returns:
Number of removed operators.
- Return type:
int
set_number_of_steps#
- aidge_core.set_number_of_steps(graph_view: aidge_core.aidge_core.GraphView, n_steps: SupportsInt) None#
Set the maximum number of steps for all Leaky meta-operators in the graph.
This recipe traverses the graph. For every Leaky meta-operator found, it updates all internal Memorize operators so their end_step equals the provided n_steps value.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
n_steps (int) – Number of time steps to set
Node Recipes#
apply_weight_interleaving#
- aidge_core.apply_weight_interleaving(node: aidge_core.aidge_core.Node) None#
Replace weight Producer linked to the given node with a weight producer with interleaving and format NHWC. This recipe is specific to the ARM cortex-m export for low bit integer support.
- Parameters:
node – Node whose linked weights will receive interleaving
The node passed contains an operator whose input of index 1 is expected to be weights of type Int4, Int3, Int2, or binary. This recipe only performs memory transformations on the weight tensor: first, it permutes the dimensions to match the NHWC data format; second, it compacts the last dimension of the weights (the channel dimension) into 8 bits.
expand_metaop#
- aidge_core.expand_metaop(node: aidge_core.aidge_core.Node, name_format: str = '{0}', unique_name: bool = False) bool#
Replace a single meta operator by its micro graph.
- Parameters:
node (aidge_core.Node) – Node to expand
name_format (str) – The formatting string to be used with fmt::format() for naming the nodes from the meta-op (inner nodes) in the expanded graph. The usable positional arguments are the following: {0} inner node name, {1} inner node type, {2} meta-node name, {3} meta-node type. Default is {0} (inner node name).
unique_name (bool) – If True, ensure that the expanded node names are unique in the expanded graph.
- Returns:
True if the node is indeed a meta operator and could be expanded
- Return type:
bool
get_connected_graph_view#
- aidge_core.get_connected_graph_view(arg0: aidge_core.aidge_core.Node) aidge_core.aidge_core.GraphView#
Create a GraphView containing all nodes with a path to the given Node.
get_conv_horizontal_tiling#
- aidge_core.get_conv_horizontal_tiling(node: aidge_core.aidge_core.Node, axis: SupportsInt, nb_slices: SupportsInt) set[aidge_core.aidge_core.Node]#
matMul_tiling#
- aidge_core.matMul_tiling(node: aidge_core.aidge_core.Node, max_dims: collections.abc.Sequence[SupportsInt]) None#
Tile any aidge_core.MatMul operator into several fixed-size matrix multiplications. For instance, for a MatMul of size 80x80 and a tiling of 16x16, this will tile the MatMul operator into 25 (5 by 5) MatMul operators of size 16x16, with Slice operators inserted at the inputs and Concat operators inserted at the outputs.
This is especially useful when matrix multiplication must be mapped to hardware with a fixed maximum size, such as a TPU (Tensor Processing Unit) or MMA (Matrix Multiplication Accelerator). This recipe can be combined with the aidge_core.conv_to_matMul recipe to convert convolutions to matrix multiplications beforehand, and with the aidge_core.constant_folding recipe to fold sliced constant tensors.
Initial graph:
[Diagram: initial graph with a single 80x80 MatMul fed by two Producers.]
Graph generated by a single step of the matMulTiling recipe (after the very first matrix multiplication split):
[Diagram: the MatMul split into two MatMul operators fed by Slice operators, with their outputs recombined by a Concat.]
- Parameters:
node (aidge_core.Node) – Operator to be tiled
max_dims (List) – Maximum output dimensions of the tiled MatMul operators
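The tile count in the 80x80 / 16x16 example above follows from ceiling division over each tiled output dimension:

```python
import math

def num_tiles(m, n, tile_m, tile_n):
    # Number of tiled MatMul operators needed to cover an m x n output
    # with tiles of at most tile_m x tile_n.
    return math.ceil(m / tile_m) * math.ceil(n / tile_n)

tiles = num_tiles(80, 80, 16, 16)   # the 5-by-5 case from the docs
```

Non-divisible shapes simply yield smaller edge tiles, which is why the recipe inserts Slice operators rather than assuming exact divisibility.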
to_generic_op#
- aidge_core.to_generic_op(node: aidge_core.aidge_core.Node) None#
Transform to a Generic Operator.
- Parameters:
node – Node whose Operator will be turned into a Generic Operator