Recipes#
Recipes are relatively generic, built-in functionalities for manipulating a compute graph in Aidge. Some are built with Aidge’s graph matching engine; do not hesitate to have a look at their source code to understand how they work and to build similar functionalities!
⚠️ The recipe list is auto-generated for easier maintenance but still lacks proper categorization and naming conventions.
Graph Recipes#
adapt_fc_params_format#
- aidge_core.adapt_fc_params_format(graph_view: aidge_core.aidge_core.GraphView, constant_fold: bool = True) None#
Adapt the format of the parameters of an FC layer to be compatible with the input format, i.e. if the input is in NHWC format, the weights will be adapted to NHWC format.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
constant_fold (bool, optional) – If true, the adapted Producer will be constant folded
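The layout adaptation can be pictured in plain Python (an illustration of the idea only, not the Aidge implementation): when the input switches from an NCHW to an NHWC flattening, every FC weight row must be reordered so that the column for element (c, h, w) moves to position (h, w, c).

```python
def permute_fc_columns(row, C, H, W):
    """Reorder one FC weight row from CHW column order to HWC column order."""
    out = [0.0] * (C * H * W)
    for c in range(C):
        for h in range(H):
            for w in range(W):
                chw_idx = c * H * W + h * W + w   # column index in the original row
                hwc_idx = h * W * C + w * C + c   # column index after NHWC adaptation
                out[hwc_idx] = row[chw_idx]
    return out

# With C=H=W=2 and values 0..7 laid out in CHW order:
print(permute_fc_columns(list(range(8)), 2, 2, 2))  # [0, 4, 1, 5, 2, 6, 3, 7]
```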
adapt_to_backend#
- aidge_core.adapt_to_backend(graph_view: aidge_core.aidge_core.GraphView) None#
Adapt the graph to the available kernels of a specific backend.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
constant_folding#
- aidge_core.constant_folding(graph_view: aidge_core.aidge_core.GraphView, constant_shape: bool = False) bool#
Retrieve parts of the graph that can be pre-computed and replace them with a Producer.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
constant_shape (bool, optional) – If true, Shape operators are considered constant, default=False
- Returns:
True if the graph has been modified
- Return type:
bool
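The idea behind the recipe can be sketched with a toy graph (pure Python, not the Aidge API): any node whose inputs are all constants is pre-computed and replaced by a constant, playing the role of the Producer.

```python
import operator

OPS = {"add": operator.add, "mul": operator.mul}

def fold(graph):
    """graph: dict name -> ("const", value), ("input",) or (op, in1, in2). Folds in place."""
    changed = True
    while changed:
        changed = False
        for name, node in graph.items():
            if node[0] in OPS:
                a, b = graph[node[1]], graph[node[2]]
                if a[0] == "const" and b[0] == "const":
                    # Both inputs are known: pre-compute and replace by a constant.
                    graph[name] = ("const", OPS[node[0]](a[1], b[1]))
                    changed = True
    return graph

g = {"w": ("const", 2), "b": ("const", 3),
     "wb": ("mul", "w", "b"),           # 2 * 3 -> foldable
     "x": ("input",),
     "y": ("add", "wb", "x")}           # depends on a runtime input, stays
fold(g)
print(g["wb"])  # ('const', 6)
```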
constant_shape_folding#
- aidge_core.constant_shape_folding(graph_view: aidge_core.aidge_core.GraphView, dims: collections.abc.Sequence[collections.abc.Sequence[SupportsInt | SupportsIndex]] = []) bool#
Retrieve parts of the graph that can be pre-computed by setting Shape operators as constant and replace them with a Producer.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
dims (List, optional) – Graph input dimensions
- Returns:
True if the graph has been modified
- Return type:
bool
expand_metaops#
- aidge_core.expand_metaops(graph_view: aidge_core.aidge_core.GraphView, recursive: bool = False, name_format: str = '{0}', unique_name: bool = False) None#
Flatten the graph by replacing the meta operators by their micro graph.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
recursive (bool) – If true, recursively replace meta operators until there is no more meta operator in the graph.
name_format (str) – The formatting string to be used with fmt::format() for naming the nodes from the meta-op (inner nodes) in the expanded graph. The usable positional arguments are the following: {0} inner node name, {1} inner node type, {2} meta-node name, {3} meta-node type. Default is {0} (inner node name).
unique_name (bool) – If True, ensure that the expanded node names are unique in the expanded graph.
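The name_format argument follows fmt::format() positional syntax, which behaves like Python’s str.format here. A quick illustration of the default pattern and a custom one (the names are made up for the example):

```python
inner_name, inner_type, meta_name, meta_type = "conv0", "Conv2D", "block1", "ConvReLU"

# Default pattern "{0}" keeps the inner node name unchanged:
default = "{0}".format(inner_name, inner_type, meta_name, meta_type)
# A custom pattern can prefix inner names with the meta-node name:
custom = "{2}_{0}".format(inner_name, inner_type, meta_name, meta_type)

print(default)  # conv0
print(custom)   # block1_conv0
```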
explicit_cast_move#
- aidge_core.explicit_cast_move(graph_view: aidge_core.aidge_core.GraphView) None#
Insert Cast and Move operators where needed (thus removing all implicit data type conversion and backend change data movement).
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
explicit_transpose#
- aidge_core.explicit_transpose(graph_view: aidge_core.aidge_core.GraphView) None#
Insert Transpose operators where needed to ensure no transposition needs to be done at the Operator level (thus removing all implicit data format conversion).
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
fibnlr#
- aidge_core.fibnlr(graph_view: aidge_core.aidge_core.GraphView, provider: aidge_core.aidge_core.DataProvider, params: aidge_core.aidge_core.fibnlr_Parameters) set[aidge_core.aidge_core.Node]#
Fast Input-Based Non-Linearity pruning. The model must be compiled with dims forwarded!
fibnlr_compute#
- aidge_core.fibnlr_compute(graph_view: aidge_core.aidge_core.GraphView, provider: aidge_core.aidge_core.DataProvider, params: aidge_core.aidge_core.fibnlr_Parameters) list[float]#
Computes the NNPR value by running inferences over a dataset. This is step 2 of the FIBNLR recipe.
- Returns:
A vector of the convergence values after each iteration.
fibnlr_compute_NNPR#
- aidge_core.fibnlr_compute_NNPR(graph: aidge_core.aidge_core.GraphView, params: aidge_core.aidge_core.fibnlr_Parameters) dict[aidge_core.aidge_core.Node, float]#
Computes the Normalized Negative/Positive Ratio (NNPR) for each node. Useful if you wish to recompute NNPR using a different normalization method without having to recompute NPR.
Fast Input-Based Non-Linearity Pruning is a two-step recipe; this metric (the Normalized Negative per Positive ratio) is currently only computed for ReLU nodes.
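The documentation does not spell out the exact NNPR formula; as a rough, hypothetical illustration (an assumption on my part, not Aidge’s definition), a raw Negative-per-Positive ratio over a ReLU’s pre-activations could look like:

```python
def npr(pre_activations):
    # Hypothetical sketch: ratio of negative to positive pre-activation samples.
    neg = sum(1 for v in pre_activations if v < 0)
    pos = sum(1 for v in pre_activations if v > 0)
    return neg / pos if pos else float("inf")

print(npr([-1.0, -0.5, 2.0, 3.0]))  # 1.0: as many negatives as positives
```

A ReLU whose inputs are almost never negative behaves nearly as an identity, which is what makes it a pruning candidate.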
fibnlr_find_groups#
- aidge_core.fibnlr_find_groups(graph: aidge_core.aidge_core.GraphView) set[aidge_core.aidge_core.Group]#
Returns a set of all groups found in a graph (using the Group attributes).
fibnlr_find_node_by_group#
- aidge_core.fibnlr_find_node_by_group(graph: aidge_core.aidge_core.GraphView, group: aidge_core.aidge_core.Group) set[aidge_core.aidge_core.Node]#
Returns a set of nodes belonging to the given group.
fibnlr_prepare#
- aidge_core.fibnlr_prepare(graph_view: aidge_core.aidge_core.GraphView, params: aidge_core.aidge_core.fibnlr_Parameters) set[aidge_core.aidge_core.Node]#
Prepares non-linearity nodes by adding a hook to compute the NPR value. This is step 1 of the FIBNLR recipe.
- Returns:
Set of prepared nodes
fibnlr_prune#
- aidge_core.fibnlr_prune(graph_view: aidge_core.aidge_core.GraphView, params: aidge_core.aidge_core.fibnlr_Parameters) set[aidge_core.aidge_core.Node]#
Prunes relevant non-linearity nodes and replaces them with an identity node.
fold_constantOfShape#
- aidge_core.fold_constantOfShape(graph_view: aidge_core.aidge_core.GraphView) int#
Computes the output of any ConstantOfShape operator with a constant input and replaces it with a Producer.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe.
Note
Currently, this function only matches the query “Producer->ConstantOfShape”.
- Returns:
Number of replacements.
- Return type:
int
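What gets pre-computed can be sketched as follows (assuming ONNX-style ConstantOfShape semantics, which is an assumption here): a constant shape input fully determines the output tensor, so the node can be replaced by a Producer holding it.

```python
def constant_of_shape(shape, value=0.0):
    """Flattened output tensor for a constant shape input."""
    n = 1
    for d in shape:
        n *= d
    return [value] * n

print(constant_of_shape([2, 3], 1.0))  # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```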
fuse_batchnorm#
- aidge_core.fuse_batchnorm(graph_view: aidge_core.aidge_core.GraphView) None#
Recipe to fuse BatchNorm with Conv or FC. Ref: https://nenadmarkus.com/p/fusing-batchnorm-and-conv/
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
fuse_convolution#
- aidge_core.fuse_convolution(graph_view: aidge_core.aidge_core.GraphView) None#
Recipe to fuse a 1x1 convolution with another convolution layer. Depending on the pruning strategy used, you may first apply the fibnlr recipes and remove_identity, then this one.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
- Returns:
True if at least one fusion happened.
Merge “pointwise” convolutions with any other preceding or following convolution with no non-linear activations in between them.
Said otherwise, the patterns NxN -> 1x1 and 1x1 -> NxN are valid candidates for this recipe. Dilated convolutions are not supported.
Multiple passes are done to fuse all convolutions matching this pattern.
The fused node will always reuse the first node’s name.
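Why the fusion is valid can be checked by hand (a plain Python illustration, not the Aidge code): a 1x1 convolution only mixes channels, so for the 1x1 -> NxN pattern the two weight sets compose into a single NxN kernel by contracting over the intermediate channels.

```python
def fuse_1x1_then_nxn(m, w):
    """m[mid][cin]: 1x1 weights; w[out][mid][k][l]: NxN weights.
    Returns fused[out][cin][k][l] obtained by contracting over mid channels."""
    C_out, C_mid = len(w), len(m)
    C_in, K = len(m[0]), len(w[0][0])
    return [[[[sum(w[o][c][k][l] * m[c][ci] for c in range(C_mid))
               for l in range(K)] for k in range(K)]
             for ci in range(C_in)] for o in range(C_out)]

m = [[1.0, 2.0], [0.5, -1.0]]                               # 2 mid channels, 2 in channels
w = [[[[1.0, 0.0], [0.0, 1.0]], [[2.0, 1.0], [0.0, 0.0]]]]  # 1 out, 2 mid, 2x2 kernel
fused = fuse_1x1_then_nxn(m, w)
# fused[0][0][0][0] = w[0][0][0][0]*m[0][0] + w[0][1][0][0]*m[1][0] = 1*1 + 2*0.5 = 2.0
print(fused[0][0][0][0])  # 2.0
```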
fuse_to_metaops#
- aidge_core.fuse_to_metaops(*args, **kwargs)#
Overloaded function.
fuse_to_metaops(gm: aidge_core.aidge_core.SinglePassGraphMatching, query: str, type: str = '', graph_func: collections.abc.Callable[[aidge_core.aidge_core.GraphView], None] = None) -> int
Fuse each sub-graph matching a query into a Meta Operator.
- param gm:
SinglePassGraphMatching containing the graph to manipulate
- type gm:
aidge_core.SinglePassGraphMatching
- param query:
Sub-graph matching query
- type query:
str
- param type:
Type name of the resulting meta operators
- type type:
str, optional
- param graph_func:
Function to apply to the matched graph before building the meta-op
- type graph_func:
function, optional
- return:
Number of sub-graphs actually fused into a Meta Operator.
- rtype:
int
fuse_to_metaops(graph_view: aidge_core.aidge_core.GraphView, query: str, type: str = '', graph_func: collections.abc.Callable[[aidge_core.aidge_core.GraphView], None] = None) -> int
Fuse each sub-graph matching a query into a Meta Operator.
- param graph_view:
Graph view on which we want to apply the recipe
- type graph_view:
aidge_core.GraphView
- param query:
Sub-graph matching query
- type query:
str
- param type:
Type name of the resulting meta operators
- type type:
str, optional
- param graph_func:
Function to apply to the matched graph before building the meta-op
- type graph_func:
function, optional
- return:
Number of sub-graphs actually fused into a Meta Operator.
- rtype:
int
init_producer#
- aidge_core.init_producer(graph: aidge_core.aidge_core.GraphView, pattern: str, filler: collections.abc.Callable[[aidge_core.aidge_core.Tensor], None]) int#
inject_ber_to_layer#
- aidge_core.inject_ber_to_layer(graph_view: aidge_core.aidge_core.GraphView, bit_error_rate: SupportsFloat | SupportsIndex, layer_name: str) None#
Inject bitflips to specific layer in the network.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
bit_error_rate (float) – Bit error rate to apply
layer_name (str) – Name of the layer to inject noise into
inject_ber_weights#
- aidge_core.inject_ber_weights(graph_view: aidge_core.aidge_core.GraphView, bit_error_rate: SupportsFloat | SupportsIndex) None#
Inject BER to weights.
inject_bitflip#
- aidge_core.inject_bitflip(graph_view: aidge_core.aidge_core.GraphView, nb_bits: SupportsInt | SupportsIndex) None#
Inject faults in the network using the N-bit flip model.
This function updates the given GraphView by inserting fault nodes between weight producers and their consuming nodes. Fault nodes randomly flip bits in the weights during the forward pass, simulating single event upsets (SEUs) based on the total number of bits specified by nb_bits.
- Parameters:
graph_view (aidge_core.GraphView) – The computational graph to be modified.
nb_bits (int) – Total number of bits to flip in the network weights.
- Returns:
None. The graph is modified in place.
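The underlying bit-flip fault model can be illustrated on a single weight (standard IEEE-754 manipulation, not the Aidge fault node itself):

```python
import struct

def flip_bit(value, bit):
    """Flip one bit of a float32 value through its raw 32-bit pattern."""
    (raw,) = struct.unpack("<I", struct.pack("<f", value))
    (out,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return out

print(flip_bit(1.0, 22))  # 1.5: flipping the top mantissa bit of 1.0
```

Flipping the same bit twice restores the original weight, which is why a precomputed fault pattern (see inject_fixed_bitflip below) is reproducible across runs.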
inject_bitflips_to_layers#
- aidge_core.inject_bitflips_to_layers(graph_view: aidge_core.aidge_core.GraphView, layer_names: collections.abc.Sequence[str], bit_error_rate: SupportsFloat | SupportsIndex) None#
Inject BER faults to multiple layers by name.
This function calls inject_ber_to_layer for each layer name in the list, inserting BitErrorRate fault nodes between weight producers and the specified layers.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
layer_names (list of str) – List of layer names to inject faults into (e.g., [“layer1”, “layer2”])
bit_error_rate (float) – Bit error rate to apply (e.g., 1e-9)
inject_bitflips_to_operators#
- aidge_core.inject_bitflips_to_operators(graph_view: aidge_core.aidge_core.GraphView, operator_types: collections.abc.Sequence[str], bit_error_rate: SupportsFloat | SupportsIndex) None#
Inject BER faults to weights of specified operator types.
This function inserts BitErrorRate fault nodes between weight producers and operators of the specified types. For example, calling with operator_types=[‘FC’] will inject faults between all weight producers and FC (Fully Connected) nodes in the graph.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
operator_types (list of str) – List of operator type names to inject faults into (e.g., [“FC”, “Conv”])
bit_error_rate (float) – Bit error rate to apply (e.g., 1e-9)
- Example:
>>> # Inject faults to FC layer weights
>>> aidge_core.inject_bitflips_to_operators(graph, ['FC'], 1e-9)
>>>
>>> # Inject faults to both FC and Conv layer weights
>>> aidge_core.inject_bitflips_to_operators(graph, ['FC', 'Conv'], 1e-9)
inject_fixed_bitflip#
- aidge_core.inject_fixed_bitflip(graph_view: aidge_core.aidge_core.GraphView, nb_bits: SupportsInt | SupportsIndex) None#
Inject faults in the network using a fixed N-bit flip model.
This function updates the given GraphView by inserting fault nodes that apply a fixed fault pattern during the forward pass. The affected weight indices and bit positions are determined at node creation based on random sampling. This precomputed fault pattern remains unchanged during execution.
- Parameters:
graph_view (aidge_core.GraphView) – The computational graph to be modified.
nb_bits (int) – Total number of bits to flip in the network weights.
- Returns:
None. The graph is modified in place.
inject_noise_to_weights#
- aidge_core.inject_noise_to_weights(graph_view: aidge_core.aidge_core.GraphView, stddev: SupportsFloat | SupportsIndex) None#
Inject static faults to weights by adding normal distribution noise.
This function modifies the values of producer parameters (weights) in the network by adding noise sampled from a normal distribution with mean 0.0 and the given standard deviation. The function finds all producer parameter edges in the graph and applies the noise directly to the weight tensors. Only Tensor data of type Float32 or Float64 is modified; other data types are skipped.
Unlike dynamic fault injection methods that insert fault nodes into the graph, this function permanently modifies the weight values themselves, simulating static faults or weight degradation.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
stddev (float) – Standard deviation of the noise to inject
- Returns:
None. The graph weights are modified in place.
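The static noise model can be sketched like this (an illustration only; the function and seeding here are not part of Aidge):

```python
import random

def add_weight_noise(weights, stddev, seed=0):
    """Return weights perturbed by zero-mean Gaussian noise."""
    rng = random.Random(seed)
    return [w + rng.gauss(0.0, stddev) for w in weights]

noisy = add_weight_noise([1.0, -0.5, 0.25], stddev=0.01)
# Each weight moves by a small amount on the order of stddev.
```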
matmul_to_fc#
- aidge_core.matmul_to_fc(graph_view: aidge_core.aidge_core.GraphView) None#
Recipe to fuse MatMul and Add operators into an aidge_core.FC operator.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
producers#
- aidge_core.producers(graphview: aidge_core.aidge_core.GraphView, constant: bool = False) list[aidge_core.aidge_core.Node]#
remove_dropout#
- aidge_core.remove_dropout(graph_view: aidge_core.aidge_core.GraphView) int#
Recipe to remove dropout operators.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
- Returns:
Number of removed operators.
- Return type:
int
remove_flatten#
- aidge_core.remove_flatten(graph_view: aidge_core.aidge_core.GraphView) None#
Recipe to remove a Flatten operator if it is followed by an FC or a MatMul. The recipe can remove multiple Flatten operators if they are chained one after the other.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe.
remove_identity#
- aidge_core.remove_identity(graph_view: aidge_core.aidge_core.GraphView) int#
Recipe to remove identity operators.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
- Returns:
Number of removed operators.
- Return type:
int
remove_node#
- aidge_core.remove_node(graph_view: aidge_core.aidge_core.GraphView, type: str, incProducers: bool = False) int#
Recipe to remove operators of a given type.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
type (str) – Type of the operators to remove
incProducers (bool) – If true, also remove attached Producers
- Returns:
Number of removed operators.
- Return type:
int
set_number_of_steps#
- aidge_core.set_number_of_steps(graph_view: aidge_core.aidge_core.GraphView, n_steps: SupportsInt | SupportsIndex) None#
Set the maximum number of steps for all Leaky meta-operators in the graph.
This recipe traverses the graph. For every Leaky meta-operator found, it updates all internal Memorize operators so their end_step equals the provided n_steps value.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
n_steps (int) – Number of time steps to set
split_multi_devices#
- aidge_core.split_multi_devices(graph_view: aidge_core.aidge_core.GraphView, dims: collections.abc.Sequence[SupportsInt | SupportsIndex], devices: collections.abc.Sequence[SupportsInt | SupportsIndex], split_axis: SupportsInt | SupportsIndex = 0) aidge_core.aidge_core.GraphView#
Split the graph over the given axis on multiple devices.
- Parameters:
graph_view (aidge_core.GraphView) – Graph view on which we want to apply the recipe
dims (List) – Graph input dimensions
devices (List) – List of devices to use
split_axis (int, optional) – Splitting axis
- Returns:
A multi-devices graph
- Return type:
aidge_core.GraphView
stft_to_dft#
- aidge_core.stft_to_dft(*args, **kwargs)#
Overloaded function.
stft_to_dft(node: aidge_core.aidge_core.Node) -> None
Expand aidge_core.STFT operator to a meta operator containing only aidge_core.DFT. It requires that the graph dimensions are forwarded first and will not change afterwards. This recipe is useful when there is no STFT implementation on target.
- param node:
The aidge_core.STFT node to be replaced
- type node:
aidge_core.Node
stft_to_dft(graph: aidge_core.aidge_core.GraphView) -> int
Expand aidge_core.STFT operator to a meta operator containing only aidge_core.DFT. It requires that the graph dimensions are forwarded first and will not change afterwards. This recipe is useful when there is no STFT implementation on target.
- param graph:
Graph view on which we want to apply the recipe
- type graph:
aidge_core.GraphView
Node Recipes#
apply_weight_interleaving#
- aidge_core.apply_weight_interleaving(node: aidge_core.aidge_core.Node) None#
Replace weight Producer linked to the given node with a weight producer with interleaving and format NHWC. This recipe is specific to the ARM cortex-m export for low bit integer support.
- Parameters:
node – Node which linked weights will receive interleaving
The node passed contains an operator whose input of index 1 is supposed to be weights of type Int4, Int3, Int2 or binary. This recipe only performs memory transformations on the weight tensor: first, it permutes the dimensions to match the NHWC data format; second, it compacts the last (channel) dimension of the weights into 8 bits.
expand_metaop#
- aidge_core.expand_metaop(node: aidge_core.aidge_core.Node, name_format: str = '{0}', unique_name: bool = False) bool#
Replace a single meta operator node with its micro graph.
- Parameters:
node (aidge_core.Node) – Node to expand
name_format (str) – The formatting string to be used with fmt::format() for naming the nodes from the meta-op (inner nodes) in the expanded graph. The usable positional arguments are the following: {0} inner node name, {1} inner node type, {2} meta-node name, {3} meta-node type. Default is {0} (inner node name).
unique_name (bool) – If True, ensure that the expanded node names are unique in the expanded graph.
- Returns:
True if the node is indeed a meta operator and could be expanded.
get_connected_graph_view#
- aidge_core.get_connected_graph_view(arg0: aidge_core.aidge_core.Node) aidge_core.aidge_core.GraphView#
Create a GraphView containing all nodes with a path to the given Node.
get_conv_horizontal_tiling#
- aidge_core.get_conv_horizontal_tiling(node: aidge_core.aidge_core.Node, axis: SupportsInt | SupportsIndex, nb_slices: SupportsInt | SupportsIndex) set[aidge_core.aidge_core.Node]#
matMul_tiling#
- aidge_core.matMul_tiling(node: aidge_core.aidge_core.Node, max_dims: collections.abc.Sequence[SupportsInt | SupportsIndex]) None#
Tile any aidge_core.MatMul operator into several fixed-size matrix multiplications. For instance, for a MatMul of size 80x80 and a tiling of 16x16, this will tile the MatMul operator into 25 (5 by 5) MatMul operators of size 16x16, with Slice operators inserted at the inputs and Concat operators inserted at the outputs.
This is especially useful when matrix multiplication must be mapped to fixed maximum size hardware TPU (Tensor Processing Unit) or MMA (Matrix Multiplication Accelerator). This recipe can be combined with the aidge_core.conv_to_matMul recipe in order to convert convolutions to matrix multiplications beforehand, and the aidge_core.constant_folding recipe to fold sliced constant tensors.
Initial graph:
[Diagram: a single 80x80 MatMul#0 fed by two Producers.]
Graph generated by a single step of the matMulTiling recipe (after the very first matrix multiplication split):
[Diagram: the MatMul split into MatMul#0 and MatMul#1, with Slice operators inserted at the inputs and a Concat operator at the output.]
- Parameters:
node (aidge_core.Node) – Operator to be tiled
max_dims (List) – Maximum output dimensions of the tiled MatMul operators
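The 80x80 / 16x16 example works out as plain arithmetic (an illustration, not the Aidge implementation):

```python
import math

def tile_count(rows, cols, tile_r, tile_c):
    """Number of MatMul tiles produced for a rows x cols output."""
    return math.ceil(rows / tile_r) * math.ceil(cols / tile_c)

def tile_slices(size, tile):
    """Start/end offsets the inserted Slice operators cover on one axis."""
    return [(s, min(s + tile, size)) for s in range(0, size, tile)]

print(tile_count(80, 80, 16, 16))  # 25 (5 by 5), as in the description above
print(tile_slices(80, 16))         # [(0, 16), (16, 32), (32, 48), (48, 64), (64, 80)]
```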
to_generic_op#
- aidge_core.to_generic_op(node: aidge_core.aidge_core.Node) None#
Transform to a Generic Operator.
- Parameters:
node – Node whose Operator will be turned into a Generic Operator