Operators#
Operator base class#
aidge_core.Operator is Aidge’s base class for describing a mathematical operator. It makes no assumptions about the data coding.
- class aidge_core.Operator#
- __init__(*args, **kwargs)#
- associate_input(self: aidge_core.aidge_core.Operator, inputIdx: SupportsInt, data: aidge_core.aidge_core.Data) None#
- backend(self: aidge_core.aidge_core.Operator) str#
- forward(*args, **kwargs)#
Overloaded function.
forward(self: aidge_core.aidge_core.Operator) -> None
forward(self: aidge_core.aidge_core.Operator, input: aidge_core.aidge_core.Data) -> list[aidge_core.aidge_core.Data]
forward(self: aidge_core.aidge_core.Operator, inputs: collections.abc.Sequence[aidge_core.aidge_core.Data]) -> list[aidge_core.aidge_core.Data]
- get_impl(self: aidge_core.aidge_core.Operator) aidge_core.aidge_core.OperatorImpl#
- get_raw_input(self: aidge_core.aidge_core.Operator, inputIdx: SupportsInt) aidge_core.aidge_core.Data#
- get_raw_output(self: aidge_core.aidge_core.Operator, outputIdx: SupportsInt) aidge_core.aidge_core.Data#
- input_category(*args, **kwargs)#
Overloaded function.
input_category(self: aidge_core.aidge_core.Operator) -> list[aidge_core.aidge_core.InputCategory]
Category of the inputs (Data or Param, optional or not). Data inputs exclude inputs expecting parameters (weights or bias).
- rtype:
list(InputCategory)
input_category(self: aidge_core.aidge_core.Operator, idx: typing.SupportsInt) -> aidge_core.aidge_core.InputCategory
Category of a specific input (Data or Param, optional or not). Data inputs exclude inputs expecting parameters (weights or bias).
- rtype:
InputCategory
- is_atomic(self: aidge_core.aidge_core.Operator) bool#
- is_back_edge(self: aidge_core.aidge_core.Operator, input_index: SupportsInt) bool#
- is_optional_input(self: aidge_core.aidge_core.Operator, inputIdx: SupportsInt) bool#
- nb_inputs(self: aidge_core.aidge_core.Operator) int#
- nb_outputs(self: aidge_core.aidge_core.Operator) int#
- set_back_edges(self: aidge_core.aidge_core.Operator, input_indexes: collections.abc.Set[SupportsInt]) None#
- set_backend(*args, **kwargs)#
Overloaded function.
set_backend(self: aidge_core.aidge_core.Operator, name: str, device: typing.SupportsInt = 0) -> None
set_backend(self: aidge_core.aidge_core.Operator, backends: collections.abc.Sequence[tuple[str, typing.SupportsInt]], allow_default_impl: bool = True, check_available_specs: bool = False) -> tuple[str, int]
- set_dataformat(self: aidge_core.aidge_core.Operator, dataFormat: aidge_core.aidge_core.dformat) None#
- set_datatype(self: aidge_core.aidge_core.Operator, dataType: aidge_core.aidge_core.dtype) None#
- set_impl(self: aidge_core.aidge_core.Operator, implementation: aidge_core.aidge_core.OperatorImpl) None#
- set_input(self: aidge_core.aidge_core.Operator, inputIdx: SupportsInt, data: aidge_core.aidge_core.Data) None#
- set_output(self: aidge_core.aidge_core.Operator, outputIdx: SupportsInt, data: aidge_core.aidge_core.Data) None#
- type(self: aidge_core.aidge_core.Operator) str#
-
class Operator : public std::enable_shared_from_this<Operator>#
Base class for all operator types in the Aidge framework.
The Operator class provides a foundation for implementing various operator types. Derived classes must implement specific behaviors for computation, attributes, and input/output handling.
Subclassed by Aidge::OperatorROIs, Aidge::OperatorTensor
Public Functions
-
Operator() = delete#
Deleted default constructor.
-
inline Operator(const std::string &type, const std::vector<InputCategory> &inputsCategory, const IOIndex_t nbOut, const OperatorType operatorType = OperatorType::Data)#
Constructs an Operator instance.
- Parameters:
type – [in] The type of operator (e.g., “Add”, “AveragePool”).
inputsCategory – [in] Categories of each input.
nbOut – [in] Number of outputs.
operatorType – [in] Type of operator (Data or Tensor).
-
virtual ~Operator() noexcept#
Virtual destructor.
-
virtual std::shared_ptr<Operator> clone() const = 0#
Creates a clone of the current operator.
Derived classes must implement this method to provide a deep copy of the operator.
- Returns:
A shared pointer to the cloned operator.
-
inline virtual std::shared_ptr<Attributes> attributes() const#
Returns the attributes of the operator.
-
inline virtual std::shared_ptr<DynamicAttributes> inheritedAttributes() const#
Get the currently associated Node’s attributes.
If no Node has been associated with the Operator, returns a nullptr.
Note
As Operators have only been tested with a single associated Node, only attributes of the first associated Node are returned. This should be updated.
- Returns:
Shared pointer to the Attributes of the associated Node.
-
virtual void associateInput(const IOIndex_t inputIdx, const std::shared_ptr<Data> &data) = 0#
Associates a shallow copy of the specified input data with the operator.
Derived classes must implement this method.
- Parameters:
inputIdx – [in] Index of the input to associate.
data – [in] Data to associate.
-
virtual void resetInput(const IOIndex_t inputIdx) = 0#
Resets the specified input.
- Parameters:
inputIdx – [in] Index of the input to reset.
-
virtual void setInput(const IOIndex_t inputIdx, const std::shared_ptr<Data> &data) = 0#
Sets the specified input with a deep copy of the given data.
Derived classes must implement this method.
- Parameters:
inputIdx – [in] Index of the input to set.
data – [in] Data to set.
-
virtual std::shared_ptr<Data> getRawInput(const IOIndex_t inputIdx) const = 0#
Retrieves the raw input data for the specified index.
-
virtual void setOutput(const IOIndex_t outputIdx, const std::shared_ptr<Data> &data) = 0#
Sets the specified output with a deep copy of the given data.
Derived classes must implement this method.
- Parameters:
outputIdx – [in] Index of the output to set.
data – [in] Data to set.
-
virtual std::shared_ptr<Data> getRawOutput(const IOIndex_t outputIdx) const = 0#
Retrieves the raw output data for the specified index.
-
inline virtual std::string backend() const noexcept#
Returns the backend implementation name.
- Returns:
The name of the backend implementation.
-
virtual void setBackend(const std::string &name, DeviceIdx_t device = 0) = 0#
Sets the backend implementation.
- Parameters:
name – [in] Name of the backend.
device – [in] Device index.
-
virtual std::pair<std::string, DeviceIdx_t> setBackend(const std::vector<std::pair<std::string, DeviceIdx_t>> &backends, bool allowDefaultImpl = true, bool checkAvailableSpecs = false) = 0#
Sets the backend implementation for multiple devices.
- Parameters:
backends – [in] A vector of backend and device index pairs.
allowDefaultImpl – [in] If true, allow falling back to the default implementation (if one exists) when no other backend is found.
checkAvailableSpecs – [in] If true, also check that there is a matching implementation spec in the backend.
-
virtual void setDataFormat(const DataFormat &dataFormat) const = 0#
-
virtual std::set<std::string> getAvailableBackends() const = 0#
Returns the available backend implementations.
Derived classes must implement this method.
- Returns:
A set of available backend names.
-
inline void setImpl(std::shared_ptr<OperatorImpl> impl)#
Set a new OperatorImpl for the Operator.
-
inline std::shared_ptr<OperatorImpl> getImpl() const noexcept#
Get the OperatorImpl of the Operator.
-
virtual Elts_t getNbRequiredData(const IOIndex_t inputIdx) const#
Minimum amount of data from a specific input for one computation pass.
- Parameters:
inputIdx – Index of the input analyzed.
- Returns:
Elts_t
-
virtual Elts_t getNbRequiredProtected(const IOIndex_t inputIdx) const#
-
virtual Elts_t getRequiredMemory(const IOIndex_t outputIdx, const std::vector<DimSize_t> &inputsSize) const#
-
virtual Elts_t getNbConsumedData(const IOIndex_t inputIdx) const#
Total amount of consumed data from a specific input.
- Parameters:
inputIdx – Index of the input analyzed.
- Returns:
Elts_t
-
virtual Elts_t getNbProducedData(const IOIndex_t outputIdx) const#
Total amount of produced data ready to be used on a specific output.
- Parameters:
outputIdx – Index of the output analyzed.
- Returns:
Elts_t
-
virtual void updateConsummerProducer()#
-
virtual void resetConsummerProducer()#
-
virtual void forward()#
-
virtual void backward()#
-
inline std::string type() const noexcept#
Returns the type of the operator.
- Returns:
The operator type as a string.
-
inline OperatorType operatorType() const noexcept#
Returns the type of the operator (Data or Tensor).
- Returns:
The operator type as an OperatorType enum value.
-
inline std::vector<InputCategory> inputCategory() const#
Returns the categories of all inputs.
-
InputCategory inputCategory(IOIndex_t idx) const#
Returns the category of a specific input.
-
inline bool isOptionalInput(std::size_t inputIdx) const#
-
inline virtual bool isAtomic() const noexcept#
-
inline IOIndex_t nbInputs() const noexcept#
Returns the number of inputs.
-
inline IOIndex_t nbOutputs() const noexcept#
Returns the number of outputs.
-
inline void setBackEdges(const std::set<IOIndex_t> &backEdges)#
Sets the back edge input indexes for recurring operators.
- Parameters:
backEdges – [in] A set of input indexes representing back edges.
-
inline bool isBackEdge(IOIndex_t inputIdx) const#
Checks if a given input index is a back edge.
- Parameters:
inputIdx – [in] Index of the input to check.
- Returns:
True if the input index is a back edge, false otherwise.
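To illustrate why recurring operators declare back edges, here is a plain-Python sketch (hypothetical names, not the aidge_core API): a scheduler topologically sorts the graph, and edges flagged as back edges are skipped so that a recurrence loop does not prevent finding a valid order.

```python
def topo_sort(nodes, edges, back_edges=frozenset()):
    """Kahn's algorithm; 'edges' is a set of (src, dst) pairs.
    Pairs listed in 'back_edges' are ignored, mirroring is_back_edge()."""
    fwd = {(s, d) for (s, d) in edges if (s, d) not in back_edges}
    indeg = {n: 0 for n in nodes}
    for _, d in fwd:
        indeg[d] += 1
    ready = [n for n in nodes if indeg[n] == 0]
    order = []
    while ready:
        n = ready.pop()
        order.append(n)
        for s, d in fwd:
            if s == n:
                indeg[d] -= 1
                if indeg[d] == 0:
                    ready.append(d)
    if len(order) != len(nodes):
        raise ValueError("cycle: no valid schedule")
    return order

# A Memorize-style loop: add -> mem -> add closes a cycle.
nodes = ["producer", "add", "mem"]
edges = {("producer", "add"), ("add", "mem"), ("mem", "add")}
# Without marking the recurrence as a back edge, scheduling fails;
# with it, a valid order exists.
order = topo_sort(nodes, edges, back_edges={("mem", "add")})
```

The same graph without the back-edge annotation has no valid schedule, which is exactly the situation setBackEdges() resolves for recurring operators.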
Set the pointer of mOutput[outputIdx] to be equal to data. Warning: this function should be used with great care, as it may break the graph dataflow. You have been warned.
Public Static Attributes
-
static const std::pair<std::string, DeviceIdx_t> NoBackend#
OperatorTensor base class#
aidge_core.OperatorTensor derives from the aidge_core.Operator base class and is the base class for any tensor-based operator.
- class aidge_core.OperatorTensor#
- __init__(*args, **kwargs)#
- associate_input(self: aidge_core.aidge_core.Operator, inputIdx: SupportsInt, data: aidge_core.aidge_core.Data) None#
- backend(self: aidge_core.aidge_core.Operator) str#
- dims_forwarded(self: aidge_core.aidge_core.OperatorTensor) bool#
- forward(self: aidge_core.aidge_core.OperatorTensor) None#
- forward_dims(self: aidge_core.aidge_core.OperatorTensor, allow_data_dependency: bool = False) bool#
- forward_dtype(self: aidge_core.aidge_core.OperatorTensor) bool#
- get_impl(self: aidge_core.aidge_core.Operator) aidge_core.aidge_core.OperatorImpl#
- get_input(self: aidge_core.aidge_core.OperatorTensor, inputIdx: SupportsInt) aidge_core.aidge_core.Tensor#
- get_inputs(self: aidge_core.aidge_core.OperatorTensor) list[aidge_core.aidge_core.Tensor]#
- get_output(self: aidge_core.aidge_core.OperatorTensor, outputIdx: SupportsInt) aidge_core.aidge_core.Tensor#
- get_outputs(self: aidge_core.aidge_core.OperatorTensor) list[aidge_core.aidge_core.Tensor]#
- get_raw_input(self: aidge_core.aidge_core.Operator, inputIdx: SupportsInt) aidge_core.aidge_core.Data#
- get_raw_output(self: aidge_core.aidge_core.Operator, outputIdx: SupportsInt) aidge_core.aidge_core.Data#
- input_category(*args, **kwargs)#
Overloaded function.
input_category(self: aidge_core.aidge_core.Operator) -> list[aidge_core.aidge_core.InputCategory]
Category of the inputs (Data or Param, optional or not). Data inputs exclude inputs expecting parameters (weights or bias).
- rtype:
list(InputCategory)
input_category(self: aidge_core.aidge_core.Operator, idx: typing.SupportsInt) -> aidge_core.aidge_core.InputCategory
Category of a specific input (Data or Param, optional or not). Data inputs exclude inputs expecting parameters (weights or bias).
- rtype:
InputCategory
- is_atomic(self: aidge_core.aidge_core.Operator) bool#
- is_back_edge(self: aidge_core.aidge_core.Operator, input_index: SupportsInt) bool#
- is_optional_input(self: aidge_core.aidge_core.Operator, inputIdx: SupportsInt) bool#
- nb_inputs(self: aidge_core.aidge_core.Operator) int#
- nb_outputs(self: aidge_core.aidge_core.Operator) int#
- set_back_edges(self: aidge_core.aidge_core.Operator, input_indexes: collections.abc.Set[SupportsInt]) None#
- set_backend(*args, **kwargs)#
Overloaded function.
set_backend(self: aidge_core.aidge_core.Operator, name: str, device: typing.SupportsInt = 0) -> None
set_backend(self: aidge_core.aidge_core.Operator, backends: collections.abc.Sequence[tuple[str, typing.SupportsInt]], allow_default_impl: bool = True, check_available_specs: bool = False) -> tuple[str, int]
- set_dataformat(self: aidge_core.aidge_core.Operator, dataFormat: aidge_core.aidge_core.dformat) None#
- set_datatype(self: aidge_core.aidge_core.Operator, dataType: aidge_core.aidge_core.dtype) None#
- set_impl(self: aidge_core.aidge_core.Operator, implementation: aidge_core.aidge_core.OperatorImpl) None#
- set_input(self: aidge_core.aidge_core.OperatorTensor, inputIdx: SupportsInt, data: aidge_core.aidge_core.Data) None#
- set_output(self: aidge_core.aidge_core.OperatorTensor, outputIdx: SupportsInt, data: aidge_core.aidge_core.Data) None#
- type(self: aidge_core.aidge_core.Operator) str#
-
class OperatorTensor : public Aidge::Operator#
Base class for all operators that work with tensor inputs and outputs.
The OperatorTensor class provides an abstraction for operations on tensors, with features such as input/output management, dimension handling, and receptive field computation.
Subclassed by Aidge::OperatorTensorWithImpl< WeightInterleaving_Op >, Aidge::OperatorTensorWithImpl< Slice_Op, Slice_OpImpl >, Aidge::OperatorTensorWithImpl< ReLU_Op >, Aidge::OperatorTensorWithImpl< Pop_Op, Pop_OpImpl >, Aidge::OperatorTensorWithImpl< Mul_Op >, Aidge::OperatorTensorWithImpl< AvgPooling_Op< DIM > >, Aidge::OperatorTensorWithImpl< Atan_Op >, Aidge::OperatorTensorWithImpl< Split_Op, Split_OpImpl >, Aidge::OperatorTensorWithImpl< Sin_Op >, Aidge::OperatorTensorWithImpl< NBitFlip_Op >, Aidge::OperatorTensorWithImpl< Ln_Op >, Aidge::OperatorTensorWithImpl< LeakyReLU_Op >, Aidge::OperatorTensorWithImpl< DepthToSpace_Op, DepthToSpace_OpImpl >, Aidge::OperatorTensorWithImpl< Cos_Op >, Aidge::OperatorTensorWithImpl< ConvDepthWise_Op< DIM > >, Aidge::OperatorTensorWithImpl< CentroidCropTransformation_Op >, Aidge::OperatorTensorWithImpl< BitShift_Op >, Aidge::OperatorTensorWithImpl< Sinh_Op >, Aidge::OperatorTensorWithImpl< Mod_Op >, Aidge::OperatorTensorWithImpl< Floor_Op >, Aidge::OperatorTensorWithImpl< Flatten_Op, Flatten_OpImpl >, Aidge::OperatorTensorWithImpl< FC_Op >, Aidge::OperatorTensorWithImpl< ColorSpaceTransformation_Op >, Aidge::OperatorTensorWithImpl< Ceil_Op >, Aidge::OperatorTensorWithImpl< CGPACT_Op >, Aidge::OperatorTensorWithImpl< ArgMax_Op >, Aidge::OperatorTensorWithImpl< Transpose_Op, TransposeImpl >, Aidge::OperatorTensorWithImpl< Sqrt_Op >, Aidge::OperatorTensorWithImpl< SliceExtractionTransformation_Op >, Aidge::OperatorTensorWithImpl< Pow_Op >, Aidge::OperatorTensorWithImpl< Min_Op >, Aidge::OperatorTensorWithImpl< MaxPooling_Op< DIM > >, Aidge::OperatorTensorWithImpl< Heaviside_Op >, Aidge::OperatorTensorWithImpl< Equal_Op >, Aidge::OperatorTensorWithImpl< DFT_Op >, Aidge::OperatorTensorWithImpl< Conv_Op< DIM > >, Aidge::OperatorTensorWithImpl< ComplexToInnerPair_Op, ComplexToInnerPair_OpImpl >, Aidge::OperatorTensorWithImpl< Reshape_Op, Reshape_OpImpl >, Aidge::OperatorTensorWithImpl< GlobalAveragePooling_Op >, 
Aidge::OperatorTensorWithImpl< DoReFa_Op >, Aidge::OperatorTensorWithImpl< AffineTransformation_Op >, Aidge::OperatorTensorWithImpl< Unfold_Op< DIM >, Unfold_OpImpl< DIM > >, Aidge::OperatorTensorWithImpl< Tile_Op, Tile_OpImpl >, Aidge::OperatorTensorWithImpl< Shape_Op, Shape_OpImpl >, Aidge::OperatorTensorWithImpl< PadCropTransformation_Op >, Aidge::OperatorTensorWithImpl< Neg_Op >, Aidge::OperatorTensorWithImpl< ChannelExtractionTransformation_Op >, Aidge::OperatorTensorWithImpl< Sigmoid_Op >, Aidge::OperatorTensorWithImpl< Pad_Op >, Aidge::OperatorTensorWithImpl< Dropout_Op >, Aidge::OperatorTensorWithImpl< Cosh_Op >, Aidge::OperatorTensorWithImpl< STFT_Op >, Aidge::OperatorTensorWithImpl< InnerPairToComplex_Op, InnerPairToComplex_OpImpl >, Aidge::OperatorTensorWithImpl< Exp_Op >, Aidge::OperatorTensorWithImpl< CryptoHash_Op >, Aidge::OperatorTensorWithImpl< ConstantOfShape_Op >, Aidge::OperatorTensorWithImpl< Sub_Op >, Aidge::OperatorTensorWithImpl< Squeeze_Op, Squeeze_OpImpl >, Aidge::OperatorTensorWithImpl< Reciprocal_Op >, Aidge::OperatorTensorWithImpl< Range_Op, Range_OpImpl >, Aidge::OperatorTensorWithImpl< Max_Op >, Aidge::OperatorTensorWithImpl< GridSample_Op >, Aidge::OperatorTensorWithImpl< Tan_Op >, Aidge::OperatorTensorWithImpl< Softmax_Op >, Aidge::OperatorTensorWithImpl< MatMul_Op >, Aidge::OperatorTensorWithImpl< LayerNorm_Op >, Aidge::OperatorTensorWithImpl< FixedNBitFlipOp >, Aidge::OperatorTensorWithImpl< CompressionNoiseTransformation_Op >, Aidge::OperatorTensorWithImpl< Add_Op >, Aidge::OperatorTensorWithImpl< Abs_Op >, Aidge::OperatorTensorWithImpl< TrimTransformation_Op >, Aidge::OperatorTensorWithImpl< RescaleTransformation_Op >, Aidge::OperatorTensorWithImpl< Memorize_Op, Memorize_OpImpl >, Aidge::OperatorTensorWithImpl< Div_Op >, Aidge::OperatorTensorWithImpl< Clip_Op >, Aidge::OperatorTensorWithImpl< BatchNorm_Op< DIM > >, Aidge::OperatorTensorWithImpl< Tanh_Op >, Aidge::OperatorTensorWithImpl< Round_Op >, Aidge::OperatorTensorWithImpl< 
ReduceSum_Op >, Aidge::OperatorTensorWithImpl< ReduceMin_Op >, Aidge::OperatorTensorWithImpl< ReduceMax_Op >, Aidge::OperatorTensorWithImpl< OneHot_Op >, Aidge::OperatorTensorWithImpl< HardSigmoid_Op >, Aidge::OperatorTensorWithImpl< FixedQ_Op >, Aidge::OperatorTensorWithImpl< Erf_Op >, Aidge::OperatorTensorWithImpl< Concat_Op, Concat_OpImpl >, Aidge::OperatorTensorWithImpl< Scatter_Op, Scatter_OpImpl >, Aidge::OperatorTensorWithImpl< BitErrorRate_Op >, Aidge::OperatorTensorWithImpl< Where_Op >, Aidge::OperatorTensorWithImpl< TopK_Op >, Aidge::OperatorTensorWithImpl< ShiftGELU_Op >, Aidge::OperatorTensorWithImpl< ScaleAdjust_Op >, Aidge::OperatorTensorWithImpl< LSQ_Op >, Aidge::OperatorTensorWithImpl< And_Op >, Aidge::OperatorTensorWithImpl< Sum_Op >, Aidge::OperatorTensorWithImpl< RangeAffineTransformation_Op >, Aidge::OperatorTensorWithImpl< InstanceNorm_Op >, Aidge::OperatorTensorWithImpl< Hardmax_Op >, Aidge::OperatorTensorWithImpl< Gather_Op, Gather_OpImpl >, Aidge::OperatorTensorWithImpl< Cast_Op, Cast_OpImpl >, Aidge::OperatorTensorWithImpl< CastLike_Op, CastLike_OpImpl >, Aidge::OperatorTensorWithImpl< Unsqueeze_Op, Unsqueeze_OpImpl >, Aidge::OperatorTensorWithImpl< TanhClamp_Op >, Aidge::OperatorTensorWithImpl< StackOp, StackOpImpl >, Aidge::OperatorTensorWithImpl< Resize_Op >, Aidge::OperatorTensorWithImpl< ReduceMean_Op >, Aidge::OperatorTensorWithImpl< LRN_Op >, Aidge::OperatorTensorWithImpl< Expand_Op >, Aidge::OperatorTensorWithImpl< ShiftMax_Op >, Aidge::OperatorTensorWithImpl< Select_Op, Select_OpImpl >, Aidge::OperatorTensorWithImpl< RandomNormalLike_Op >, Aidge::OperatorTensorWithImpl< Identity_Op, Identity_OpImpl >, Aidge::OperatorTensorWithImpl< ILayerNorm_Op >, Aidge::OperatorTensorWithImpl< Fold_Op< DIM > >, Aidge::OperatorTensorWithImpl< ConvTranspose_Op< DIM > >, Aidge::GenericOperator_Op, Aidge::MetaOperator_Op, Aidge::Move_Op, Aidge::OperatorTensorWithImpl< T, DEF_IMPL >, Aidge::Producer_Op
Public Functions
-
OperatorTensor() = delete#
-
OperatorTensor(const std::string &type, const std::vector<InputCategory> &inputsCategory, const IOIndex_t nbOut)#
Operator tensor constructor. This constructor is not meant to be called directly; it is invoked by derived class constructors, since every operator class derives from this class.
- Parameters:
type – [in] Type of operator (e.g., “Add”, “AveragePool”, …).
inputsCategory – [in] Describes the category of each input.
nbOut – [in] Number of tensors this operator will output.
-
OperatorTensor(const OperatorTensor &other)#
Copy constructor.
- Parameters:
other – [in] Another OperatorTensor instance to copy.
-
~OperatorTensor()#
Destructor for the OperatorTensor class.
-
virtual void associateInput(const IOIndex_t inputIdx, const std::shared_ptr<Data> &data) override#
Associates an input tensor with the operator.
- Parameters:
inputIdx – [in] Index of the input to associate.
data – [in] Shared pointer to the data to associate.
-
virtual void resetInput(const IOIndex_t inputIdx) override#
Resets the input tensor at a given index.
- Parameters:
inputIdx – [in] Index of the input to reset.
-
virtual void setInput(const IOIndex_t inputIdx, const std::shared_ptr<Data> &data) override#
Sets an input tensor for the operator.
- Parameters:
inputIdx – [in] Index of the input to set.
data – [in] Shared pointer to the data to set.
-
const std::shared_ptr<Tensor> &getInput(const IOIndex_t inputIdx) const#
Retrieves an input tensor.
- Parameters:
inputIdx – [in] Index of the input to retrieve.
- Returns:
Shared pointer to the input tensor.
-
virtual std::shared_ptr<Data> getRawInput(const IOIndex_t inputIdx) const final override#
Retrieves a raw input tensor.
- Parameters:
inputIdx – [in] Index of the input to retrieve.
- Returns:
Shared pointer to the raw input tensor.
-
virtual void setOutput(const IOIndex_t outputIdx, const std::shared_ptr<Data> &data) override#
Sets an output tensor for the operator.
- Parameters:
outputIdx – [in] Index of the output to set.
data – [in] Shared pointer to the data to set.
-
virtual const std::shared_ptr<Tensor> &getOutput(const IOIndex_t outputIdx) const#
Retrieves an output tensor.
- Parameters:
outputIdx – [in] Index of the output to retrieve.
- Returns:
Shared pointer to the output tensor.
-
virtual std::shared_ptr<Aidge::Data> getRawOutput(const Aidge::IOIndex_t outputIdx) const final override#
Retrieves a raw output tensor.
- Parameters:
outputIdx – [in] Index of the output to retrieve.
- Returns:
Shared pointer to the raw output tensor.
-
virtual std::vector<std::pair<std::vector<Aidge::DimSize_t>, std::vector<DimSize_t>>> computeReceptiveField(const std::vector<DimSize_t> &firstEltDims, const std::vector<DimSize_t> &outputDims, const IOIndex_t outputIdx = 0) const#
Computes the receptive field for a given output feature area.
- Parameters:
firstIdx – [in] First index of the output feature.
outputDims – [in] Dimensions of the output feature.
outputIdx – [in] Index of the output (default is 0).
- Returns:
Vector of pairs containing, for each data input tensor, the first index and dimensions of the feature area.
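For a strided convolution without padding, the receptive-field arithmetic behind computeReceptiveField can be sketched in plain Python (hypothetical helper, not the aidge_core API): given the first index and dimensions of an output patch, it returns the first index and dimensions of the input patch that influences it.

```python
def conv_receptive_field(first_idx, out_dims, kernel, stride):
    """Per spatial axis: input start = output start * stride,
    input extent = (output extent - 1) * stride + kernel."""
    in_first = [o * s for o, s in zip(first_idx, stride)]
    in_dims = [(d - 1) * s + k for d, k, s in zip(out_dims, kernel, stride)]
    return in_first, in_dims

# A 3x3 kernel with stride 2: a 2x2 output patch starting at (1, 1)
# depends on a 5x5 input patch starting at (2, 2).
first, dims = conv_receptive_field([1, 1], [2, 2], kernel=[3, 3], stride=[2, 2])
```

The actual method returns one such (first index, dimensions) pair per data input tensor.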
-
virtual bool forwardDims(bool allowDataDependency = false)#
Computes the dimensions of the operator’s output tensor based on input sizes.
If the output dimensions depend on undefined inputs, this function returns false and enters TOKEN mode.
TOKEN mode ensures that all inputs and outputs of the graph the node belongs to are connected.
- Parameters:
allowDataDependency – [in] Flag to indicate if output dimensions depend on optional parameter tensors.
- Returns:
True if dimensions are successfully computed, false otherwise.
-
virtual bool forwardDType()#
Computes the data type of the operator’s output tensor based on input data type.
For each operator input: if the input is an (optional) Param, the operator will forward
- Returns:
True if data types are successfully computed, false otherwise.
-
virtual bool dimsForwarded() const#
Checks if dimensions have been successfully forwarded.
- Returns:
True if dimensions are forwarded, false otherwise.
-
virtual void setDataType(const DataType &dataType) const override#
Sets the data type of the operator’s tensors.
Warning
Sets all outputs, but only inputs of category InputCategory::Param and InputCategory::OptionalParam.
- Parameters:
dataType – Data type to set.
-
virtual void setDataFormat(const DataFormat &dataFormat) const override#
Sets the data format of the operator’s tensors.
- Parameters:
dataFormat – Data format to set.
-
virtual void forward() override#
Executes the forward operation for the operator.
Set the pointer of mOutput[outputIdx] to be equal to data. Warning: this function should be used with great care, as it may break the graph dataflow. You have been warned.
Generic Operator#
A generic tensor-based operator can be used to model any mathematical operator that takes a defined number of inputs, produces a defined number of outputs, and may carry attributes. It is possible to provide a function that computes the output tensor sizes from the input sizes. It has a default consumer-producer model (it requires and consumes full input tensors and produces full output tensors).
This is the default operator used for unsupported ONNX operators when loading an ONNX model. While it obviously cannot be executed, a generic operator still has several uses:
It allows loading any graph, even one with unknown operators. It is possible to identify exactly all the missing operator types and their positions in the graph;
It can be searched and manipulated with graph matching, allowing for example to replace it with alternative operators;
It can be scheduled and included in the graph’s static scheduling.
🚧 A custom implementation may be provided in the future, even in pure Python, for rapid integration and prototyping.
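The generic-operator idea described above can be sketched in plain Python (all names here are hypothetical, not the aidge_core API): an operator identified only by its type string, fixed input/output counts, free-form attributes, and an optional callback computing output sizes from input sizes.

```python
class GenericOp:
    def __init__(self, type_, nb_in, nb_out, compute_output_dims=None, **attrs):
        self.type = type_
        self.nb_in = nb_in
        self.nb_out = nb_out
        self.attrs = attrs                   # plays the role of DynamicAttributes
        self._dims_fn = compute_output_dims  # optional shape-inference hook

    def forward_dims(self, input_dims):
        """Return output dims if a size function was provided, else None
        (the graph then has to stay in token mode for this node)."""
        if self._dims_fn is None:
            return None
        return self._dims_fn(input_dims)

# An unsupported ONNX node could be stored like this and still be matched,
# replaced, or scheduled, even though it cannot execute:
op = GenericOp("MyCustomOp", nb_in=1, nb_out=1,
               compute_output_dims=lambda dims: [dims[0]],  # identity-shaped
               alpha=0.5)
out = op.forward_dims([[1, 3, 224, 224]])
```

This mirrors how kwargs passed to aidge_core.GenericOperator become dynamic attributes, and how a shape function enables dimension propagation through an otherwise opaque node.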
- aidge_core.GenericOperator()#
GenericOperator(type: str, input_category: list[aidge_core.InputCategory], nb_out: int, name: str = '', **kwargs) -> aidge_core.aidge_core.Node
Creates a aidge_core.GenericOperatorOp with specified input and output counts.
- param type:
Type of the operator.
- type type:
str
- param inputCategory:
List of input categories.
- type inputCategory:
List[aidge_core.InputCategory]
- param nbOut:
Number of output tensors.
- type nbOut:
int
- param name:
Name of the operator, default=””
- type name:
str, Optional
- param kwargs:
Every kwargs provided will be interpreted as a :py:class: aidge_core.DynamicAttributes.
GenericOperator(type: str, nb_data: int, nb_param: int, nb_out: int, name: str = '', **kwargs) -> aidge_core.aidge_core.Node
Creates a aidge_core.GenericOperatorOp with specified input and output counts.
- param type:
Type of the operator.
- type type:
str
- param nbData:
Number of input data tensors.
- type nbData:
int
- param nbParam:
Number of parameter tensors.
- type nbParam:
int
- param nbOut:
Number of output tensors.
- type nbOut:
int
- param name:
Name of the operator, default=””
- type name:
str, Optional
- param kwargs:
Every kwargs provided will be interpreted as a :py:class: aidge_core.DynamicAttributes.
GenericOperator(type: str, op: aidge_core.aidge_core.OperatorTensor, name: str = '') -> aidge_core.aidge_core.Node
Creates a aidge_core.Node containing a aidge_core.GenericOperatorOp based on another aidge_core.Operator.
- Parameters:
type (str) – Type of the operator
op (aidge_core.Operator) – Existing operator to derive from.
name (str, Optional) – Name of the operator, default=””
-
std::shared_ptr<Node> Aidge::GenericOperator(const std::string &type, IOIndex_t nbData, IOIndex_t nbParam, IOIndex_t nbOut, const std::string &name = "")#
Creates a generic operator with specified input and output counts.
- Parameters:
type – [in] Type of the operator.
nbData – [in] Number of input data tensors.
nbParam – [in] Number of parameter tensors.
nbOut – [in] Number of output tensors.
name – [in] Optional name for the operator.
- Returns:
A shared pointer to the created operator node.
-
std::shared_ptr<Node> Aidge::GenericOperator(const std::string &type, std::shared_ptr<OperatorTensor> op, const std::string &name = "")#
Creates a generic operator based on another operator.
- Parameters:
type – [in] Type of the generic operator.
op – [in] Existing operator to derive from.
name – [in] Optional name for the operator.
- Returns:
A shared pointer to the created operator node.
Meta Operator#
A meta-operator (or composite operator) is internally built from a sub-graph.
- aidge_core.meta_operator(type: str, graph: aidge_core.aidge_core.GraphView, forced_inputs_category: collections.abc.Sequence[aidge_core.aidge_core.InputCategory] = [], name: str = '') aidge_core.aidge_core.Node#
Helper function to create a MetaOperator node.
- Parameters:
type – The type of the meta-operator.
graph – The micro-graph defining the meta-operator.
forcedInputsCategory – Optional input categories to override default behavior.
name – Optional name for the operator.
- Returns:
A shared pointer to the created Node.
Building a new meta-operator is simple:
auto graph = Sequential({
Pad<2>(padding_dims, (!name.empty()) ? name + "_pad" : ""),
MaxPooling<2>(kernel_dims, (!name.empty()) ? name + "_maxpooling" : "", stride_dims, ceil_mode)
});
return MetaOperator("PaddedMaxPooling2D", graph, name);
You can use aidge_core.expand_metaops to flatten the meta-operators in a graph.
Predefined operators#
Abs#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>AbsOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Abs(name: str = '') aidge_core.aidge_core.Node#
Add#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>AddOp</b>
"):::operator
In0[data_input_0]:::text-only -->|"In[0]"| Op
In1[data_input_n]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Add(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an Add operator that performs element-wise addition between two tensors.
- The operation is defined as:
Output = Input1 + Input2
The output tensor shape is determined by taking the maximum size along each dimension of the input tensors after broadcasting.
- Examples:
Input A: (3, 4, 2), Input B: (2), Output: (3, 4, 2)
Input A: (1, 5, 3), Input B: (2, 1, 3), Output: (2, 5, 3)
- Parameters:
name (str, optional) – Name of the node, default=””
- Returns:
A node containing the Add operator.
- Return type:
aidge_core.AddOp
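The broadcasting rule stated above can be sketched as a small pure-Python helper (hypothetical, not the aidge_core API): right-align the two shapes, pad the shorter one with 1s, and take the maximum extent along each dimension.

```python
def broadcast_shape(a, b):
    """Output shape of an element-wise op on inputs of shapes a and b."""
    a, b = list(a), list(b)
    # Left-pad the shorter shape with 1s so both ranks match.
    n = max(len(a), len(b))
    a = [1] * (n - len(a)) + a
    b = [1] * (n - len(b)) + b
    out = []
    for x, y in zip(a, b):
        if x != y and 1 not in (x, y):
            raise ValueError(f"incompatible dims {x} and {y}")
        out.append(max(x, y))
    return out

# The two examples from the documentation above:
s1 = broadcast_shape((3, 4, 2), (2,))        # [3, 4, 2]
s2 = broadcast_shape((1, 5, 3), (2, 1, 3))   # [2, 5, 3]
```

Dimensions that are neither equal nor 1 are rejected, which is the usual ONNX/NumPy-style broadcasting constraint.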
And#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>AndOp</b>
"):::operator
In0[data_input_1]:::text-only -->|"In[0]"| Op
In1[data_input_2]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.And(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an And operator that performs element-wise logical AND between two tensors.
The operation is defined as:
Output = Input1 AND Input2
The inputs must be boolean tensors (with values 0 or 1).
- Parameters:
name (str) – Name of the node (optional).
- Returns:
A node containing the And operator.
- Return type:
AndOp
ArgMax#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ArgMaxOp</b>
Attributes:
<sub><em>axis</em></sub>
<sub><em>keep_dims</em></sub>
<sub><em>select_last_index</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.ArgMax(axis: SupportsInt = 0, keep_dims: bool = True, select_last_index: bool = False, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an ArgMax operator.
- Parameters:
axis (int) – The axis along which to compute the max element. The accepted range is [-r, r-1], where r is the rank of the input tensor.
keepdims (bool) – If True (default), retains the reduced dimensions with size 1. If False, the reduced dimensions are removed.
select_last_index (bool) – If True, selects the last index if there are multiple occurrences of the max value. If False (default), selects the first occurrence.
name – name of the node.
-
std::shared_ptr<Node> Aidge::ArgMax(std::int32_t axis = 0, bool keep_dims = true, bool select_last_index = false, const std::string &name = "")#
Creates an ArgMax operation node.
This function constructs a new Node containing an ArgMax_Op operator with the specified attributes.
- Parameters:
axis – [in] The axis along which the ArgMax operation is performed.
keep_dims – [in] Whether to retain reduced dimensions with size 1 (
true) or remove them (false).select_last_index – [in] Whether to select the last occurrence of the maximum value (
true) or the first (false).name – [in] The name of the Node (optional).
- Returns:
A shared pointer to the newly created Node.
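The select_last_index attribute is easiest to see on a 1-D example. A plain-Python sketch (hypothetical helper, 1-D only, not the aidge_core API):

```python
def argmax_1d(values, select_last_index=False):
    """Index of the maximum; picks the last occurrence when
    select_last_index is True, the first occurrence otherwise."""
    best = max(values)
    idxs = [i for i, v in enumerate(values) if v == best]
    return idxs[-1] if select_last_index else idxs[0]

# The maximum 7 occurs at indices 1 and 3:
i_first = argmax_1d([1, 7, 3, 7])                          # 1
i_last = argmax_1d([1, 7, 3, 7], select_last_index=True)   # 3
```

The axis and keep_dims attributes generalize this to N-D tensors: the reduction runs along axis, and keep_dims controls whether that axis survives with size 1.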
Atan#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>AtanOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Atan(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an Atan operator.
- Parameters:
name (str) – Name of the node.
AvgPooling1D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>AvgPooling1DOp</b>
Attributes:
<sub><em>kernel_dims</em></sub>
<sub><em>stride_dims</em></sub>
<sub><em>dilations</em></sub>
<sub><em>ceil_mode</em></sub>
<sub><em>rounding_mode</em></sub>
<sub><em>ignore_pads</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.AvgPooling1D(kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1], dilations: collections.abc.Sequence[SupportsInt] = [1], ceil_mode: bool = False, rounding_mode: object = 'half_away_from_zero') aidge_core.aidge_core.Node#
Initialize a node containing an AvgPooling operator.
This function performs average pooling on the tensor with the given kernel and stride dimensions.
- Parameters:
kernel_dims (List[int]) – Size of the kernel applied during pooling.
dilations (List[int]) – The dilation value along each spatial axis of the filter.
ceil_mode (bool) – Whether to use ceil or floor when calculating the output dimensions.
name (str) – Name of the operator node (optional).
stride_dims (List[int], optional) – Stride dimensions for the pooling operation.
-
std::shared_ptr<Node> Aidge::AvgPooling1D(const std::array<DimSize_t, 1> &kernel_dims, const std::string &name = "", const std::array<DimSize_t, 1> &stride_dims = create_array<DimSize_t, 1>(1), const std::array<DimSize_t, 1> &dilations = create_array<DimSize_t, 1>(1), bool ceil_mode = false, RoundingMode roundingMode = RoundingMode::HalfAwayFromZero)#
Creates an AvgPooling1D operator node.
- Parameters:
kernel_dims – [in] Size of the pooling window for each spatial dimension.
name – [in] Name of the operator node. Defaults to an empty string.
stride_dims – [in] Step size (stride) for sliding the pooling window across the input dimensions. Defaults to 1 for each dimension.
dilations – [in] Spatial dilations for the pooling operation.
ceil_mode – [in] Indicates whether to use ceil mode for output size calculation.
- Returns:
A shared pointer to the created operator node.
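The effect of ceil_mode on the output size can be sketched in plain Python for the 1-D case. This is a simplified model of the documented behaviour: edge windows in ceil mode are averaged over their valid elements only, which is one common convention and an assumption here, not necessarily what every backend does.

```python
import math

def avg_pool_1d(x, kernel, stride=1, ceil_mode=False):
    """Average pooling over a 1-D sequence; floor (default) or ceil output sizing."""
    size = (len(x) - kernel) / stride + 1
    out_len = math.ceil(size) if ceil_mode else math.floor(size)
    out = []
    for i in range(out_len):
        window = x[i * stride : i * stride + kernel]  # may be shorter at the edge in ceil mode
        out.append(sum(window) / len(window))
    return out

print(avg_pool_1d([1, 2, 3, 4, 5], kernel=2, stride=2))                  # floor sizing
print(avg_pool_1d([1, 2, 3, 4, 5], kernel=2, stride=2, ceil_mode=True))  # one extra window
```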
AvgPooling2D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>AvgPooling2DOp</b>
Attributes:
<sub><em>kernel_dims</em></sub>
<sub><em>stride_dims</em></sub>
<sub><em>dilations</em></sub>
<sub><em>ceil_mode</em></sub>
<sub><em>rounding_mode</em></sub>
<sub><em>ignore_pads</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.AvgPooling2D(kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1], dilations: collections.abc.Sequence[SupportsInt] = [1, 1], ceil_mode: bool = False, rounding_mode: object = 'half_away_from_zero') aidge_core.aidge_core.Node#
Initialize a node containing an AvgPooling operator.
This function performs average pooling on the tensor with the given kernel and stride dimensions.
- Parameters:
kernel_dims (List[int]) – Size of the kernel applied during pooling.
dilations (List[int]) – The dilation value along each spatial axis of the filter.
ceil_mode (bool) – Whether to use ceil or floor when calculating the output dimensions.
name (str) – Name of the operator node (optional).
stride_dims (List[int], optional) – Stride dimensions for the pooling operation.
-
std::shared_ptr<Node> Aidge::AvgPooling2D(const std::array<DimSize_t, 2> &kernel_dims, const std::string &name = "", const std::array<DimSize_t, 2> &stride_dims = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2> &dilations = create_array<DimSize_t, 2>(1), bool ceil_mode = false, RoundingMode roundingMode = RoundingMode::HalfAwayFromZero)#
Creates an AvgPooling2D operator node.
- Parameters:
kernel_dims – [in] Size of the pooling window for each spatial dimension.
name – [in] Name of the operator node. Defaults to an empty string.
stride_dims – [in] Step size (stride) for sliding the pooling window across the input dimensions. Defaults to 1 for each dimension.
dilations – [in] Spatial dilations for the pooling operation.
ceil_mode – [in] Indicates whether to use ceil mode for output size calculation.
- Returns:
A shared pointer to the created operator node.
AvgPooling3D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>AvgPooling3DOp</b>
Attributes:
<sub><em>kernel_dims</em></sub>
<sub><em>stride_dims</em></sub>
<sub><em>dilations</em></sub>
<sub><em>ceil_mode</em></sub>
<sub><em>rounding_mode</em></sub>
<sub><em>ignore_pads</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.AvgPooling3D(kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1, 1], dilations: collections.abc.Sequence[SupportsInt] = [1, 1, 1], ceil_mode: bool = False, rounding_mode: object = 'half_away_from_zero') aidge_core.aidge_core.Node#
Initialize a node containing an AvgPooling operator.
This function performs average pooling on the tensor with the given kernel and stride dimensions.
- Parameters:
kernel_dims (List[int]) – Size of the kernel applied during pooling.
dilations (List[int]) – The dilation value along each spatial axis of the filter.
ceil_mode (bool) – Whether to use ceil or floor when calculating the output dimensions.
name (str) – Name of the operator node (optional).
stride_dims (List[int], optional) – Stride dimensions for the pooling operation.
-
std::shared_ptr<Node> Aidge::AvgPooling3D(const std::array<DimSize_t, 3> &kernel_dims, const std::string &name = "", const std::array<DimSize_t, 3> &stride_dims = create_array<DimSize_t, 3>(1), const std::array<DimSize_t, 3> &dilations = create_array<DimSize_t, 3>(1), bool ceil_mode = false, RoundingMode roundingMode = RoundingMode::HalfAwayFromZero)#
Creates an AvgPooling3D operator node.
- Parameters:
kernel_dims – [in] Size of the pooling window for each spatial dimension.
name – [in] Name of the operator node. Defaults to an empty string.
stride_dims – [in] Step size (stride) for sliding the pooling window across the input dimensions. Defaults to 1 for each dimension.
dilations – [in] Spatial dilations for the pooling operation.
ceil_mode – [in] Indicates whether to use ceil mode for output size calculation.
- Returns:
A shared pointer to the created operator node.
BatchNorm2D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>BatchNorm2DOp</b>
Attributes:
<sub><em>epsilon</em></sub>
<sub><em>momentum</em></sub>
<sub><em>training_mode</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[scale]:::text-only -->|"In[1]"| Op
In2[shift]:::text-only -->|"In[2]"| Op
In3[mean]:::text-only -->|"In[3]"| Op
In4[variance]:::text-only -->|"In[4]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.BatchNorm2D(nb_features: SupportsInt, epsilon: SupportsFloat = 9.999999747378752e-06, momentum: SupportsFloat = 0.10000000149011612, training_mode: bool = False, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a BatchNorm operator.
- Parameters:
nb_features (int) – The number of features in the input tensor.
epsilon (float) – A small value added to the denominator for numerical stability.
momentum (float) – The momentum factor for the moving averages.
training_mode (bool) – Whether the operator is in training mode or inference mode.
name (str) – Name of the node.
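In inference mode, the per-channel computation reduces to the standard normalization formula y = scale * (x - mean) / sqrt(var + epsilon) + shift. A minimal sketch on a single scalar (plain Python, not the aidge_core API):

```python
import math

def batchnorm_inference(x, scale, shift, mean, var, epsilon=1e-5):
    """y = scale * (x - mean) / sqrt(var + epsilon) + shift, for one channel."""
    return scale * (x - mean) / math.sqrt(var + epsilon) + shift

# With mean=0, var=1 and identity scale/shift, the input is only divided
# by sqrt(1 + epsilon), i.e. left almost unchanged.
print(batchnorm_inference(2.0, scale=1.0, shift=0.0, mean=0.0, var=1.0))
```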
BitErrorRate#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>BitErrorRateOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.BitErrorRate(n_bits: SupportsFloat, name: str = '') aidge_core.aidge_core.Node#
Create a BitErrorRate node with the specified number of bits.
BitShift#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>BitShiftOp</b>
Attributes:
<sub><em>direction</em></sub>
<sub><em>rounding</em></sub>
"):::operator
In0[InputTensor]:::text-only -->|"In[0]"| Op
In1[ShiftAmount]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[OutputTensor]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.BitShift(direction: object = 'right', rounding: bool = False, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an aidge_core.BitShiftOp operator. The bit shift operation shifts tensor elements either to the left (multiply by a power of two) or to the right (divide by a power of two).
Left shift: equivalent to x << n on integers, multiplies by 2**n.
Right shift: equivalent to x >> n on integers, divides by 2**n.
If rounding=True, values truncated by the right shift are rounded to the nearest integer instead of being simply floored.
- Parameters:
direction (aidge_core.BitShiftOp.direction or str) – Direction of the bit shift. Accepts aidge_core.BitShiftOp.direction or "left"/"right" (case-sensitive).
rounding (bool) – Apply rounding when shifting to the right. Has no effect on left shifts.
name (str) – Optional name for the created node.
- Returns:
The created aidge_core.BitShiftOp node.
- Return type:
aidge_core.BitShiftOp
- Raises:
ValueError – If direction is a string other than "left" or "right".
-
std::shared_ptr<Node> Aidge::BitShift(const BitShift_Op::Direction direction, bool rounding = false, const std::string &name = "")#
Factory function to create a BitShift node.
- Parameters:
direction – [in] The direction of the bitwise shift (left or right).
rounding – [in] Whether to apply rounding on right shifts.
name – [in] (Optional) Name of the node.
- Returns:
A shared pointer to the created node.
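The documented shift and rounding semantics can be sketched in plain Python for integers. The rounding here is round-half-up on the truncated bits, which matches "nearest integer" for positive values; it is a sketch of the documented behaviour, not the aidge_core implementation:

```python
def bit_shift(x, n, direction="right", rounding=False):
    """Shift x by n bits: left multiplies by 2**n, right divides by 2**n."""
    if direction == "left":
        return x << n            # x * 2**n
    if rounding and n > 0:
        # Add half of the divisor before shifting so truncated values round
        # to the nearest integer instead of being floored.
        return (x + (1 << (n - 1))) >> n
    return x >> n                # floor(x / 2**n)

print(bit_shift(3, 2, direction="left"))   # 3 * 4
print(bit_shift(7, 1))                     # floor(3.5)
print(bit_shift(7, 1, rounding=True))      # rounds 3.5 up
```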
CastLike#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>CastLikeOp</b>
"):::operator
In0[data_input_1]:::text-only -->|"In[0]"| Op
In1[target_type]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.CastLike(name: str = '') aidge_core.aidge_core.Node#
CastLikeOp is a tensor operator that casts the first input tensor to the data type of the second input tensor.
- Parameters:
name (str) – Name of the node.
Cast#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>CastOp</b>
Attributes:
<sub><em>target_type</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Cast(target_type: aidge_core.aidge_core.dtype, name: str = '') aidge_core.aidge_core.Node#
CastOp is a tensor operator that casts the input tensor to a data type specified by the target_type argument.
- Parameters:
target_type (Datatype) – Data type of the output tensor.
name (str) – Name of the node.
Ceil#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>CeilOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Ceil(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Ceil operator.
- Parameters:
name (str) – Name of the node.
Clip#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ClipOp</b>
Attributes:
<sub><em>min</em></sub>
<sub><em>max</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[min_empty_tensor]:::text-only -->|"In[1]"| Op
In2[max_empty_tensor]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Clip(name: str = '', min: SupportsFloat = -3.4028234663852886e+38, max: SupportsFloat = 3.4028234663852886e+38) aidge_core.aidge_core.Node#
ClipOp is a tensor operator that clips tensor elements to a range defined by the min and max parameters. Values outside this range are replaced by the corresponding limit. When min is greater than max, the operator sets all input values to the value of max.
- Parameters:
min (float) – Minimum clipping value.
max (float) – Maximum clipping value.
name (str) – Name of the node.
-
std::shared_ptr<Aidge::Node> Aidge::Clip(const std::string &name = "", float min = std::numeric_limits<float>::lowest(), float max = std::numeric_limits<float>::max())#
Factory function to create a Clip node.
- Parameters:
name – [in] Name of the node.
min – [in] Minimum clipping value (default: lowest float).
max – [in] Maximum clipping value (default: maximum float).
- Returns:
A shared pointer to the created Clip node.
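The clipping semantics, including the documented min > max corner case, can be sketched in plain Python:

```python
def clip(x, min_val, max_val):
    """Clamp x to [min_val, max_val]; if min_val > max_val, return max_val."""
    if min_val > max_val:
        # Documented behaviour: all input values collapse to max.
        return max_val
    return max(min_val, min(x, max_val))

print(clip(5.0, 0.0, 3.0))    # clipped to the upper bound
print(clip(1.0, 10.0, 2.0))   # min > max: forced to max
```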
ComplexToInnerPair#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ComplexToInnerPairOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.ComplexToInnerPair(name: str = '') aidge_core.aidge_core.Node#
Creates a ComplexToInnerPair operation.
- Parameters:
name – Name of the node.
Concat#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ConcatOp</b>
Attributes:
<sub><em>axis</em></sub>
"):::operator
In0[data_input_0]:::text-only -->|"In[0]"| Op
In1[data_input_n]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Concat(nb_inputs: SupportsInt, axis: SupportsInt = 0, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Concat operator.
- Parameters:
nb_inputs (int) – The number of input tensors to concatenate.
axis (int) – The axis along which to concatenate the tensors.
name (str) – Name of the node.
-
std::shared_ptr<Node> Aidge::Concat(const IOIndex_t nbIn, const std::int32_t axis = 0, const std::string &name = "")#
Factory function to create a Concat node.
- Parameters:
nbIn – [in] Number of input tensors.
axis – [in] Axis along which concatenation is performed (default: 0).
name – [in] (Optional) Name of the node.
- Returns:
A shared pointer to the created node.
ConstantOfShape#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ConstantOfShapeOp</b>
Attributes:
<sub><em>value</em></sub>
"):::operator
In0[input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[constant_of_shape]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.ConstantOfShape(value: object = 0.0, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a ConstantOfShape operator.
- Parameters:
value (Tensor) – Tensor with a given datatype that contains the value that will fill the output tensor.
name (str) – Name of the node.
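The operator's effect can be sketched in plain Python: given a shape (the input) and a fill value (the attribute), produce a tensor of that shape filled with the value. Nested lists stand in for tensors here; this does not use the aidge_core API:

```python
def constant_of_shape(shape, value=0.0):
    """Nested-list 'tensor' of the given shape, filled with value."""
    if not shape:
        return value  # rank-0: the value itself
    return [constant_of_shape(shape[1:], value) for _ in range(shape[0])]

print(constant_of_shape([2, 3], value=1.0))
```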
Conv1D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>Conv1DOp</b>
Attributes:
<sub><em>kernel_dims</em></sub>
<sub><em>stride_dims</em></sub>
<sub><em>dilation_dims</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[weight]:::text-only -->|"In[1]"| Op
In2[bias]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Conv1D(in_channels: SupportsInt, out_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1], dilation_dims: collections.abc.Sequence[SupportsInt] = [1], no_bias: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a convolution operator.
- Parameters:
in_channels (int) – The number of input channels (depth of the input tensor).
out_channels (int) – The number of output channels (depth of the output tensor).
kernel_dims (List[int]) – The dimensions of the convolution kernel (filter size).
name (str) – The name of the operator (optional).
stride_dims (List[int]) – The stride size for the convolution (default is [1]).
dilation_dims (List[int]) – The dilation size for the convolution (default is [1]).
no_bias (bool) – Whether to disable bias (default is False).
- Returns:
A new Convolution operator node.
- Return type:
ConvOp
-
std::shared_ptr<Node> Aidge::Conv1D(DimSize_t inChannels, DimSize_t outChannels, const std::array<DimSize_t, 1> &kernelDims, const std::string &name = "", const std::array<DimSize_t, 1> &strideDims = create_array<DimSize_t, 1>(1), const std::array<DimSize_t, 1> &dilationDims = create_array<DimSize_t, 1>(1), bool noBias = false)#
Create a 1D Conv_Op operator for performing convolution.
This function constructs a Convolution operation by specifying the input and output channels, kernel dimensions, stride, and dilation dimensions.
- Parameters:
inChannels – [in] The number of input channels.
outChannels – [in] The number of output channels.
kernelDims – [in] The kernel dimensions.
name – [in] The name of the operator (optional).
strideDims – [in] The stride dimensions (optional).
dilationDims – [in] The dilation dimensions (optional).
noBias – [in] A flag indicating if no bias is used (default is false).
- Returns:
A shared pointer to the created Node containing the Conv_Op.
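For an unpadded convolution, the output length along each spatial axis follows the standard formula out = floor((in - dilation*(kernel - 1) - 1) / stride) + 1. A small sketch of this standard convolution arithmetic (not taken from the aidge_core sources):

```python
def conv_out_length(length, kernel, stride=1, dilation=1):
    """Spatial output size of an unpadded convolution along one axis."""
    effective_kernel = dilation * (kernel - 1) + 1  # kernel span after dilation
    return (length - effective_kernel) // stride + 1

print(conv_out_length(10, kernel=3))              # stride 1, no dilation
print(conv_out_length(10, kernel=3, stride=2))
print(conv_out_length(10, kernel=3, dilation=2))  # effective kernel span of 5
```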
Conv2D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>Conv2DOp</b>
Attributes:
<sub><em>kernel_dims</em></sub>
<sub><em>stride_dims</em></sub>
<sub><em>dilation_dims</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[weight]:::text-only -->|"In[1]"| Op
In2[bias]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Conv2D(in_channels: SupportsInt, out_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1], dilation_dims: collections.abc.Sequence[SupportsInt] = [1, 1], no_bias: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a convolution operator.
- Parameters:
in_channels (int) – The number of input channels (depth of the input tensor).
out_channels (int) – The number of output channels (depth of the output tensor).
kernel_dims (List[int]) – The dimensions of the convolution kernel (filter size).
name (str) – The name of the operator (optional).
stride_dims (List[int]) – The stride size for the convolution (default is [1, 1]).
dilation_dims (List[int]) – The dilation size for the convolution (default is [1, 1]).
no_bias (bool) – Whether to disable bias (default is False).
- Returns:
A new Convolution operator node.
- Return type:
ConvOp
-
std::shared_ptr<Node> Aidge::Conv2D(DimSize_t inChannels, DimSize_t outChannels, const std::array<DimSize_t, 2> &kernelDims, const std::string &name = "", const std::array<DimSize_t, 2> &strideDims = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2> &dilationDims = create_array<DimSize_t, 2>(1), bool noBias = false)#
Create a 2D Conv_Op operator for performing convolution.
This function constructs a Convolution operation by specifying the input and output channels, kernel dimensions, stride, and dilation dimensions.
- Parameters:
inChannels – [in] The number of input channels.
outChannels – [in] The number of output channels.
kernelDims – [in] The kernel dimensions.
name – [in] The name of the operator (optional).
strideDims – [in] The stride dimensions (optional).
dilationDims – [in] The dilation dimensions (optional).
noBias – [in] A flag indicating if no bias is used (default is false).
- Returns:
A shared pointer to the created Node containing the Conv_Op.
Conv3D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>Conv3DOp</b>
Attributes:
<sub><em>kernel_dims</em></sub>
<sub><em>stride_dims</em></sub>
<sub><em>dilation_dims</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[weight]:::text-only -->|"In[1]"| Op
In2[bias]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Conv3D(in_channels: SupportsInt, out_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1, 1], dilation_dims: collections.abc.Sequence[SupportsInt] = [1, 1, 1], no_bias: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a convolution operator.
- Parameters:
in_channels (int) – The number of input channels (depth of the input tensor).
out_channels (int) – The number of output channels (depth of the output tensor).
kernel_dims (List[int]) – The dimensions of the convolution kernel (filter size).
name (str) – The name of the operator (optional).
stride_dims (List[int]) – The stride size for the convolution (default is [1, 1, 1]).
dilation_dims (List[int]) – The dilation size for the convolution (default is [1, 1, 1]).
no_bias (bool) – Whether to disable bias (default is False).
- Returns:
A new Convolution operator node.
- Return type:
ConvOp
-
std::shared_ptr<Node> Aidge::Conv3D(DimSize_t inChannels, DimSize_t outChannels, const std::array<DimSize_t, 3> &kernelDims, const std::string &name = "", const std::array<DimSize_t, 3> &strideDims = create_array<DimSize_t, 3>(1), const std::array<DimSize_t, 3> &dilationDims = create_array<DimSize_t, 3>(1), bool noBias = false)#
Create a 3D Conv_Op operator for performing convolution.
This function constructs a Convolution operation by specifying the input and output channels, kernel dimensions, stride, and dilation dimensions.
- Parameters:
inChannels – [in] The number of input channels.
outChannels – [in] The number of output channels.
kernelDims – [in] The kernel dimensions.
name – [in] The name of the operator (optional).
strideDims – [in] The stride dimensions (optional).
dilationDims – [in] The dilation dimensions (optional).
noBias – [in] A flag indicating if no bias is used (default is false).
- Returns:
A shared pointer to the created Node containing the Conv_Op.
ConvDepthWise1D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ConvDepthWise1DOp</b>
Attributes:
<sub><em>kernel_dims</em></sub>
<sub><em>stride_dims</em></sub>
<sub><em>dilation_dims</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[weight]:::text-only -->|"In[1]"| Op
In2[bias]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.ConvDepthWise1D(nb_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1], dilation_dims: collections.abc.Sequence[SupportsInt] = [1], no_bias: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a depthwise convolution operator.
- Parameters:
nb_channels (int) – The number of channels in the input tensor (i.e., depth of the tensor).
kernel_dims (List[int]) – The dimensions of the convolution kernel (filter size).
name (str) – The name of the operator node (optional).
stride_dims (List[int]) – The stride size for the convolution (default is [1]).
dilation_dims (List[int]) – The dilation size for the convolution (default is [1]).
no_bias (bool) – Whether to disable bias in the operation (default is False).
- Returns:
A new Depthwise Convolution operator node.
- Return type:
ConvDepthWiseOp
-
std::shared_ptr<Node> Aidge::ConvDepthWise1D(const DimSize_t nbChannels, const std::array<DimSize_t, 1> &kernelDims, const std::string &name = "", const std::array<DimSize_t, 1> &strideDims = create_array<DimSize_t, 1>(1), const std::array<DimSize_t, 1> &dilationDims = create_array<DimSize_t, 1>(1), bool noBias = false)#
Create a 1D ConvDepthWise_Op operator for performing depthwise convolution.
This function constructs a Depthwise Convolution operation by specifying the input channels, kernel dimensions, stride, and dilation dimensions.
- Parameters:
nbChannels – [in] The number of input channels.
kernelDims – [in] The kernel dimensions.
name – [in] The name of the operator (optional).
strideDims – [in] The stride dimensions (optional).
dilationDims – [in] The dilation dimensions (optional).
noBias – [in] A flag indicating if no bias is used (default is false).
- Returns:
A shared pointer to the created Node containing the ConvDepthWise_Op.
ConvDepthWise2D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ConvDepthWise2DOp</b>
Attributes:
<sub><em>kernel_dims</em></sub>
<sub><em>stride_dims</em></sub>
<sub><em>dilation_dims</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[weight]:::text-only -->|"In[1]"| Op
In2[bias]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.ConvDepthWise2D(nb_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1], dilation_dims: collections.abc.Sequence[SupportsInt] = [1, 1], no_bias: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a depthwise convolution operator.
- Parameters:
nb_channels (int) – The number of channels in the input tensor (i.e., depth of the tensor).
kernel_dims (List[int]) – The dimensions of the convolution kernel (filter size).
name (str) – The name of the operator node (optional).
stride_dims (List[int]) – The stride size for the convolution (default is [1, 1]).
dilation_dims (List[int]) – The dilation size for the convolution (default is [1, 1]).
no_bias (bool) – Whether to disable bias in the operation (default is False).
- Returns:
A new Depthwise Convolution operator node.
- Return type:
ConvDepthWiseOp
-
std::shared_ptr<Node> Aidge::ConvDepthWise2D(const DimSize_t nbChannels, const std::array<DimSize_t, 2> &kernelDims, const std::string &name = "", const std::array<DimSize_t, 2> &strideDims = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2> &dilationDims = create_array<DimSize_t, 2>(1), bool noBias = false)#
Create a 2D ConvDepthWise_Op operator for performing depthwise convolution.
This function constructs a Depthwise Convolution operation by specifying the input channels, kernel dimensions, stride, and dilation dimensions.
- Parameters:
nbChannels – [in] The number of input channels.
kernelDims – [in] The kernel dimensions.
name – [in] The name of the operator (optional).
strideDims – [in] The stride dimensions (optional).
dilationDims – [in] The dilation dimensions (optional).
noBias – [in] A flag indicating if no bias is used (default is false).
- Returns:
A shared pointer to the created Node containing the ConvDepthWise_Op.
ConvTranspose1D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ConvTranspose1DOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[weight]:::text-only -->|"In[1]"| Op
In2[bias]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.ConvTranspose1D(in_channels: SupportsInt, out_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], stride_dims: collections.abc.Sequence[SupportsInt] = [1], dilation_dims: collections.abc.Sequence[SupportsInt] = [1], no_bias: bool = False, name: str = '') aidge_core.aidge_core.Node#
-
std::shared_ptr<Node> Aidge::ConvTranspose1D(const DimSize_t &inChannels, const DimSize_t &outChannels, const std::array<DimSize_t, 1> &kernelDims, const std::array<DimSize_t, 1> &strideDims = create_array<DimSize_t, 1>(1), const std::array<DimSize_t, 1> &dilationDims = create_array<DimSize_t, 1>(1), const bool noBias = false, const std::string &name = "")#
Perform a 1D transposed convolution (deconvolution) on the input Tensor.
- Parameters:
inChannels – Number of input channels.
outChannels – Number of output channels.
kernelDims – Dimensions of the kernel. Must be the same number of dimensions as the feature map.
name – Name of the operator.
strideDims – Dimensions of the stride attribute. Must be the same number of dimensions as the feature map.
dilationDims – Dimensions of the dilation attribute. Must be the same number of dimensions as the feature map.
noBias – Whether to disable bias (default is false).
- Returns:
std::shared_ptr<Node> A Node containing the operator.
ConvTranspose2D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ConvTranspose2DOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[weight]:::text-only -->|"In[1]"| Op
In2[bias]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.ConvTranspose2D(in_channels: SupportsInt, out_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1], dilation_dims: collections.abc.Sequence[SupportsInt] = [1, 1], no_bias: bool = False, name: str = '') aidge_core.aidge_core.Node#
-
std::shared_ptr<Node> Aidge::ConvTranspose2D(const DimSize_t &inChannels, const DimSize_t &outChannels, const std::array<DimSize_t, 2> &kernelDims, const std::array<DimSize_t, 2> &strideDims = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2> &dilationDims = create_array<DimSize_t, 2>(1), const bool noBias = false, const std::string &name = "")#
Perform a 2D transposed convolution (deconvolution) on the input Tensor.
- Parameters:
inChannels – Number of input channels.
outChannels – Number of output channels.
kernelDims – Dimensions of the kernel. Must be the same number of dimensions as the feature map.
name – Name of the operator.
strideDims – Dimensions of the stride attribute. Must be the same number of dimensions as the feature map.
dilationDims – Dimensions of the dilation attribute. Must be the same number of dimensions as the feature map.
noBias – Whether to disable bias (default is false).
- Returns:
std::shared_ptr<Node> A Node containing the operator.
ConvTranspose3D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ConvTranspose3DOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[weight]:::text-only -->|"In[1]"| Op
In2[bias]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.ConvTranspose3D(in_channels: SupportsInt, out_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1, 1], dilation_dims: collections.abc.Sequence[SupportsInt] = [1, 1, 1], no_bias: bool = False, name: str = '') aidge_core.aidge_core.Node#
-
std::shared_ptr<Node> Aidge::ConvTranspose3D(const DimSize_t &inChannels, const DimSize_t &outChannels, const std::array<DimSize_t, 3> &kernelDims, const std::array<DimSize_t, 3> &strideDims = create_array<DimSize_t, 3>(1), const std::array<DimSize_t, 3> &dilationDims = create_array<DimSize_t, 3>(1), const bool noBias = false, const std::string &name = "")#
Performs a 3D transposed convolution (deconvolution) on the input Tensor.
- Parameters:
inChannels – Number of input channels.
outChannels – Number of output channels.
kernelDims – Dimensions of the kernel. Must be the same number of dimensions as the feature map.
name – Name of the operator.
strideDims – Dimensions of the stride attribute. Must be the same number of dimensions as the feature map.
dilationDims – Dimensions of the dilation attribute. Must be the same number of dimensions as the feature map.
- Returns:
std::shared_ptr<Node> A Node containing the operator.
Cosh#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>CoshOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Cosh(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Cosh operator.
- Parameters:
name (str) – Name of the node.
Cos#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>CosOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Cos(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Cos operator.
- Parameters:
name (str) – Name of the node.
CryptoHash#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>CryptoHashOp</b>
Attributes:
<sub><em>crypto_hash_function</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.CryptoHash(name: str = '') aidge_core.aidge_core.Node#
DepthToSpace#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>DepthToSpaceOp</b>
Attributes:
<sub><em>block_size</em></sub>
<sub><em>mode</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.DepthToSpace(block_size: SupportsInt, mode: str = 'CRD', name: str = '') aidge_core.aidge_core.Node#
-
std::shared_ptr<Node> Aidge::DepthToSpace(const std::uint32_t blockSize, const DepthToSpace_Op::Mode mode = DepthToSpace_Op::Mode::CRD, const std::string &name = "")#
Create a DepthToSpace node.
- Parameters:
blockSize – Size of the blocks to split depth into spatial dimensions.
mode – Depth-to-space transformation mode (DCR or CRD).
name – Name of the operator.
- Returns:
Shared pointer to the created DepthToSpace node.
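The DCR and CRD modes rearrange depth into spatial blocks differently. A NumPy sketch of the two modes, following the ONNX DepthToSpace definition (an illustration of the semantics, not aidge_core itself):

```python
import numpy as np

def depth_to_space(x, b, mode="CRD"):
    """Rearrange depth into b x b spatial blocks (ONNX DCR/CRD semantics)."""
    n, c, h, w = x.shape
    if mode == "DCR":
        # depth-column-row ordering of the block elements
        tmp = x.reshape(n, b, b, c // (b * b), h, w)
        tmp = tmp.transpose(0, 3, 4, 1, 5, 2)
    else:  # CRD: column-row-depth ordering
        tmp = x.reshape(n, c // (b * b), b, b, h, w)
        tmp = tmp.transpose(0, 1, 4, 2, 5, 3)
    return tmp.reshape(n, c // (b * b), h * b, w * b)

x = np.arange(1 * 8 * 2 * 3).reshape(1, 8, 2, 3)
print(depth_to_space(x, 2).shape)  # (1, 2, 4, 6)
```

Both modes produce the same output shape; only the ordering of values within each spatial block differs.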
DFT#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>DFTOp</b>
Attributes:
<sub><em>dft_length</em></sub>
<sub><em>axis</em></sub>
<sub><em>inverse</em></sub>
<sub><em>onesided</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[dft_length]:::text-only -->|"In[1]"| Op
In2[axis]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.DFT(name: str = '', dft_length: SupportsInt = 0, axis: SupportsInt = -1, inverse: bool = False, onesided: bool = False) aidge_core.aidge_core.Node#
Div#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>DivOp</b>
"):::operator
In0[data_input_1]:::text-only -->|"In[0]"| Op
In1[data_input_2]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Div(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Div operator that performs element-wise division between two tensors.
- The operation is defined as:
Output = Input1 / Input2
The output tensor shape is determined by taking the maximum size along each dimension of the input tensors after broadcasting.
- Examples:
Input A: (3, 4, 2), Input B: (2), Output: (3, 4, 2)
Input A: (1, 5, 3), Input B: (2, 1, 3), Output: (2, 5, 3)
- Parameters:
name (str, optional) – Name of the node, default="".
- Returns:
A node containing the Div operator.
- Return type:
aidge_core.DivOp
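The broadcasting behaviour described above can be checked with NumPy, whose element-wise division follows the same shape rules:

```python
import numpy as np

# First example from the docstring: (3, 4, 2) / (2,) -> (3, 4, 2)
a = np.ones((3, 4, 2))
b = np.full((2,), 2.0)
print((a / b).shape)  # (3, 4, 2)

# Second example: (1, 5, 3) / (2, 1, 3) -> (2, 5, 3)
a = np.ones((1, 5, 3))
b = np.ones((2, 1, 3))
print((a / b).shape)  # (2, 5, 3)
```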
Dropout#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>DropoutOp</b>
Attributes:
<sub><em>probability</em></sub>
<sub><em>training_mode</em></sub>
<sub><em>seed</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[probability]:::text-only -->|"In[1]"| Op
In2[training_mode]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
Op -->|"Out[1]"| Out1[mask]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Dropout(probability: SupportsFloat = 0.5, training_mode: bool = True, seed: SupportsInt = -9223372036854775808, name: str = '') aidge_core.aidge_core.Node#
Equal#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>EqualOp</b>
"):::operator
In0[data_input_1]:::text-only -->|"In[0]"| Op
In1[data_input_2]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Equal(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an Equal operator.
- Parameters:
name – name of the node.
Erf#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ErfOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Erf(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an Erf operator that computes the error function (erf) element-wise.
- The error function (erf) is computed element-wise as follows:
erf(x) = (2 / sqrt(pi)) * integral from 0 to x of exp(-t^2) dt
- Parameters:
name (str) – name of the node (optional).
- Returns:
A node containing the Erf operator.
- Return type:
aidge_core.ErfOp
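The element-wise definition above matches Python's stdlib `math.erf`, which can be used to sanity-check expected values (aidge_core applies the same function over whole tensors):

```python
import math

# erf(x) = (2 / sqrt(pi)) * integral from 0 to x of exp(-t^2) dt
values = [-1.0, 0.0, 1.0]
out = [math.erf(v) for v in values]
print(out[1])            # 0.0
print(round(out[2], 4))  # 0.8427
```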
Expand#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ExpandOp</b>
"):::operator
In0[data]:::text-only -->|"In[0]"| Op
In1[shape]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Expand(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an Expand operator. This operator broadcasts the values given via input#0 to the shape given by input#1's values. If one input has fewer dimensions than the other, its shape is padded with 1's on the left.
Example:
input#0 = [[[[2, 1, 3]]]] => dims = [1, 1, 3, 1]
input#1 = [2, 1, 1] => converted to [1, 2, 1, 1]
output = [[[[2, 1, 3], [2, 1, 3]]]] => dims = [1, 2, 3, 1]
See https://onnx.ai/onnx/repo-docs/Broadcasting.html for detailed ONNX broadcasting rules
- Parameters:
name (str) – name of the node.
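The left-padding-with-1's rule is ordinary NumPy-style broadcasting, so the resulting shape can be sketched with `np.broadcast_shapes` (a NumPy illustration of the ONNX rule, not aidge_core itself; the example shapes here are chosen for illustration):

```python
import numpy as np

def expand_shape(data_dims, shape_values):
    # broadcast_shapes pads the shorter shape with 1's on the left,
    # then takes the maximum along each dimension
    return np.broadcast_shapes(tuple(data_dims), tuple(shape_values))

# (2, 1, 1) is padded to (1, 2, 1, 1) before broadcasting
print(expand_shape((1, 1, 1, 3), (2, 1, 1)))  # (1, 2, 1, 3)
```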
Exp#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ExpOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Exp(name: str = '') aidge_core.aidge_core.Node#
FC#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>FCOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[weight]:::text-only -->|"In[1]"| Op
In2[bias]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.FC(in_channels: SupportsInt, out_channels: SupportsInt, no_bias: bool = False, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Fully Connected (FC) operator.
- Parameters:
in_channels (int) – The number of input channels (features).
out_channels (int) – The number of output channels (features).
no_bias (bool) – If True, the bias term is omitted. Defaults to False.
name (str) – Name of the node.
-
std::shared_ptr<Node> Aidge::FC(const DimSize_t inChannels, const DimSize_t outChannels, bool noBias = false, const std::string &name = "")#
Creates a Fully Connected operation node.
Constructs an FC operator node with the specified input and output channels.
- Parameters:
inChannels – [in] Number of input channels.
outChannels – [in] Number of output channels.
noBias – [in] Flag indicating whether to omit the bias term (default is false).
name – [in] Name of the operator (optional).
- Returns:
A shared pointer to the Node containing the FC operator.
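A fully connected layer's shape behaviour can be sketched in NumPy. The weight layout `(out_channels, in_channels)` and the formula `y = x @ W.T + b` follow the common dense-layer convention and are assumptions here, not taken from the aidge_core source:

```python
import numpy as np

in_channels, out_channels, batch = 4, 3, 2
x = np.random.rand(batch, in_channels)
w = np.random.rand(out_channels, in_channels)  # assumed weight layout
b = np.random.rand(out_channels)

y = x @ w.T + b
print(y.shape)  # (2, 3)
```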
FixedNBitFlip#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>FixedNBitFlipOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.FixedNBitFlip(n_bits: SupportsInt, name: str = '') aidge_core.aidge_core.Node#
Create a FixedNBitFlip node with the specified number of bit flips.
Flatten#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>FlattenOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Flatten(axis: SupportsInt = 1, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a flatten operator.
- Parameters:
axis – Input dimensions up to (but excluding) this axis are flattened into the outer dimension of the output. Must be in [-r, r-1], with r = input_tensor.nbDims().
name – name of the node.
-
std::shared_ptr<Node> Aidge::Flatten(std::int64_t axis = 1, const std::string &name = "")#
Create a Flatten node.
Initializes a Flatten node that reshapes a tensor into a 2D matrix based on the specified axis.
- Parameters:
axis – [in] The axis at which to flatten the tensor.
name – [in] Optional. The name of the node.
- Returns:
A shared pointer to a Node representing the Flatten operation.
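The flatten-to-2D behaviour described above (dimensions before `axis` form the rows, the rest form the columns) can be sketched with a NumPy reshape:

```python
import numpy as np

def flatten(x, axis=1):
    # rows = product of dims before `axis`; the rest collapse into columns
    rows = int(np.prod(x.shape[:axis])) if axis > 0 else 1
    return x.reshape(rows, -1)

x = np.zeros((2, 3, 4, 5))
print(flatten(x, axis=2).shape)  # (6, 20)
print(flatten(x, axis=0).shape)  # (1, 120)
```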
Floor#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>FloorOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Floor(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Floor operator.
- Parameters:
name (str) – Name of the node.
Fold2D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>Fold2DOp</b>
Attributes:
<sub><em>output_dims</em></sub>
<sub><em>stride_dims</em></sub>
<sub><em>dilation_dims</em></sub>
<sub><em>kernel_dims</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Fold2D(output_dims: collections.abc.Sequence[SupportsInt], kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1], dilation_dims: collections.abc.Sequence[SupportsInt] = [1, 1]) aidge_core.aidge_core.Node#
Initialize a node containing a Fold operator.
- Parameters:
output_dims (List[int]) – The dimensions of output.
kernel_dims (List[int]) – The dimensions of the fold kernel (filter size).
name (str) – The name of the operator (optional).
stride_dims (List[int]) – The stride size for the fold (default is [1, 1]).
dilation_dims (List[int]) – The dilation size for the fold (default is [1, 1]).
- Returns:
A new Fold operator node.
- Return type:
FoldOp
-
std::shared_ptr<Node> Aidge::Fold2D(const std::array<DimSize_t, 2> &outputDims, const std::array<DimSize_t, 2> &kernelDims, const std::string &name = "", const std::array<DimSize_t, 2> &strideDims = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2> &dilationDims = create_array<DimSize_t, 2>(1))#
Create a 2D Fold operation node.
This function creates a Fold operation node that applies a fold transformation to a tensor based on the specified attributes.
- Parameters:
outputDims – [in] Output dimensions for the fold operation.
kernelDims – [in] Kernel dimensions.
name – [in] Name of the operator.
strideDims – [in] Stride dimensions for the fold operation.
dilationDims – [in] Dilation dimensions for the fold operation.
- Returns:
A shared pointer to the created Node.
Gather#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>GatherOp</b>
Attributes:
<sub><em>axis</em></sub>
<sub><em>indices</em></sub>
<sub><em>gathered_shape</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[indices]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Gather(axis: SupportsInt = 0, indices: collections.abc.Sequence[SupportsInt] = [], gathered_shape: collections.abc.Sequence[SupportsInt] = [], name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Gather operator that extracts elements from a tensor along a specified axis.
This operation selects values along the specified axis using the provided indices. The resulting tensor will have the same shape as the input tensor except along the given axis, where the size will be determined by the indices.
- Parameters:
axis (int) – Axis along which to gather the elements (default is 0).
indices (List[int]) – Indices to gather along the axis.
gathered_shape (List[int]) – Shape of the gathered result.
name (str) – Name of the node (optional).
- Returns:
A node containing the Gather operator.
- Return type:
GatherOp
-
std::shared_ptr<Node> Aidge::Gather(std::int8_t axis = 0, const std::vector<std::int64_t> &indices = {}, const std::vector<DimSize_t> &gatheredShape = {}, const std::string &name = "")#
Create a Gather node.
Initializes a Gather node that extracts elements from an input tensor along a specified axis using a set of indices.
- Parameters:
axis – [in] The axis along which to gather elements. Default is 0.
indices – [in] A vector specifying which elements to gather. Default is an empty vector.
gatheredShape – [in] The shape of the resulting gathered tensor. Default is an empty vector.
name – [in] Optional. The name of the node.
- Returns:
A shared pointer to a Node representing the Gather operation.
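Gathering along an axis, as described above, has the same index semantics as NumPy's `take` (a NumPy illustration, not aidge_core itself):

```python
import numpy as np

x = np.array([[1.0, 1.2, 1.9],
              [2.3, 3.4, 3.9],
              [4.5, 5.7, 5.9]])

# Gather indices [0, 2] along axis 1: selects columns 0 and 2,
# so the output shape is (3, 2) instead of (3, 3).
y = np.take(x, [0, 2], axis=1)
print(y.shape)  # (3, 2)
```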
Gemm (meta op.)#
☑️ This is a meta-operator.
- aidge_core.Gemm(in_channels: SupportsInt, out_channels: SupportsInt, alpha: SupportsFloat = 1.0, beta: SupportsFloat = 1.0, no_bias: bool = False, transA: bool = False, transB: bool = False, name: str = '') aidge_core.aidge_core.Node#
Creates a node that performs a General Matrix Multiplication (GEMM) operation.
- This operation computes:
Y = alpha * (A * B) + beta * C
where:
- A and B are input matrices (with optional transposition).
- C is a bias term (optional, omitted if no_bias is True).
- alpha and beta are scalar coefficients.
- transA and transB determine whether A and/or B are transposed before multiplication.
- Parameters:
in_channels (int) – Number of input channels.
out_channels (int) – Number of output channels.
alpha (float) – Scalar multiplier for the matrix product A * B.
beta (float) – Scalar multiplier for the bias term.
no_bias (bool) – If True, disables the addition of the bias term.
transA (bool) – If True, transposes input A before multiplication.
transB (bool) – If True, transposes input B before multiplication.
name (str) – Optional name for the node.
- Returns:
A node representing the GEMM operation.
- Return type:
GemmOp
-
std::shared_ptr<Node> Aidge::Gemm(DimSize_t in_channels, DimSize_t out_channels, float alpha = 1.0f, float beta = 1.0f, bool no_bias = false, bool transposeA = false, bool transposeB = false, const std::string &name = "")#
Creates a Gemm node.
This function creates a Gemm (General Matrix Multiplication) node, representing a linear transformation with optional bias addition and transposition of the input matrices.
The GEMM operation performs the following computation on an input tensor X:
Y = alpha * (A * B) + beta * C
Where:
A is the input tensor (often X).
B is a weight matrix with shape [inChannels, outChannels] or its transposition.
C is the bias vector (optional).
alpha and beta are scalar coefficients applied to the matrix product and the bias, respectively.
transposeA and transposeB control whether matrices A and B are transposed before multiplication.
If no_bias is true, the bias term beta * C is omitted.
- Parameters:
inChannels – [in] Number of input channels (columns of A).
outChannels – [in] Number of output channels (columns of B).
alpha – [in] Scalar multiplier for the product of input tensors A * B.
beta – [in] Scalar multiplier for the bias.
no_bias – [in] Flag indicating whether to omit the bias term (default is false).
transposeA – [in] Flag indicating whether input A needs to be transposed (default is false).
transposeB – [in] Flag indicating whether input B needs to be transposed (default is false).
name – [in] Optional name for the operation.
- Returns:
A shared pointer to the Node representing the GEMM operation.
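The computation Y = alpha * (A * B) + beta * C, with the optional transpositions, can be sketched in NumPy (an illustration of the semantics, not aidge_core):

```python
import numpy as np

def gemm(a, b, c=0.0, alpha=1.0, beta=1.0, trans_a=False, trans_b=False):
    if trans_a:
        a = a.T
    if trans_b:
        b = b.T
    return alpha * (a @ b) + beta * c

a = np.ones((2, 4))
b = np.ones((3, 4))   # with trans_b=True this becomes (4, 3)
c = np.zeros(3)

y = gemm(a, b, c, trans_b=True)
print(y.shape)  # (2, 3)
```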
GlobalAveragePooling#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>GlobalAveragePoolingOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.GlobalAveragePooling(name: str = '', rounding_mode: object = 'half_away_from_zero') aidge_core.aidge_core.Node#
Initialize a node containing a Global Average Pooling operator.
This operation performs global average pooling on the input tensor. The result is a tensor where each channel of the input tensor is reduced to a single value by averaging all the elements in that channel.
- Parameters:
name (str) – Name of the node (optional).
rounding_mode (str) – Rounding mode used when averaging (default: 'half_away_from_zero').
- Returns:
A node containing the Global Average Pooling operator.
- Return type:
GlobalAveragePoolingOp
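The reduction described above is a mean over the spatial axes for each (sample, channel) pair. A NumPy sketch — keeping the spatial axes as size-1 dimensions, as the ONNX GlobalAveragePool does, is an assumption here:

```python
import numpy as np

x = np.arange(2 * 3 * 4 * 4, dtype=float).reshape(2, 3, 4, 4)

# Each (N, C) channel map collapses to its spatial mean
y = x.mean(axis=(2, 3), keepdims=True)
print(y.shape)  # (2, 3, 1, 1)
```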
GridSample#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>GridSampleOp</b>
Attributes:
<sub><em>mode</em></sub>
<sub><em>padding_mode</em></sub>
<sub><em>align_corners</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[grid_field]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.GridSample(mode: str, padding_mode: str, align_corners: bool, name: str = '') aidge_core.aidge_core.Node#
Creates a GridSample operation.
- Parameters:
mode – Interpolation mode to use for sampling.
padding_mode – Padding mode for out-of-bound coordinates.
align_corners – Whether to align the corners of the grid.
name – Name of the node.
-
std::shared_ptr<Node> Aidge::GridSample(typename GridSample_Op::Mode mode = GridSample_Op::Mode::Linear, typename GridSample_Op::PaddingMode paddingMode = GridSample_Op::PaddingMode::Zeros, bool alignCorners = false, const std::string &name = "")#
Creates a GridSample operator node.
- Parameters:
mode – [in] Interpolation mode.
paddingMode – [in] Padding mode.
alignCorners – [in] Whether to align grid points with corners.
name – [in] Name of the operator.
- Returns:
Shared pointer to the GridSample node.
GRU (meta op.)#
☑️ This is a meta-operator.
- aidge_core.GRU(in_channels: SupportsInt, hidden_channels: SupportsInt, seq_length: SupportsInt, nobias: bool = False, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a GRU (Gated Recurrent Unit) operator.
The GRU operator is a recurrent neural network (RNN) variant designed to model sequential data while addressing the vanishing gradient problem. It includes gating mechanisms to control information flow through time.
- Parameters:
in_channels (int) – The number of input features per time step.
hidden_channels (int) – The number of hidden units in the GRU.
seq_length (int) – The number of time steps in the input sequence.
nobias (bool) – If set to True, no bias terms are included in the GRU computation.
name (str) – Name of the node (optional).
- Returns:
A node containing the GRU operator.
- Return type:
-
std::shared_ptr<Node> Aidge::GRU(DimSize_t in_channels, DimSize_t hidden_channels, DimSize_t seq_length, bool noBias = false, const std::string &name = "")#
Creates a GRU (Gated Recurrent Unit) operator.
This function creates a GRU operation, a popular recurrent neural network (RNN) layer for sequence processing.
- Parameters:
in_channels – [in] The number of input channels.
hidden_channels – [in] The number of hidden channels in the GRU.
seq_length – [in] The length of the input sequence.
noBias – [in] Whether to disable the bias (default is false).
name – [in] Optional name for the operation.
- Returns:
A shared pointer to the Node representing the GRU operation.
HannWindow (meta op.)#
☑️ This is a meta-operator.
Hardmax#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>HardmaxOp</b>
Attributes:
<sub><em>axis</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[data_values]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Hardmax(axis: SupportsInt, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Hardmax operator.
- Parameters:
axis – Axis along which the Hardmax operation is applied.
name (str) – name of the node.
-
inline std::shared_ptr<Node> Aidge::Hardmax(std::int32_t axis = 0, const std::string &name = "")#
Create a Hardmax operation node.
- Parameters:
axis – [in] Axis along which the Hardmax operation is applied.
name – [in] Name of the operator (optional).
- Returns:
A shared pointer to the Node containing the Hardmax operator.
HardSigmoid#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>HardSigmoidOp</b>
Attributes:
<sub><em>alpha</em></sub>
<sub><em>beta</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.HardSigmoid(name: str = '', alpha: SupportsFloat = 0.20000000298023224, beta: SupportsFloat = 0.5) aidge_core.aidge_core.Node#
Initialize a node containing a HardSigmoid operator that applies the HardSigmoid function element-wise.
- The HardSigmoid function is applied element-wise and is defined as:
HardSigmoid(x) = max(0, min(1, alpha * x + beta))
This operation provides a piecewise linear approximation of the sigmoid function, which is computationally more efficient than the standard sigmoid while maintaining similar properties. The function is bounded between 0 and 1, making it suitable for activation functions in neural networks.
The parameters alpha and beta control the shape of the function: - alpha: Controls the slope of the linear region - beta: Controls the horizontal shift of the function
- Parameters:
name (str) – Name of the node (optional).
alpha (float) – Alpha parameter for the hard sigmoid function (default: 0.2).
beta (float) – Beta parameter for the hard sigmoid function (default: 0.5).
- Returns:
A node containing the HardSigmoid operator.
- Return type:
HardSigmoidOp
-
std::shared_ptr<Aidge::Node> Aidge::HardSigmoid(const std::string &name = "", float alpha = 0.2f, float beta = 0.5f)#
Factory function to create a HardSigmoid node.
- Parameters:
name – [in] Name of the node.
alpha – [in] Alpha parameter for the hard sigmoid function (default: 0.2).
beta – [in] Beta parameter for the hard sigmoid function (default: 0.5).
- Returns:
A shared pointer to the created HardSigmoid node.
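The piecewise-linear definition above is easy to check directly, using the default alpha = 0.2 and beta = 0.5 (plain-Python scalar sketch; aidge_core applies this element-wise over tensors):

```python
# HardSigmoid(x) = max(0, min(1, alpha * x + beta))
def hard_sigmoid(x, alpha=0.2, beta=0.5):
    return max(0.0, min(1.0, alpha * x + beta))

print(hard_sigmoid(0.0))    # 0.5
print(hard_sigmoid(10.0))   # 1.0
print(hard_sigmoid(-10.0))  # 0.0
```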
HardSwish (meta op.)#
☑️ This is a meta-operator.
- aidge_core.HardSwish(alpha: SupportsFloat = 0.1666666716337204, beta: SupportsFloat = 0.5, name: str = '') aidge_core.aidge_core.Node#
Initialize a HardSwish operator.
- Parameters:
alpha (float) – Alpha parameter for the HardSigmoid function (default: 1/6 ≈ 0.1667).
beta (float) – Beta parameter for the HardSigmoid function (default: 0.5).
name (str) – Optional name for the node.
- Returns:
A node containing the HardSwish operator.
- Return type:
-
std::shared_ptr<Node> Aidge::HardSwish(float alpha = 1.0f / 6.0f, float beta = 0.5f, const std::string &name = "")#
Creates a HardSwish operator.
This function creates a HardSwish operation which is defined as: f(x) = Mul(x, HardSigmoid(x)) where HardSigmoid(x) = max(0, min(1, alpha * x + beta))
- Parameters:
alpha – [in] Alpha parameter for the hard sigmoid function (default: 1/6 ≈ 0.1667).
beta – [in] Beta parameter for the hard sigmoid function (default: 0.5).
name – [in] Optional name for the operation.
- Returns:
A shared pointer to the Node representing the HardSwish operation.
Heaviside#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>HeavisideOp</b>
Attributes:
<sub><em>value</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[data_values]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Heaviside(value: SupportsFloat, name: str = '') aidge_core.aidge_core.Node#
Initialize a Heaviside node. This node computes the Heaviside step function on each element of the input tensor.
\[\begin{split}\text{heaviside}(x, v) = \begin{cases} 0, & x < 0 \\ v, & x = 0 \\ 1, & x > 0 \end{cases}\end{split}\]
- Parameters:
value (float) – The value used in the output tensor when the input is 0.
name – Name of the node.
-
std::shared_ptr<Node> Aidge::Heaviside(float value, const std::string &name = "")#
Create a Heaviside node.
Initializes a Heaviside node that computes the Heaviside step function for each element of the input tensor, using the specified value for inputs equal to zero.
- Parameters:
value – The value used in the output tensor when the input is 0.
name – Optional. The name of the node.
- Returns:
A shared pointer to a Node representing the Heaviside operation.
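The step function defined above, as a plain-Python scalar sketch (aidge_core applies it element-wise over tensors):

```python
# heaviside(x, v): 0 below zero, v at zero, 1 above zero
def heaviside(x, value):
    if x < 0:
        return 0.0
    if x == 0:
        return value
    return 1.0

print(heaviside(-2.0, 0.5))  # 0.0
print(heaviside(0.0, 0.5))   # 0.5
print(heaviside(3.0, 0.5))   # 1.0
```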
Identity#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>IdentityOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Identity(name: str = '') aidge_core.aidge_core.Node#
Creates an Identity operation, which returns the input as-is.
- Parameters:
name – Name of the node.
InnerPairToComplex#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>InnerPairToComplexOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.InnerPairToComplex(name: str = '') aidge_core.aidge_core.Node#
Creates an InnerPairToComplex operation.
- Parameters:
name – Name of the node.
InstanceNorm#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>InstanceNormOp</b>
Attributes:
<sub><em>epsilon</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[scale]:::text-only -->|"In[1]"| Op
In2[bias]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.InstanceNorm(nb_features: SupportsInt, epsilon: SupportsFloat = 9.999999747378752e-06, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an aidge_core.InstanceNormOp operator.
Instance Normalization normalizes the inputs across the spatial dimensions for each channel and each sample independently. This is particularly useful for style transfer and generative models where instance-specific statistics are more relevant than batch statistics.
Forward computation:
For each channel \(c\) and sample \(n\) with spatial dimensions \(H \times W\):
\[y_{n,c,h,w} = \frac{x_{n,c,h,w} - \mu_{n,c}}{\sqrt{\sigma^2_{n,c}+\epsilon}} \gamma_c + \beta_c\]where:
\(\mu_{n,c}\) and \(\sigma^2_{n,c}\) are the mean and variance computed over spatial dimensions for sample \(n\) and channel \(c\).
epsilon is a small constant to prevent division by zero.
\(\gamma_c\) and \(\beta_c\) are learnable scale and bias parameters per channel.
Example:
Input shape: (N, C, H, W)
Normalize over (H, W) for each (N, C) pair
Epsilon: 1e-5
Output shape: (N, C, H, W)
- Parameters:
nb_features (int) – The number of channels (C dimension).
epsilon (float) – Small constant to add to the denominator for numerical stability.
name (str) – Name of the node.
-
std::shared_ptr<Node> Aidge::InstanceNorm(const DimSize_t nbFeatures, const float epsilon = 1.0e-5F, const std::string &name = "")#
Creates an InstanceNorm operator node.
Instance Normalization normalizes the inputs across the spatial dimensions for each channel and each sample independently. This is particularly useful for style transfer and generative models where instance-specific statistics are more relevant than batch statistics.
Forward computation:
For each channel \(c\) and sample \(n\) with spatial dimensions \(H \times W\):
\[ y_{n,c,h,w} = \frac{x_{n,c,h,w} - \mu_{n,c}} {\sqrt{\sigma^2_{n,c}+\epsilon}} \gamma_c + \beta_c \]
where:
\(\mu_{n,c} = \frac{1}{HW} \sum_{h,w} x_{n,c,h,w}\) is the mean over spatial dimensions for sample \(n\) and channel \(c\).
\(\sigma^2_{n,c} = \frac{1}{HW} \sum_{h,w} (x_{n,c,h,w} - \mu_{n,c})^2\) is the variance.
\(\epsilon\) is a small constant to prevent division by zero.
\(\gamma_c\) and \(\beta_c\) are learnable scale and bias parameters per channel.
Gradient computation:
Define:
\[ \hat{x}_{n,c,h,w} = \frac{x_{n,c,h,w} - \mu_{n,c}}{\sqrt{\sigma^2_{n,c} + \epsilon}} \]With respect to input \(x\):
\[ \frac{\partial \mathcal{L}}{\partial x_{n,c,h,w}} = \frac{\gamma_c}{\sqrt{\sigma^2_{n,c} + \epsilon}} \left( \frac{\partial \mathcal{L}}{\partial y_{n,c,h,w}} - \frac{1}{HW} \sum_{h',w'} \frac{\partial \mathcal{L}}{\partial y_{n,c,h',w'}} - \frac{\hat{x}_{n,c,h,w}}{HW} \sum_{h',w'} \frac{\partial \mathcal{L}}{\partial y_{n,c,h',w'}} \hat{x}_{n,c,h',w'} \right) \]With respect to \(\gamma\):
\[ \frac{\partial \mathcal{L}}{\partial \gamma_c} = \sum_{n,h,w} \frac{\partial \mathcal{L}}{\partial y_{n,c,h,w}} \hat{x}_{n,c,h,w} \]With respect to \(\beta\):
\[ \frac{\partial \mathcal{L}}{\partial \beta_c} = \sum_{n,h,w} \frac{\partial \mathcal{L}}{\partial y_{n,c,h,w}} \]
- Example:
Input shape: (N, C, H, W)
Normalize over (H, W) for each (N, C) pair
Epsilon: 1e-5
Output shape: (N, C, H, W)
- Parameters:
nbFeatures – Number of channels (C dimension).
epsilon – Small constant for numerical stability (default: 1e-5).
name – Optional operator name.
- Returns:
Shared pointer to the created Node.
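As a sanity check, the forward formula above can be sketched for a single (sample, channel) slice in plain Python. This is an illustrative sketch only, not Aidge's kernel; `instance_norm_slice` is a hypothetical helper:

```python
import math

def instance_norm_slice(x, gamma, beta, eps=1e-5):
    # x: flattened spatial values of one (n, c) slice.
    mu = sum(x) / len(x)                          # mean over H*W
    var = sum((v - mu) ** 2 for v in x) / len(x)  # biased variance
    inv_std = 1.0 / math.sqrt(var + eps)
    return [(v - mu) * inv_std * gamma + beta for v in x]

# With gamma=1, beta=0 the slice becomes (approximately) zero-mean, unit-variance.
y = instance_norm_slice([1.0, 2.0, 3.0, 4.0], gamma=1.0, beta=0.0)
```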
LayerNorm#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>LayerNormOp</b>
Attributes:
<sub><em>axis</em></sub>
<sub><em>epsilon</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[scale]:::text-only -->|"In[1]"| Op
In2[bias]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.LayerNorm(nb_features: SupportsInt, axis: SupportsInt, epsilon: SupportsFloat = 9.999999747378752e-06, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an aidge_core.LayerNormOp operator.
Layer Normalization normalizes the inputs across the last N dimensions (starting from a given axis) for each sample independently. This is particularly useful for models where batch statistics are less stable, such as in RNNs or when using small batch sizes.
Forward computation:
For a single sample \(x\) with \(d\) features:
\[y = \frac{x - E[x] \cdot \mathbb{1}_{d}}{\sqrt{\mathrm{Var}[x] + \epsilon}} \odot \gamma + \beta\]
where:
\(E[x]\) and \(\mathrm{Var}[x]\) are the mean and variance computed over the specified normalized dimensions.
\(\epsilon\) is a small constant to prevent division by zero.
\(\gamma\) and \(\beta\) are learnable scale and bias parameters.
Example:
Input shape: (N, C, H, W)
Axis: 2 → normalize over (H, W)
Epsilon: 1e-5
Output shape: (N, C, H, W)
- Parameters:
nb_features (int) – The number of features in the input tensor.
axis (int) – First normalization dimension (0-based index).
epsilon (float) – Small constant to add to the denominator for numerical stability.
name (str) – Name of the node.
-
std::shared_ptr<Node> Aidge::LayerNorm(const DimSize_t nbFeatures, const int axis, const float epsilon = 1.0e-5F, const std::string &name = "")#
Creates a LayerNorm operator node.
Layer Normalization normalizes the inputs across the last N dimensions (starting from a given axis) for each sample independently. This is particularly useful for models where batch statistics are less stable (e.g., RNNs or small batch sizes).
Forward computation:
For a single sample \(x\) of \(X\) with \(d\) features:
\[ y = \frac{x - E[x] \cdot \mathbb{1}_{d}} {\sqrt{\mathrm{Var}[x]+\epsilon}} \odot \gamma + \beta \]
where:
\(E[x] = \frac{1}{d} \sum_{i=1}^d x_i\) is the mean over the normalized dimensions.
\(\mathrm{Var}[x] = \frac{1}{d} \sum_{i=1}^d (x_i - E[x])^2\) is the variance.
\(\epsilon\) is a small constant to prevent division by zero.
\(\gamma\) and \(\beta\) are learnable scale and bias parameters.
Gradient computation:
Define:
\[ \hat{x} = \frac{x - E[x] \cdot \mathbb{1}_{d}}{\sigma}, \quad \sigma = \sqrt{\mathrm{Var}[x] + \epsilon} \]
With respect to input \(x\):
\[ \frac{\partial \mathcal{L}}{\partial x} = \frac{1}{\sigma} \left( I - \frac{1}{d} \mathbb{1} \mathbb{1}^\top - \frac{1}{d} \hat{x} \hat{x}^\top \right) \left( \frac{\partial \mathcal{L}}{\partial y} \odot \gamma \right) \]
With respect to \(\gamma\):
\[ \frac{\partial \mathcal{L}}{\partial \gamma} = \sum_{n=1}^{N} \left[ \frac{\partial \mathcal{L}}{\partial y^{(n)}} \odot \hat{x}^{(n)} \right] \]
With respect to \(\beta\):
\[ \frac{\partial \mathcal{L}}{\partial \beta} = \sum_{n=1}^{N} \frac{\partial \mathcal{L}}{\partial y^{(n)}} \]
- Example:
Input shape: (N, C, H, W)
Axis: 2 → normalize over (H, W)
Epsilon: 1e-5
Output shape: (N, C, H, W)
- Parameters:
nbFeatures – Number of features in the normalized dimension(s).
axis – First dimension to normalize over.
epsilon – Small constant for numerical stability (default: 1e-5).
name – Optional operator name.
- Returns:
Shared pointer to the created Node.
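The difference with InstanceNorm is only the grouping: each sample normalizes one flattened slice covering all dimensions from axis onward, with a per-element scale and bias. A minimal sketch of the forward formula above (`layer_norm` is a hypothetical helper, not the Aidge kernel):

```python
import math

def layer_norm(rows, gamma, beta, eps=1e-5):
    # rows: one flattened slice per sample (all dims from `axis` onward);
    # gamma and beta hold one learnable value per slice element.
    out = []
    for x in rows:
        mu = sum(x) / len(x)
        var = sum((v - mu) ** 2 for v in x) / len(x)
        s = math.sqrt(var + eps)
        out.append([(v - mu) / s * g + b for v, g, b in zip(x, gamma, beta)])
    return out

# Each row is normalized independently of the others.
y = layer_norm([[1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0]],
               gamma=[1.0] * 4, beta=[0.0] * 4)
```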
LeakyReLU#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>LeakyReLUOp</b>
Attributes:
<sub><em>negative_slope</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.LeakyReLU(negative_slope: SupportsFloat = 0.0, name: str = '') aidge_core.aidge_core.Node#
Create a LeakyReLU node with a specified negative slope.
- Parameters:
negative_slope – The slope for the negative part of the function. Defaults to 0.0.
name – The name of the node.
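The activation itself is one line; a sketch of the element-wise rule, assuming the usual LeakyReLU definition f(x) = x for x ≥ 0 and negative_slope·x otherwise:

```python
def leaky_relu(x, negative_slope=0.0):
    # Identity for non-negative inputs, scaled by `negative_slope` otherwise.
    return [v if v >= 0 else negative_slope * v for v in x]

out = leaky_relu([-2.0, 0.0, 3.0], negative_slope=0.1)  # [-0.2, 0.0, 3.0]
```

Note that with the default `negative_slope=0.0` this degenerates to a plain ReLU.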
Ln#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>LnOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Ln(name: str = '') aidge_core.aidge_core.Node#
Create a node with the natural logarithm operator.
- Parameters:
name – The name of the node.
LRN#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>LRNOp</b>
Attributes:
<sub><em>alpha</em></sub>
<sub><em>beta</em></sub>
<sub><em>bias</em></sub>
<sub><em>size</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.LRN(size: SupportsInt, name: str = '', alpha: SupportsFloat = 9.999999747378752e-05, beta: SupportsFloat = 0.75, bias: SupportsFloat = 1.0) aidge_core.aidge_core.Node#
Create a node containing the Local Response Normalization operator.
- Parameters:
size – The size of the local region (across channels) for normalization.
alpha – Scaling parameter (default: 1e-4).
beta – Exponent of the normalization (default: 0.75).
bias – Additive constant in the denominator (default: 1.0).
name – The name of the node (optional).
-
std::shared_ptr<Node> Aidge::LRN(std::int32_t size, const std::string &name = "", float alpha = 0.0001f, float beta = 0.75f, float bias = 1.0f)#
Create an LRN operation node.
- Parameters:
size – [in] Normalization size (must be an odd positive integer).
alpha – [in] Scaling parameter (default: 0.0001).
beta – [in] Exponent of the normalization (default: 0.75).
bias – [in] Additive constant in the denominator (default: 1.0).
name – [in] Name of the operator (optional).
- Returns:
A shared pointer to the Node containing the LRN operator.
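Assuming the standard ONNX-style LRN formula, y_c = x_c / (bias + alpha/size · Σ x_i²)^beta with the sum taken over a window of `size` channels centred on c, the computation at one spatial position can be sketched as:

```python
def lrn(x, size, alpha=1e-4, beta=0.75, bias=1.0):
    # x: channel values at one spatial position; `size` must be odd.
    half = (size - 1) // 2
    out = []
    for c in range(len(x)):
        window = x[max(0, c - half):c + half + 1]  # clipped at the borders
        sq_sum = sum(v * v for v in window)
        out.append(x[c] / (bias + alpha / size * sq_sum) ** beta)
    return out

# With the default small alpha, the output stays close to the input.
y = lrn([1.0, 2.0, 3.0], size=3)
```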
LSTM (meta op.)#
☑️ This is a meta-operator.
- aidge_core.LSTM(in_channels: SupportsInt, hidden_channels: SupportsInt, seq_length: SupportsInt, nobias: bool = False, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an LSTM (Long Short-Term Memory) operator.
The LSTM operator is a recurrent neural network (RNN) variant designed to model sequential data while addressing the vanishing gradient problem. It includes gating mechanisms to control information flow through time.
- Parameters:
in_channels (int) – The number of input features per time step.
hidden_channels (int) – The number of hidden units in the LSTM.
seq_length (int) – The number of time steps in the input sequence.
nobias (bool) – If set to True, no bias terms are included in the LSTM computation.
name (str) – Name of the node (optional).
- Returns:
A node containing the LSTM operator.
- Return type:
aidge_core.Node
-
std::shared_ptr<Node> Aidge::LSTM(DimSize_t in_channels, DimSize_t hidden_channels, DimSize_t seq_length, bool noBias = false, const std::string &name = "")#
Creates an LSTM (Long Short-Term Memory) operator.
- Parameters:
in_channels – [in] The number of input channels.
hidden_channels – [in] The number of hidden channels in the LSTM.
seq_length – [in] The length of the input sequence.
noBias – [in] Whether to disable the bias (default is false).
name – [in] Optional name for the operation.
- Returns:
A shared pointer to the Node representing the LSTM operation.
MatMul#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>MatMulOp</b>
"):::operator
In0[data_input1]:::text-only -->|"In[0]"| Op
In1[data_input2]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.MatMul(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a MatMul operator that performs matrix multiplication between two tensors.
- The operation is defined as:
Output = Input1 @ Input2
This operator implements generalized matrix multiplication, supporting batched matrix multiplication and broadcasting rules consistent with NumPy.
- Example:
Input A: (M, K), Input B: (K, N) -> Output: (M, N)
Input A: (batch_size, M, K), Input B: (K, N) -> Output: (batch_size, M, N)
- Parameters:
name (str) – Optional name of the node.
- Returns:
A node containing the MatMul operator.
- Return type:
aidge_core.MatMulOp
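The 2D case can be sketched in a few lines of plain Python (illustrative only; batched inputs additionally broadcast the leading dimensions, which this sketch omits):

```python
def matmul2d(a, b):
    # a: (M, K), b: (K, N), both as nested lists -> (M, N).
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

matmul2d([[1, 2]], [[3], [4]])  # [[11]]  -- (1, 2) @ (2, 1) -> (1, 1)
```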
Max#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>MaxOp</b>
"):::operator
In0[data_input_1]:::text-only -->|"In[0]"| Op
In1[data_input_n]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Max(nb_inputs: SupportsInt, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a max operator that performs element-wise maximum between multiple tensors.
The operation is defined as:
Output = max(Input1, Input2, ..., InputN)
The output tensor shape is determined by taking the maximum size along each dimension of the input tensors after broadcasting.
Examples:
Input 1: (3, 4, 2), Input 2: (2), Output: (3, 4, 2)
Input 1: (1, 5, 3), Input 2: (2, 1, 3), Output: (2, 5, 3)
- Parameters:
nb_inputs (int) – number of inputs to max.
name (str) – Name of the node (optional).
- Returns:
A node containing the Max operator.
- Return type:
MaxOp
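The output-shape rule is ordinary NumPy-style broadcasting: shapes are aligned on the right, and each output dimension is the maximum of the aligned sizes, which must otherwise match or be 1. A sketch of that rule (`broadcast_shape` is a hypothetical helper, not part of the API):

```python
def broadcast_shape(*shapes):
    # Align shapes on the right; each output dim is the max of the
    # aligned sizes, which must all agree except where they are 1.
    ndim = max(len(s) for s in shapes)
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
    out = []
    for dims in zip(*padded):
        if len({d for d in dims if d != 1}) > 1:
            raise ValueError(f"incompatible dimensions {dims}")
        out.append(max(dims))
    return tuple(out)

broadcast_shape((1, 5, 3), (2, 1, 3))  # (2, 5, 3)
```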
MaxPooling1D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>MaxPooling1DOp</b>
Attributes:
<sub><em>kernel_dims</em></sub>
<sub><em>stride_dims</em></sub>
<sub><em>dilations</em></sub>
<sub><em>ceil_mode</em></sub>
<sub><em>storage_order</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
Op -->|"Out[1]"| Out1[indices]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.MaxPooling1D(kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1], dilations: collections.abc.Sequence[SupportsInt] = [1], ceil_mode: bool = False, storage_order: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a MaxPooling operator.
This operator performs max pooling, which reduces the input tensor size by selecting the maximum value in each kernel-sized window, with configurable stride, dilation, and ceiling mode for the output dimension calculation.
- Parameters:
kernel_dims (List[int]) – The size of the kernel to apply to each dimension.
stride_dims (List[int]) – The stride (step size) to move the kernel over the input.
dilations (List[int]) – The dilation value along each spatial axis of the filter.
ceil_mode (bool) – Whether to use ceil or floor when calculating the output dimensions.
storage_order (bool) – Whether to treat the input as column-major when computing the indices output.
name (str) – Name of the node (optional).
- Returns:
A node containing the MaxPooling operator.
- Return type:
MaxPoolingOp
-
std::shared_ptr<Node> Aidge::MaxPooling1D(const std::array<DimSize_t, 1> &kernel_dims, const std::string &name = "", const std::array<DimSize_t, 1> &stride_dims = create_array<DimSize_t, 1>(1), const std::array<DimSize_t, 1> &dilations = create_array<DimSize_t, 1>(1), bool ceil_mode = false, bool storage_order = false)#
Factory function for creating 1D MaxPooling operations.
- Parameters:
kernel_dims – [in] Kernel dimensions specifying the size of the pooling window.
name – [in] Optional name for the operation.
stride_dims – [in] Stride dimensions specifying the step size for the pooling window.
dilations – [in] Spatial dilations for the pooling operation.
ceil_mode – [in] Indicates whether to use ceil mode for output size calculation.
storage_order – [in] If true, indices will be stored as column-major.
- Returns:
A shared pointer to a Node representing the MaxPooling operation.
MaxPooling2D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>MaxPooling2DOp</b>
Attributes:
<sub><em>kernel_dims</em></sub>
<sub><em>stride_dims</em></sub>
<sub><em>dilations</em></sub>
<sub><em>ceil_mode</em></sub>
<sub><em>storage_order</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
Op -->|"Out[1]"| Out1[indices]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.MaxPooling2D(kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1], dilations: collections.abc.Sequence[SupportsInt] = [1, 1], ceil_mode: bool = False, storage_order: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a MaxPooling operator.
This operator performs max pooling, which reduces the input tensor size by selecting the maximum value in each kernel-sized window, with configurable stride, dilation, and ceiling mode for the output dimension calculation.
- Parameters:
kernel_dims (List[int]) – The size of the kernel to apply to each dimension.
stride_dims (List[int]) – The stride (step size) to move the kernel over the input.
dilations (List[int]) – The dilation value along each spatial axis of the filter.
ceil_mode (bool) – Whether to use ceil or floor when calculating the output dimensions.
storage_order (bool) – Whether to treat the input as column-major when computing the indices output.
name (str) – Name of the node (optional).
- Returns:
A node containing the MaxPooling operator.
- Return type:
MaxPoolingOp
-
std::shared_ptr<Node> Aidge::MaxPooling2D(const std::array<DimSize_t, 2> &kernel_dims, const std::string &name = "", const std::array<DimSize_t, 2> &stride_dims = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2> &dilations = create_array<DimSize_t, 2>(1), bool ceil_mode = false, bool storage_order = false)#
Factory function for creating 2D MaxPooling operations.
- Parameters:
kernel_dims – [in] Kernel dimensions specifying the size of the pooling window.
name – [in] Optional name for the operation.
stride_dims – [in] Stride dimensions specifying the step size for the pooling window.
dilations – [in] Spatial dilations for the pooling operation.
ceil_mode – [in] Indicates whether to use ceil mode for output size calculation.
storage_order – [in] If true, indices will be stored as column-major.
- Returns:
A shared pointer to a Node representing the MaxPooling operation.
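The effect of ceil_mode on the output shape follows the usual pooling output-size formula (no padding; the effective kernel extent is dilation·(kernel−1)+1). A sketch for one spatial axis:

```python
import math

def pool_out_size(in_size, kernel, stride=1, dilation=1, ceil_mode=False):
    # Output length of one spatial axis for an unpadded pooling window.
    eff_kernel = dilation * (kernel - 1) + 1
    ratio = (in_size - eff_kernel) / stride
    return int(math.ceil(ratio) if ceil_mode else math.floor(ratio)) + 1

pool_out_size(7, 2, stride=2)                  # floor(5/2) + 1 = 3
pool_out_size(7, 2, stride=2, ceil_mode=True)  # ceil(5/2) + 1 = 4
```

With ceil_mode enabled, a final partial window at the border produces one extra output element.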
MaxPooling3D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>MaxPooling3DOp</b>
Attributes:
<sub><em>kernel_dims</em></sub>
<sub><em>stride_dims</em></sub>
<sub><em>dilations</em></sub>
<sub><em>ceil_mode</em></sub>
<sub><em>storage_order</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
Op -->|"Out[1]"| Out1[indices]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.MaxPooling3D(kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1, 1], dilations: collections.abc.Sequence[SupportsInt] = [1, 1, 1], ceil_mode: bool = False, storage_order: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a MaxPooling operator.
This operator performs max pooling, which reduces the input tensor size by selecting the maximum value in each kernel-sized window, with configurable stride, dilation, and ceiling mode for the output dimension calculation.
- Parameters:
kernel_dims (List[int]) – The size of the kernel to apply to each dimension.
stride_dims (List[int]) – The stride (step size) to move the kernel over the input.
dilations (List[int]) – The dilation value along each spatial axis of the filter.
ceil_mode (bool) – Whether to use ceil or floor when calculating the output dimensions.
storage_order (bool) – Whether to treat the input as column-major when computing the indices output.
name (str) – Name of the node (optional).
- Returns:
A node containing the MaxPooling operator.
- Return type:
MaxPoolingOp
-
std::shared_ptr<Node> Aidge::MaxPooling3D(const std::array<DimSize_t, 3> &kernel_dims, const std::string &name = "", const std::array<DimSize_t, 3> &stride_dims = create_array<DimSize_t, 3>(1), const std::array<DimSize_t, 3> &dilations = create_array<DimSize_t, 3>(1), bool ceil_mode = false, bool storage_order = false)#
Factory function for creating 3D MaxPooling operations.
- Parameters:
kernel_dims – [in] Kernel dimensions specifying the size of the pooling window.
name – [in] Optional name for the operation.
stride_dims – [in] Stride dimensions specifying the step size for the pooling window.
dilations – [in] Spatial dilations for the pooling operation.
ceil_mode – [in] Indicates whether to use ceil mode for output size calculation.
storage_order – [in] If true, indices will be stored as column-major.
- Returns:
A shared pointer to a Node representing the MaxPooling operation.
Memorize#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>MemorizeOp</b>
Attributes:
<sub><em>schedule_step</em></sub>
<sub><em>forward_step</em></sub>
<sub><em>end_step</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[data_input_init]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
Op -->|"Out[1]"| Out1[data_output_rec]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Memorize(end_step: SupportsInt, name: str = '') aidge_core.aidge_core.Node#
-
std::shared_ptr<Node> Aidge::Memorize(const std::uint32_t endStep, const std::string &name = "")#
Create a Memorize operator node.
- Parameters:
endStep – The step duration for memory storage.
name – The optional name for the node.
- Returns:
A shared pointer to the newly created Memorize operator node.
Min#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>MinOp</b>
"):::operator
In0[data_input_1]:::text-only -->|"In[0]"| Op
In1[data_input_n]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Min(nb_inputs: SupportsInt, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a min operator that performs element-wise minimum between multiple tensors.
The operation is defined as:
Output = min(Input1, Input2, ..., InputN)
The output tensor shape is determined by taking the maximum size along each dimension of the input tensors after broadcasting.
Examples:
Input 1: (3, 4, 2), Input 2: (2), Output: (3, 4, 2)
Input 1: (1, 5, 3), Input 2: (2, 1, 3), Output: (2, 5, 3)
- Parameters:
nb_inputs (int) – number of inputs to min.
name (str) – Name of the node (optional).
- Returns:
A node containing the Min operator.
- Return type:
MinOp
Mod#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ModOp</b>
Attributes:
<sub><em>fmod</em></sub>
"):::operator
In0[dividend]:::text-only -->|"In[0]"| Op
In1[divisor]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[remainder]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Mod(name: str = '', fmod: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a Mod operator that performs element-wise binary modulus between two tensors.
The operation is defined as:
Output = Input1 mod Input2
The output tensor shape is determined by taking the maximum size along each dimension of the input tensors after broadcasting.
Examples:
Input A: (3, 4, 2), Input B: (2), Output: (3, 4, 2) Input A: (1, 5, 3), Input B: (2, 1, 3), Output: (2, 5, 3)
- Parameters:
name (str) – Name of the node (optional).
fmod (bool) – If True, the operator behaves like C's fmod: the remainder takes the sign of the dividend (optional).
- Returns:
A node containing the Mod operator.
- Return type:
ModOp
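The fmod flag selects between the two usual remainder conventions. In Python terms, using the built-in `%` and `math.fmod` as stand-ins:

```python
import math

# Integer-mod convention (fmod=False): remainder takes the divisor's sign.
int_mod = -7 % 3           # 2
# C fmod convention (fmod=True): remainder takes the dividend's sign.
c_fmod = math.fmod(-7, 3)  # -1.0
```

Both conventions agree when the dividend and divisor share the same sign.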
Mul#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>MulOp</b>
"):::operator
In0[data_input_1]:::text-only -->|"In[0]"| Op
In1[data_input_2]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Mul(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Mul operator that performs element-wise multiplication.
This operator performs element-wise multiplication between two tensors.
- Parameters:
name (str, optional) – Name of the node (default: "").
- Returns:
A node containing the Mul operator.
- Return type:
aidge_core.MulOp
NBitFlip#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>NBitFlipOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.NBitFlip(n_bits: SupportsInt, name: str = '') aidge_core.aidge_core.Node#
Create an NBitFlip node with the specified number of bit flips.
Neg#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>NegOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Neg(name: str = '') aidge_core.aidge_core.Node#
Instantiates a Node that applies the Neg operator to its input tensor, computing the element-wise additive inverse.
Neg(x) = -x
- Parameters:
name (str) – Identifier for the operator node (optional).
- Returns:
A node containing the Neg operator.
- Return type:
NegOp
-
std::shared_ptr<Node> Aidge::Neg(const std::string &name = "")#
Constructs a node containing the Neg operator.
This function creates a computational node that applies the Neg operation to its input tensor. The operation is defined element-wise as:
\[ Neg(x) = -x \]
- Parameters:
name – identifier for the operator node (optional).
- Returns:
A shared pointer to the constructed node.
OneHot#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>OneHotOp</b>
Attributes:
<sub><em>axis</em></sub>
<sub><em>depth</em></sub>
<sub><em>values</em></sub>
"):::operator
In0[indices]:::text-only -->|"In[0]"| Op
In1[depth]:::text-only -->|"In[1]"| Op
In2[values]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.OneHot(depth: SupportsInt, name: str = '', values: aidge_core.aidge_core.Tensor = Tensor([0.00000, 1.00000], dims=[2], dtype=float32), axis: SupportsInt = -1) aidge_core.aidge_core.Node#
PaddedAvgPooling2D (meta op.)#
☑️ This is a meta-operator.
- aidge_core.PaddedAvgPooling2D(kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1], dilations: collections.abc.Sequence[SupportsInt] = [1, 1], padding_dims: collections.abc.Sequence[SupportsInt] = [0, 0, 0, 0], ceil_mode: bool = False, count_include_pad: bool = True, rounding_mode: object = 'half_away_from_zero') aidge_core.aidge_core.Node#
Initialize a node containing a Padded Average Pooling operator.
This operator performs an average pooling operation with explicit padding. The output value is computed as the average of input values within a defined kernel window.
- Parameters:
kernel_dims (List[int]) – The size of the pooling kernel for each dimension.
stride_dims (List[int]) – The stride (step size) for kernel movement.
dilations (List[int]) – The dilation factor for the kernel, which increases the spacing between elements.
padding_dims (List[int]) – Explicit padding to apply before pooling.
ceil_mode (bool) – If set to True, the output shape is computed using ceil instead of floor.
count_include_pad (bool) – If set to True, padded values are included when computing the average.
rounding_mode (str) – Rounding mode applied to the averaged value (default: 'half_away_from_zero').
name (str) – Name of the node (optional).
- Returns:
A node containing the Padded Average Pooling operator.
- Return type:
PaddedAvgPoolingOp
-
std::shared_ptr<Node> Aidge::PaddedAvgPooling2D(const std::array<DimSize_t, 2> &kernel_dims, const std::string &name = "", const std::array<DimSize_t, 2> &stride_dims = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2> &dilations = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2 * 2> &padding_dims = create_array<DimSize_t, 2 * 2>(0), bool ceil_mode = false, bool count_include_pad = true, RoundingMode roundingMode = RoundingMode::HalfAwayFromZero)#
Creates a 2D padded average pooling operation.
This function creates an average pooling operation with padding before pooling.
- Parameters:
kernel_dims – [in] The dimensions of the pooling window.
name – [in] Optional name for the operation.
stride_dims – [in] The stride dimensions for pooling (default is 1).
dilations – [in] The spatial dilations for pooling (default is 1).
padding_dims – [in] Padding dimensions before pooling (default is 0).
ceil_mode – [in] Whether to use ceiling mode for pooling (default is false).
count_include_pad – [in] Whether padded values are included in the average (default is true).
roundingMode – [in] Rounding mode applied to the averaged value (default is HalfAwayFromZero).
- Returns:
A shared pointer to the Node representing the padded average pooling operation.
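count_include_pad only matters for windows that overlap the padding: it decides whether the zero-padded positions count in the divisor of the average. A sketch for a single window (`window_average` is a hypothetical helper):

```python
def window_average(values, n_pad, count_include_pad=True):
    # values: the in-bounds inputs covered by one pooling window;
    # n_pad: how many padded (zero) positions the window overlaps.
    total = sum(values)  # padded positions contribute zero to the sum
    count = len(values) + n_pad if count_include_pad else len(values)
    return total / count

window_average([4.0, 2.0], n_pad=2)                           # 6/4 = 1.5
window_average([4.0, 2.0], n_pad=2, count_include_pad=False)  # 6/2 = 3.0
```

Interior windows that touch no padding are unaffected by the flag.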
PaddedConv1D (meta op.)#
☑️ This is a meta-operator.
- aidge_core.PaddedConv1D(in_channels: SupportsInt, out_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1], padding_dims: collections.abc.Sequence[SupportsInt] = [0, 0], dilation_dims: collections.abc.Sequence[SupportsInt] = [1], no_bias: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a Padded Convolution operator.
This operator performs a convolution operation with explicit padding. It applies a kernel filter over an input tensor with specified stride and dilation settings.
- Parameters:
in_channels (int) – Number of input channels.
out_channels (int) – Number of output channels.
kernel_dims (List[int]) – The size of the convolutional kernel for each dimension.
stride_dims (List[int]) – The stride (step size) for kernel movement.
padding_dims (List[int]) – Explicit padding to apply before convolution.
dilation_dims (List[int]) – The dilation factor for kernel spacing.
no_bias (bool) – Whether to disable bias addition in the convolution.
name (str) – Name of the node (optional).
- Returns:
A node containing the Padded Convolution operator.
- Return type:
PaddedConvOp
-
std::shared_ptr<Node> Aidge::PaddedConv1D(DimSize_t in_channels, DimSize_t out_channels, const std::array<DimSize_t, 1> &kernel_dims, const std::string &name = "", const std::array<DimSize_t, 1> &stride_dims = create_array<DimSize_t, 1>(1), const std::array<DimSize_t, 2 * 1> &padding_dims = create_array<DimSize_t, 2 * 1>(0), const std::array<DimSize_t, 1> &dilation_dims = create_array<DimSize_t, 1>(1), bool no_bias = false)#
Creates a 1D padded convolution operation.
This function creates a padded convolution operation that applies padding before the convolution operation. It uses various parameters like the number of input/output channels, kernel dimensions, stride, padding, etc.
- Parameters:
in_channels – [in] The number of input channels.
out_channels – [in] The number of output channels.
kernel_dims – [in] The dimensions of the convolution kernel.
name – [in] Optional name for the operation.
stride_dims – [in] The stride dimensions for the convolution operation (default is 1).
padding_dims – [in] The padding dimensions to apply before convolution (default is 0).
dilation_dims – [in] Dilation factor for convolution (default is 1).
no_bias – [in] Whether to disable the bias (default is false).
- Returns:
A shared pointer to the Node representing the padded convolution operation.
PaddedConv2D (meta op.)#
☑️ This is a meta-operator.
- aidge_core.PaddedConv2D(in_channels: SupportsInt, out_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1], padding_dims: collections.abc.Sequence[SupportsInt] = [0, 0, 0, 0], dilation_dims: collections.abc.Sequence[SupportsInt] = [1, 1], no_bias: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a Padded Convolution operator.
This operator performs a convolution operation with explicit padding. It applies a kernel filter over an input tensor with specified stride and dilation settings.
- Parameters:
in_channels (int) – Number of input channels.
out_channels (int) – Number of output channels.
kernel_dims (List[int]) – The size of the convolutional kernel for each dimension.
stride_dims (List[int]) – The stride (step size) for kernel movement.
padding_dims (List[int]) – Explicit padding to apply before convolution.
dilation_dims (List[int]) – The dilation factor for kernel spacing.
no_bias (bool) – Whether to disable bias addition in the convolution.
name (str) – Name of the node (optional).
- Returns:
A node containing the Padded Convolution operator.
- Return type:
PaddedConvOp
-
std::shared_ptr<Node> Aidge::PaddedConv2D(DimSize_t in_channels, DimSize_t out_channels, const std::array<DimSize_t, 2> &kernel_dims, const std::string &name = "", const std::array<DimSize_t, 2> &stride_dims = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2 * 2> &padding_dims = create_array<DimSize_t, 2 * 2>(0), const std::array<DimSize_t, 2> &dilation_dims = create_array<DimSize_t, 2>(1), bool no_bias = false)#
Creates a 2D padded convolution operation.
This function creates a padded convolution operation that applies padding before the convolution operation. It uses various parameters like the number of input/output channels, kernel dimensions, stride, padding, etc.
- Parameters:
in_channels – [in] The number of input channels.
out_channels – [in] The number of output channels.
kernel_dims – [in] The dimensions of the convolution kernel.
name – [in] Optional name for the operation.
stride_dims – [in] The stride dimensions for the convolution operation (default is 1).
padding_dims – [in] The padding dimensions to apply before convolution (default is 0).
dilation_dims – [in] Dilation factor for convolution (default is 1).
no_bias – [in] Whether to disable the bias (default is false).
- Returns:
A shared pointer to the Node representing the padded convolution operation.
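The padding interacts with kernel size, stride, and dilation through the standard convolution output-size formula; a sketch for one spatial axis (illustrative only):

```python
def conv_out_size(in_size, kernel, stride=1, pad_begin=0, pad_end=0, dilation=1):
    # The effective kernel extent grows with dilation.
    eff_kernel = dilation * (kernel - 1) + 1
    return (in_size + pad_begin + pad_end - eff_kernel) // stride + 1

conv_out_size(32, 3, pad_begin=1, pad_end=1)            # 32 ("same" output)
conv_out_size(32, 3, stride=2, pad_begin=1, pad_end=1)  # 16
```

A 3x3 kernel with padding 1 on each side preserves the spatial size at stride 1, which is why [0, 0, 0, 0] defaults shrink the output by kernel − 1 per axis.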
PaddedConv3D (meta op.)#
☑️ This is a meta-operator.
- aidge_core.PaddedConv3D(in_channels: SupportsInt, out_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1, 1], padding_dims: collections.abc.Sequence[SupportsInt] = [0, 0, 0, 0, 0, 0], dilation_dims: collections.abc.Sequence[SupportsInt] = [1, 1, 1], no_bias: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a Padded Convolution operator.
This operator performs a convolution operation with explicit padding. It applies a kernel filter over an input tensor with specified stride and dilation settings.
- Parameters:
in_channels (int) – Number of input channels.
out_channels (int) – Number of output channels.
kernel_dims (List[int]) – The size of the convolutional kernel for each dimension.
stride_dims (List[int]) – The stride (step size) for kernel movement.
padding_dims (List[int]) – Explicit padding to apply before convolution.
dilation_dims (List[int]) – The dilation factor for kernel spacing.
no_bias (bool) – Whether to disable bias addition in the convolution.
name (str) – Name of the node (optional).
- Returns:
A node containing the Padded Convolution operator.
- Return type:
PaddedConvOp
-
std::shared_ptr<Node> Aidge::PaddedConv3D(DimSize_t in_channels, DimSize_t out_channels, const std::array<DimSize_t, 3> &kernel_dims, const std::string &name = "", const std::array<DimSize_t, 3> &stride_dims = create_array<DimSize_t, 3>(1), const std::array<DimSize_t, 2 * 3> &padding_dims = create_array<DimSize_t, 2 * 3>(0), const std::array<DimSize_t, 3> &dilation_dims = create_array<DimSize_t, 3>(1), bool no_bias = false)#
Creates a 3D padded convolution operation.
This function creates a padded convolution operation that applies padding before the convolution operation. It uses various parameters like the number of input/output channels, kernel dimensions, stride, padding, etc.
- Parameters:
in_channels – [in] The number of input channels.
out_channels – [in] The number of output channels.
kernel_dims – [in] The dimensions of the convolution kernel.
name – [in] Optional name for the operation.
stride_dims – [in] The stride dimensions for the convolution operation (default is 1).
padding_dims – [in] The padding dimensions to apply before convolution (default is 0).
dilation_dims – [in] Dilation factor for convolution (default is 1).
no_bias – [in] Whether to disable the bias (default is false).
- Returns:
A shared pointer to the Node representing the padded convolution operation.
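As with any strided convolution, the spatial output size follows from the kernel, stride, padding, and dilation settings. The helper below is a hypothetical name using the standard floor-mode formula, not Aidge's own code:

```python
def conv_out_dim(in_dim, kernel, stride=1, pad_begin=0, pad_end=0, dilation=1):
    """Spatial output size of a padded convolution (standard floor-mode formula)."""
    effective_kernel = dilation * (kernel - 1) + 1
    return (in_dim + pad_begin + pad_end - effective_kernel) // stride + 1

# A 3x3 kernel with padding 1 and stride 1 preserves the spatial size:
print(conv_out_dim(32, kernel=3, pad_begin=1, pad_end=1))  # -> 32
```

The same formula applies per spatial axis for the 1D, 2D, and 3D variants.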
PaddedConvDepthWise1D (meta op.)#
☑️ This is a meta-operator.
- aidge_core.PaddedConvDepthWise1D(nb_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1], padding_dims: collections.abc.Sequence[SupportsInt] = [0, 0], dilation_dims: collections.abc.Sequence[SupportsInt] = [1], no_bias: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a Depthwise Padded Convolution operator.
This operator performs a depthwise convolution operation, where each input channel is convolved separately with a different kernel. The operation includes explicit padding, stride control, and dilation options.
- Parameters:
nb_channels (int) – Number of input channels (also the number of output channels since depthwise convolution does not mix channels).
kernel_dims (List[int]) – The size of the convolutional kernel for each dimension.
stride_dims (List[int]) – The stride (step size) for kernel movement.
padding_dims (List[int]) – Explicit padding to apply before convolution.
dilation_dims (List[int]) – The dilation factor for kernel spacing.
no_bias (bool) – Whether to disable bias addition in the convolution.
name (str) – Name of the node (optional).
- Returns:
A node containing the Depthwise Padded Convolution operator.
- Return type:
PaddedConvDepthWiseOp
-
std::shared_ptr<Node> Aidge::PaddedConvDepthWise1D(const DimSize_t nb_channels, const std::array<DimSize_t, 1> &kernel_dims, const std::string &name = "", const std::array<DimSize_t, 1> &stride_dims = create_array<DimSize_t, 1>(1), const std::array<DimSize_t, 2 * 1> &padding_dims = create_array<DimSize_t, 2 * 1>(0), const std::array<DimSize_t, 1> &dilation_dims = create_array<DimSize_t, 1>(1), bool no_bias = false)#
Creates a 1D padded depthwise convolution operation.
This function creates a depthwise convolution operation with padding, where each input channel has its own filter.
- Parameters:
nb_channels – [in] The number of input/output channels (same for depthwise convolution).
kernel_dims – [in] The dimensions of the convolution kernel.
name – [in] Optional name for the operation.
stride_dims – [in] The stride dimensions for the convolution operation (default is 1).
padding_dims – [in] The padding dimensions to apply before convolution (default is 0).
dilation_dims – [in] Dilation factor for convolution (default is 1).
no_bias – [in] Whether to disable the bias (default is false).
- Returns:
A shared pointer to the Node representing the padded depthwise convolution operation.
PaddedConvDepthWise2D (meta op.)#
☑️ This is a meta-operator.
- aidge_core.PaddedConvDepthWise2D(nb_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1], padding_dims: collections.abc.Sequence[SupportsInt] = [0, 0, 0, 0], dilation_dims: collections.abc.Sequence[SupportsInt] = [1, 1], no_bias: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a Depthwise Padded Convolution operator.
This operator performs a depthwise convolution operation, where each input channel is convolved separately with a different kernel. The operation includes explicit padding, stride control, and dilation options.
- Parameters:
nb_channels (int) – Number of input channels (also the number of output channels since depthwise convolution does not mix channels).
kernel_dims (List[int]) – The size of the convolutional kernel for each dimension.
stride_dims (List[int]) – The stride (step size) for kernel movement.
padding_dims (List[int]) – Explicit padding to apply before convolution.
dilation_dims (List[int]) – The dilation factor for kernel spacing.
no_bias (bool) – Whether to disable bias addition in the convolution.
name (str) – Name of the node (optional).
- Returns:
A node containing the Depthwise Padded Convolution operator.
- Return type:
PaddedConvDepthWiseOp
-
std::shared_ptr<Node> Aidge::PaddedConvDepthWise2D(const DimSize_t nb_channels, const std::array<DimSize_t, 2> &kernel_dims, const std::string &name = "", const std::array<DimSize_t, 2> &stride_dims = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2 * 2> &padding_dims = create_array<DimSize_t, 2 * 2>(0), const std::array<DimSize_t, 2> &dilation_dims = create_array<DimSize_t, 2>(1), bool no_bias = false)#
Creates a 2D padded depthwise convolution operation.
This function creates a depthwise convolution operation with padding, where each input channel has its own filter.
- Parameters:
nb_channels – [in] The number of input/output channels (same for depthwise convolution).
kernel_dims – [in] The dimensions of the convolution kernel.
name – [in] Optional name for the operation.
stride_dims – [in] The stride dimensions for the convolution operation (default is 1).
padding_dims – [in] The padding dimensions to apply before convolution (default is 0).
dilation_dims – [in] Dilation factor for convolution (default is 1).
no_bias – [in] Whether to disable the bias (default is false).
- Returns:
A shared pointer to the Node representing the padded depthwise convolution operation.
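The practical difference from a regular convolution is the weight count: a depthwise kernel never mixes channels, so it needs one filter per channel instead of one per (input, output) channel pair. A small sketch (helper names are illustrative, not Aidge API):

```python
from math import prod

def conv_weight_count(in_ch, out_ch, kernel_dims):
    # Regular convolution: one kernel per (output, input) channel pair.
    return out_ch * in_ch * prod(kernel_dims)

def depthwise_weight_count(nb_channels, kernel_dims):
    # Depthwise: one kernel per channel; channels are not mixed.
    return nb_channels * prod(kernel_dims)

print(conv_weight_count(64, 64, (3, 3)))   # -> 36864
print(depthwise_weight_count(64, (3, 3)))  # -> 576
```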
PaddedConvTranspose1D (meta op.)#
☑️ This is a meta-operator.
- aidge_core.PaddedConvTranspose1D(in_channels: SupportsInt, out_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], stride_dims: collections.abc.Sequence[SupportsInt] = [1], dilation_dims: collections.abc.Sequence[SupportsInt] = [1], no_bias: bool = False, padding_dims: collections.abc.Sequence[SupportsInt] = [0, 0], name: str = '') aidge_core.aidge_core.Node#
-
std::shared_ptr<Node> Aidge::PaddedConvTranspose1D(const DimSize_t &inChannels, const DimSize_t &outChannels, const std::array<DimSize_t, 1> &kernelDims, const std::array<DimSize_t, 1> &strideDims = create_array<DimSize_t, 1>(1), const std::array<DimSize_t, 1> &dilationDims = create_array<DimSize_t, 1>(1), const bool noBias = false, const std::array<DimSize_t, 2 * 1> &paddingDims = create_array<DimSize_t, 2 * 1>(0), const std::string &name = "")#
PaddedConvTranspose2D (meta op.)#
☑️ This is a meta-operator.
- aidge_core.PaddedConvTranspose2D(in_channels: SupportsInt, out_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1], dilation_dims: collections.abc.Sequence[SupportsInt] = [1, 1], no_bias: bool = False, padding_dims: collections.abc.Sequence[SupportsInt] = [0, 0, 0, 0], name: str = '') aidge_core.aidge_core.Node#
-
std::shared_ptr<Node> Aidge::PaddedConvTranspose2D(const DimSize_t &inChannels, const DimSize_t &outChannels, const std::array<DimSize_t, 2> &kernelDims, const std::array<DimSize_t, 2> &strideDims = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2> &dilationDims = create_array<DimSize_t, 2>(1), const bool noBias = false, const std::array<DimSize_t, 2 * 2> &paddingDims = create_array<DimSize_t, 2 * 2>(0), const std::string &name = "")#
PaddedConvTranspose3D (meta op.)#
☑️ This is a meta-operator.
- aidge_core.PaddedConvTranspose3D(in_channels: SupportsInt, out_channels: SupportsInt, kernel_dims: collections.abc.Sequence[SupportsInt], stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1, 1], dilation_dims: collections.abc.Sequence[SupportsInt] = [1, 1, 1], no_bias: bool = False, padding_dims: collections.abc.Sequence[SupportsInt] = [0, 0, 0, 0, 0, 0], name: str = '') aidge_core.aidge_core.Node#
-
std::shared_ptr<Node> Aidge::PaddedConvTranspose3D(const DimSize_t &inChannels, const DimSize_t &outChannels, const std::array<DimSize_t, 3> &kernelDims, const std::array<DimSize_t, 3> &strideDims = create_array<DimSize_t, 3>(1), const std::array<DimSize_t, 3> &dilationDims = create_array<DimSize_t, 3>(1), const bool noBias = false, const std::array<DimSize_t, 2 * 3> &paddingDims = create_array<DimSize_t, 2 * 3>(0), const std::string &name = "")#
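Transposed convolution inverts the usual size relation: stride multiplies the spatial extent, and padding shrinks the output instead of growing it. A sketch of the standard formula (illustrative helper, not part of Aidge):

```python
def conv_transpose_out_dim(in_dim, kernel, stride=1, dilation=1,
                           pad_begin=0, pad_end=0):
    """Spatial output size of a transposed convolution (standard formula;
    padding is subtracted from, rather than added to, the output)."""
    return (in_dim - 1) * stride + dilation * (kernel - 1) + 1 - pad_begin - pad_end

# A stride-2 transposed conv roughly doubles the spatial size:
print(conv_transpose_out_dim(16, kernel=4, stride=2, pad_begin=1, pad_end=1))  # -> 32
```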
PaddedMaxPooling2D (meta op.)#
☑️ This is a meta-operator.
- aidge_core.PaddedMaxPooling2D(kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1], dilations: collections.abc.Sequence[SupportsInt] = [1, 1], padding_dims: collections.abc.Sequence[SupportsInt] = [0, 0, 0, 0], ceil_mode: bool = False, storage_order: bool = False) aidge_core.aidge_core.Node#
Initialize a node containing a Padded Max Pooling operator.
This operator performs a max pooling operation with explicit padding before pooling is applied. The output value is computed as the maximum of input values within a defined kernel window.
- Parameters:
kernel_dims (List[int]) – The size of the pooling kernel for each dimension.
stride_dims (List[int]) – The stride (step size) for kernel movement.
dilations (List[int]) – The dilation factor for the kernel, which increases the spacing between elements.
padding_dims (List[int]) – Explicit padding to apply before pooling.
ceil_mode (bool) – If set to True, the output shape is computed using ceil instead of floor.
storage_order (bool) – If set to True, the output indices are computed as if the input were column-major (default is False).
name (str) – Name of the node (optional).
- Returns:
A node containing the Padded Max Pooling operator.
- Return type:
PaddedMaxPoolingOp
-
inline std::shared_ptr<Node> Aidge::PaddedMaxPooling2D(const std::array<DimSize_t, 2> &kernel_dims, const std::string &name = "", const std::array<DimSize_t, 2> &stride_dims = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2> &dilations = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2 * 2> &padding_dims = create_array<DimSize_t, 2 * 2>(0), bool ceil_mode = false, bool storage_order = false)#
Creates a 2D padded max pooling operation.
This function creates a max pooling operation with padding before pooling.
- Parameters:
kernel_dims – [in] The dimensions of the pooling window.
name – [in] Optional name for the operation.
stride_dims – [in] The stride dimensions for pooling (default is 1).
dilations – [in] The spatial dilations for pooling (default is 1).
padding_dims – [in] Padding dimensions before pooling (default is 0).
ceil_mode – [in] Whether to use ceiling mode for pooling (default is false).
storage_order – [in] Whether to use col-major system for indices output (default is false).
- Returns:
A shared pointer to the Node representing the padded max pooling operation.
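The ceil_mode flag only matters when the kernel does not tile the padded input exactly: floor mode drops the final partial window, while ceil mode keeps it. The standard size formula, sketched with a hypothetical helper:

```python
import math

def pool_out_dim(in_dim, kernel, stride=1, pad=0, dilation=1, ceil_mode=False):
    """Spatial output size of a pooling window (standard formula)."""
    effective_kernel = dilation * (kernel - 1) + 1
    span = in_dim + 2 * pad - effective_kernel
    return (math.ceil(span / stride) if ceil_mode else span // stride) + 1

# With a 7-wide input, a 2-wide kernel and stride 2, floor drops the last window:
print(pool_out_dim(7, 2, stride=2))                  # -> 3
print(pool_out_dim(7, 2, stride=2, ceil_mode=True))  # -> 4
```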
Pad#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>PadOp</b>
Attributes:
<sub><em>mode</em></sub>
<sub><em>pads</em></sub>
<sub><em>constant_value</em></sub>
<sub><em>axes</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[pads]:::text-only -->|"In[1]"| Op
In2[value]:::text-only -->|"In[2]"| Op
In3[axes]:::text-only -->|"In[3]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Pad(pads: collections.abc.Sequence[typing.SupportsInt] = [], name: str = '', mode: object = <PaddingMode.CONSTANT: 0>, constant_value: object = 0.0, axes: collections.abc.Sequence[typing.SupportsInt] = []) aidge_core.aidge_core.Node#
Initialize a node containing a Pad operator.
This function applies padding to the tensor along the specified dimensions using the given padding type and value.
- Parameters:
pads (List[int]) – Padding configuration, given as [begin, end] values for each padded dimension.
name (str) – Name of the operator node (optional).
mode (PaddingMode) – Type of padding (Constant, Edge, Reflect, Wrap) (default is Constant).
constant_value (float) – The value used for padding when mode is Constant (default is 0.0).
axes (List[int]) – Axes to pad (default: the spatial dimensions, according to the data format).
-
std::shared_ptr<Node> Aidge::Pad(const std::vector<std::int64_t> &pads = {}, const std::string &name = "", PaddingMode mode = PaddingMode::Constant, const Tensor &constantValue = Tensor(), const std::vector<std::int8_t> &axes = {})#
Create a Pad operation node.
- Parameters:
pads – [in] Array specifying padding for the beginning and end of each dimension.
mode – [in] Type of border handling (default is constant).
constantValue – [in] Value to use for constant padding (default is 0.0).
axes – [in] Padding axes (default: on spatial dimensions, according to data format)
- Returns:
A shared pointer to the Node containing the Pad operator.
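The semantics of constant padding are easiest to see in one dimension: the pads give a begin count and an end count, and constant_value fills the new positions. A minimal pure-Python sketch (not the Aidge implementation):

```python
def pad_constant_1d(values, begin, end, constant_value=0.0):
    """Constant-pad a 1D sequence: `begin` values before, `end` values after."""
    return [constant_value] * begin + list(values) + [constant_value] * end

print(pad_constant_1d([1.0, 2.0, 3.0], begin=2, end=1))
# -> [0.0, 0.0, 1.0, 2.0, 3.0, 0.0]
```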
Pop#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>PopOp</b>
Attributes:
<sub><em>forward_step</em></sub>
<sub><em>backward_step</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Pop(name: str = '') aidge_core.aidge_core.Node#
Pow#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>PowOp</b>
"):::operator
In0[data_input_1]:::text-only -->|"In[0]"| Op
In1[data_input_2]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Pow(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Pow operator that performs element-wise exponentiation between two tensors.
- The operation is defined as:
Output = Input1 ^ Input2
The output tensor shape is determined by taking the maximum size along each dimension of the input tensors after broadcasting.
- Examples:
Input A: (3, 4, 2), Input B: (2), Output: (3, 4, 2)
Input A: (1, 5, 3), Input B: (2, 1, 3), Output: (2, 5, 3)
- Parameters:
name (str, optional) – Name of the node, default=””
- Returns:
A node containing the Pow operator.
- Return type:
aidge_core.PowOp
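The broadcasting rule referenced above is the usual NumPy-style one: dimensions are aligned from the right, and each aligned pair must either match or contain a 1. A sketch that reproduces the two examples (hypothetical helper, not Aidge API):

```python
def broadcast_shape(a, b):
    """Resulting shape of an element-wise op under NumPy-style broadcasting."""
    # Left-pad the shorter shape with 1s, then compare dimensions pairwise.
    a = (1,) * (len(b) - len(a)) + tuple(a)
    b = (1,) * (len(a) - len(b)) + tuple(b)
    out = []
    for da, db in zip(a, b):
        if da != db and 1 not in (da, db):
            raise ValueError(f"incompatible dimensions {da} and {db}")
        out.append(max(da, db))
    return tuple(out)

print(broadcast_shape((3, 4, 2), (2,)))       # -> (3, 4, 2)
print(broadcast_shape((1, 5, 3), (2, 1, 3)))  # -> (2, 5, 3)
```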
Producer#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ProducerOp</b>
"):::operator
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Producer(*args, **kwargs)#
Overloaded function.
Producer(tensor: aidge_core.aidge_core.Tensor, name: str = ‘’, constant: bool = False) -> aidge_core.aidge_core.Node
Producer(dims: typing.Annotated[collections.abc.Sequence[typing.SupportsInt], “FixedSize(1)”], name: str = ‘’, constant: bool = False) -> aidge_core.aidge_core.Node
Producer(dims: typing.Annotated[collections.abc.Sequence[typing.SupportsInt], “FixedSize(2)”], name: str = ‘’, constant: bool = False) -> aidge_core.aidge_core.Node
Producer(dims: typing.Annotated[collections.abc.Sequence[typing.SupportsInt], “FixedSize(3)”], name: str = ‘’, constant: bool = False) -> aidge_core.aidge_core.Node
Producer(dims: typing.Annotated[collections.abc.Sequence[typing.SupportsInt], “FixedSize(4)”], name: str = ‘’, constant: bool = False) -> aidge_core.aidge_core.Node
Producer(dims: typing.Annotated[collections.abc.Sequence[typing.SupportsInt], “FixedSize(5)”], name: str = ‘’, constant: bool = False) -> aidge_core.aidge_core.Node
Producer(dims: typing.Annotated[collections.abc.Sequence[typing.SupportsInt], “FixedSize(6)”], name: str = ‘’, constant: bool = False) -> aidge_core.aidge_core.Node
Helper function to create a producer node with specified dimensions.
- Template Parameters:
DIM – The number of dimensions.
- Parameters:
dims – [in] Array defining the dimensions of the tensor.
name – [in] Optional name for the node.
constant – [in] Indicates whether the tensor should be constant.
- Returns:
A shared pointer to the created node.
RandomNormalLike#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>RandomNormalLikeOp</b>
Attributes:
<sub><em>mean</em></sub>
<sub><em>scale</em></sub>
<sub><em>dtype</em></sub>
<sub><em>seed</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.RandomNormalLike(mean: SupportsFloat = 0.0, scale: SupportsFloat = 1.0, dtype: object = 'any', seed: SupportsFloat = nan, name: str = '') aidge_core.aidge_core.Node#
Create a RandomNormalLike node with specified mean and scale.
- Parameters:
mean – The mean of the normal distribution. Defaults to 0.0.
scale – The standard deviation of the normal distribution. Defaults to 1.0.
dtype – The data type for the elements of the output tensor. Defaults to DataType::Any.
seed – The seed for the random number generator. Defaults to NaN.
name – The name of the node.
-
std::shared_ptr<Node> Aidge::RandomNormalLike(float mean = 0.0f, float scale = 1.0f, DataType dtype = DataType::Any, float seed = std::numeric_limits<float>::quiet_NaN(), const std::string &name = "")#
Apply the RandomNormalLike operation to a tensor.
- Parameters:
mean – [in] Mean of the normal distribution.
scale – [in] Standard deviation of the normal distribution.
dtype – [in] The data type for the elements of the output tensor.
seed – [in] Seed to the random generator.
name – [in] Name of the Operator.
- Returns:
A shared pointer to the Node containing the RandomNormalLike operator.
Range#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>RangeOp</b>
"):::operator
In0[start]:::text-only -->|"In[0]"| Op
In1[limit]:::text-only -->|"In[1]"| Op
In2[delta]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Range(*args, **kwargs)#
Overloaded function.
Range(name: str = ‘’) -> aidge_core.aidge_core.Node
Range(start: typing.SupportsInt, limit: typing.SupportsInt, delta: typing.SupportsInt, name: str = ‘’) -> aidge_core.aidge_core.Node
Range(start: typing.SupportsFloat, limit: typing.SupportsFloat, delta: typing.SupportsFloat, name: str = ‘’) -> aidge_core.aidge_core.Node
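A Range node produces an arithmetic progression from start (inclusive) toward limit (exclusive) with step delta. A pure-Python sketch of those semantics (illustrative, not the Aidge implementation):

```python
import math

def range_values(start, limit, delta):
    """Values produced by a Range node: start, start+delta, ... up to limit."""
    count = max(0, math.ceil((limit - start) / delta))
    return [start + i * delta for i in range(count)]

print(range_values(0, 5, 2))   # -> [0, 2, 4]
print(range_values(5, 0, -2))  # -> [5, 3, 1]
```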
Reciprocal#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ReciprocalOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Reciprocal(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Reciprocal operator that applies the Reciprocal function element-wise.
- The Reciprocal function is applied element-wise and is defined as:
Reciprocal(x) = 1/x
The operation outputs the inverse of every input value.
- Parameters:
name (str) – Name of the node (optional).
- Returns:
A node containing the Reciprocal operator.
- Return type:
ReciprocalOp
ReduceMax#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ReduceMaxOp</b>
Attributes:
<sub><em>axes</em></sub>
<sub><em>keep_dims</em></sub>
<sub><em>noop_with_empty_axes</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[axes]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.ReduceMax(axes: collections.abc.Sequence[SupportsInt] = [], keep_dims: bool = True, noop_with_empty_axes: bool = False, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a ReduceMax operator.
- Parameters:
axes (List[int]) – Axes along which to do the reduction. The accepted range is [-r, r-1], where r is the rank of the input tensor.
keep_dims (bool, optional) – If True (default), retains the reduced dimensions with size 1; otherwise the reduced dimensions are removed.
noop_with_empty_axes (bool) – If True, the operator simply copies the input when no axes are given; otherwise an empty axes list reduces over all dimensions.
name – name of the node.
-
std::shared_ptr<Node> Aidge::ReduceMax(const std::vector<std::int32_t> &axes = {}, bool keep_dims = true, bool noop_with_empty_axes = false, const std::string &name = "")#
Compute the max value of a Tensor over the specified axes.
Dimensions may be reduced by erasing the specified axes or retaining them with size 1.
- Parameters:
axes – [in] Dimensions over which data max should be computed.
keep_dims – [in] Whether or not reduced dimensions are to be retained.
noop_with_empty_axes – [in] Behavior when no axes are specified.
name – [in] Name of the Operator.
- Returns:
A shared pointer to the Node containing the ReduceMax operator.
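The interaction of axes, keep_dims, and noop_with_empty_axes only determines the output shape, and the same rule applies to the ReduceMean, ReduceMin, and ReduceSum operators below. A shape-level sketch (hypothetical helper, assuming ONNX-style semantics):

```python
def reduced_shape(shape, axes, keep_dims=True, noop_with_empty_axes=False):
    """Output shape of a reduction given its attributes."""
    rank = len(shape)
    if not axes:
        if noop_with_empty_axes:
            return tuple(shape)  # operator copies its input
        axes = range(rank)       # reduce over every dimension
    axes = {a % rank for a in axes}  # normalize negative axes into [0, rank)
    if keep_dims:
        return tuple(1 if i in axes else d for i, d in enumerate(shape))
    return tuple(d for i, d in enumerate(shape) if i not in axes)

print(reduced_shape((2, 3, 4), axes=[-1]))                  # -> (2, 3, 1)
print(reduced_shape((2, 3, 4), axes=[1], keep_dims=False))  # -> (2, 4)
print(reduced_shape((2, 3, 4), axes=[]))                    # -> (1, 1, 1)
```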
ReduceMean#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ReduceMeanOp</b>
Attributes:
<sub><em>axes</em></sub>
<sub><em>keep_dims</em></sub>
<sub><em>noop_with_empty_axes</em></sub>
<sub><em>rounding_mode</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[axes]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.ReduceMean(axes: collections.abc.Sequence[typing.SupportsInt] = [], keep_dims: bool = True, noop_with_empty_axes: bool = False, rounding_mode: aidge_core.aidge_core.RoundingMode = <RoundingMode.HALF_AWAY_FROM_ZERO: 7>, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a ReduceMean operator.
- Parameters:
axes (List[int]) – Axes along which to do the reduction. The accepted range is [-r, r-1], where r is the rank of the input tensor.
keep_dims (bool, optional) – If True (default), retains the reduced dimensions with size 1; otherwise the reduced dimensions are removed.
noop_with_empty_axes (bool) – If True, the operator simply copies the input when no axes are given; otherwise an empty axes list reduces over all dimensions.
name – name of the node.
-
std::shared_ptr<Node> Aidge::ReduceMean(const std::vector<std::int32_t> &axes = {}, bool keep_dims = true, bool noop_with_empty_axes = false, RoundingMode roundingMode = RoundingMode::HalfAwayFromZero, const std::string &name = "")#
Compute the mean value of a Tensor over the specified axes.
Dimensions may be reduced by erasing the specified axes or retaining them with size 1.
- Parameters:
axes – [in] Dimensions over which data mean should be computed.
keep_dims – [in] Whether or not reduced dimensions are to be retained.
noop_with_empty_axes – [in] Behavior when no axes are specified.
name – [in] Name of the Operator.
- Returns:
A shared pointer to the Node containing the ReduceMean operator.
ReduceMin#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ReduceMinOp</b>
Attributes:
<sub><em>axes</em></sub>
<sub><em>keep_dims</em></sub>
<sub><em>noop_with_empty_axes</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[axes]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.ReduceMin(axes: collections.abc.Sequence[SupportsInt] = [], keep_dims: bool = True, noop_with_empty_axes: bool = False, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a ReduceMin operator.
- Parameters:
axes (List[int]) – Axes along which to do the reduction. The accepted range is [-r, r-1], where r is the rank of the input tensor.
keep_dims (bool, optional) – If True (default), retains the reduced dimensions with size 1; otherwise the reduced dimensions are removed.
noop_with_empty_axes (bool) – If True, the operator simply copies the input when no axes are given; otherwise an empty axes list reduces over all dimensions.
name – name of the node.
-
std::shared_ptr<Node> Aidge::ReduceMin(const std::vector<std::int32_t> &axes = {}, bool keep_dims = true, bool noop_with_empty_axes = false, const std::string &name = "")#
Compute the min value of a Tensor over the specified axes.
Dimensions may be reduced by erasing the specified axes or retaining them with size 1.
- Parameters:
axes – [in] Dimensions over which data min should be computed.
keep_dims – [in] Whether or not reduced dimensions are to be retained.
noop_with_empty_axes – [in] Behavior when no axes are specified.
name – [in] Name of the Operator.
- Returns:
A shared pointer to the Node containing the ReduceMin operator.
ReduceSum#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ReduceSumOp</b>
Attributes:
<sub><em>axes</em></sub>
<sub><em>keep_dims</em></sub>
<sub><em>noop_with_empty_axes</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[axes]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.ReduceSum(axes: collections.abc.Sequence[SupportsInt] = [], keep_dims: bool = True, noop_with_empty_axes: bool = False, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a ReduceSum operator.
- Parameters:
axes (List[int]) – Axes along which to do the reduction. The accepted range is [-r, r-1], where r is the rank of the input tensor.
keep_dims (bool) – If True (default), retains the reduced dimensions with size 1; if False, the reduced dimensions are removed.
noop_with_empty_axes (bool) – If True, the operator simply copies the input when no axes are given; if False, an empty axes list reduces over all dimensions.
name – name of the node.
-
inline std::shared_ptr<Node> Aidge::ReduceSum(const std::vector<std::int32_t> &axes = {}, bool keep_dims = true, bool noop_with_empty_axes = false, const std::string &name = "")#
Compute the sum value of a Tensor over the specified axes.
Dimensions may be reduced by erasing the specified axes or retaining them with size 1.
ReLU#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ReLUOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.ReLU(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a ReLU operator that applies the ReLU function element-wise.
- The ReLU function is applied element-wise and is defined as:
ReLU(x) = max(0, x)
The operation sets all negative values to zero and leaves positive values unchanged.
- Parameters:
name (str) – Name of the node (optional).
- Returns:
A node containing the ReLU operator.
- Return type:
ReLUOp
Reshape#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ReshapeOp</b>
Attributes:
<sub><em>shape</em></sub>
<sub><em>allow_zero</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[shape]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Reshape(shape: collections.abc.Sequence[SupportsInt] = [], allowzero: bool = False, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Reshape operator.
This operator reshapes the input tensor to the specified shape, given as a list of integers. Following the usual convention, at most one entry may be -1, in which case that dimension is inferred from the remaining ones; the allow_zero flag controls how 0-valued entries are interpreted.
- Parameters:
shape (List[int]) – The target shape to reshape the tensor to.
allowzero (bool) – Whether to allow zero-size dimensions.
name (str) – Name of the node (optional).
- Returns:
A node containing the Reshape operator.
- Return type:
ReshapeOp
-
std::shared_ptr<Node> Aidge::Reshape(const std::vector<std::int64_t> &shape = {}, bool allowzero = false, const std::string &name = "")#
Create a Reshape operation node.
- Parameters:
shape – [in] Target shape for the output tensor (optional).
allowzero – [in] Whether zeros in the shape retain input tensor dimensions.
name – [in] Name of the operator (optional).
- Returns:
A shared pointer to the Node containing the Reshape operator.
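Assuming ONNX-style Reshape semantics (which the allow_zero attribute suggests Aidge mirrors), a 0 entry copies the corresponding input dimension unless allowzero is set, and a single -1 entry is inferred from the total element count. A shape-resolution sketch (illustrative, not Aidge code):

```python
from math import prod

def resolve_reshape(in_shape, target, allowzero=False):
    """Resolve a Reshape target shape: 0 copies the input dimension
    (unless allowzero), and a single -1 is inferred from the rest."""
    out = [d if (d != 0 or allowzero) else in_shape[i]
           for i, d in enumerate(target)]
    if out.count(-1) > 1:
        raise ValueError("at most one -1 is allowed")
    if -1 in out:
        known = prod(d for d in out if d != -1)
        out[out.index(-1)] = prod(in_shape) // known
    return tuple(out)

print(resolve_reshape((2, 3, 4), [0, -1]))  # -> (2, 12)
print(resolve_reshape((2, 3, 4), [-1]))     # -> (24,)
```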
Resize#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ResizeOp</b>
Attributes:
<sub><em>axes</em></sub>
<sub><em>coordinate_transformation_mode</em></sub>
<sub><em>interpolation_mode</em></sub>
<sub><em>cubic_coeff_a</em></sub>
<sub><em>extrapolation_value</em></sub>
<sub><em>padding_mode</em></sub>
<sub><em>aspect_ratio</em></sub>
<sub><em>antialias</em></sub>
<sub><em>exclude_outside</em></sub>
<sub><em>roi</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[roi ]:::text-only -->|"In[1]"| Op
In2[scales]:::text-only -->|"In[2]"| Op
In3[sizes]:::text-only -->|"In[3]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Resize(roi: collections.abc.Sequence[SupportsFloat] = [], scale: collections.abc.Sequence[SupportsFloat] = [], size: collections.abc.Sequence[SupportsInt] = [], axes: collections.abc.Sequence[SupportsInt] = [], coord_transfo_mode: object = 'half_pixel', interpolation_mode: object = 'round_prefer_floor', cubic_interpolation_coefficient_a: SupportsFloat = -0.75, extrapolation_value: SupportsFloat = 0.0, padding_mode: object = 'edge', aspect_ratio: object = 'stretch', antialias: bool = False, exclude_outside: bool = False, name: str = '') aidge_core.aidge_core.Node#
Create a node representing a Resize operation.
This node supports up to 4 inputs:
#0 Input tensor to resize.
#1 ROI (optional) – Not currently supported. Tensor of double, float, or float16.
#2 Scales (optional) – Float tensor specifying scale factors per dimension.
#3 Sizes – Int64 tensor specifying the target output size.
- Parameters:
roi (List[float]) – Region of interest within the input tensor (optional).
scale (List[float]) – Scale factors per dimension (optional).
size (List[int]) – Target output dimensions (optional).
axes (List[int8]) – Axes to resize (optional).
coord_transfo_mode (Aidge.Interpolation.CoordinateTransformation) – Coordinate transformation mode (optional).
interpolation_mode (Aidge.Interpolation.Mode) – Interpolation mode used for resizing (optional).
cubic_interpolation_coefficient_a (float) – ‘A’ coefficient for cubic interpolation (optional).
extrapolation_value (float) – Value used for extrapolation beyond tensor boundaries (optional).
padding_mode (Aidge.PaddingMode) – Padding mode applied during resizing (optional).
aspect_ratio (Aidge.AspectRatio) – Aspect ratio policy for resizing (optional).
antialias (bool) – Whether to apply antialiasing (optional).
exclude_outside (bool) – Whether to exclude samples outside the ROI (optional).
name (str) – The name of the node (optional).
- Returns:
A node containing the Resize operator.
- Return type:
ResizeOp
-
std::shared_ptr<Node> Aidge::Resize(std::vector<float> roi = std::vector<float>(), std::vector<float> scale = std::vector<float>(), std::vector<std::size_t> size = std::vector<std::size_t>(), std::vector<std::int8_t> axes = std::vector<std::int8_t>(), Interpolation::CoordinateTransformation coordTransfoMode = Interpolation::CoordinateTransformation::HalfPixel, Interpolation::Mode interpolMode = Interpolation::Mode::RoundPreferFloor, float cubicCoefA = -.75f, float extrapolationVal = 0.0f, PaddingMode paddingMode = PaddingMode::Edge, AspectRatio aspectRatio = AspectRatio::Stretch, bool antialias = false, bool excludeOutside = false, const std::string &name = "")#
Factory function to create a node with a Resize operator.
Warning
Only one of ‘scales’ or ‘sizes’ can be set. Both cannot be used simultaneously.
Warning
Padding mode determines how out-of-bound coordinates are handled.
- Parameters:
roi – [in] Optional region of interest.
scale – [in] Optional vector of scaling factors.
size – [in] Optional vector specifying target output size.
axes – [in] Optional list of axes for resizing.
coordTransfoMode – [in] Coordinate transformation method.
interpolMode – [in] Interpolation method.
cubicCoefA – [in] Cubic interpolation coefficient.
extrapolationVal – [in] Value used for out-of-bound positions.
paddingMode – [in] Padding strategy.
aspectRatio – [in] Aspect ratio policy.
antialias – [in] Enable antialias.
excludeOutside – [in] Exclude values outside of ROI.
name – [in] Optional name of the node.
- Returns:
A pointer to the created Node.
Round#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>RoundOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Round(name: str = '', rounding_mode: object = 'half_away_from_zero') aidge_core.aidge_core.Node#
Initialize a node containing a Round operator that rounds tensor values element-wise.
This operator processes the input tensor and rounds each value to the nearest integer. If a value is exactly halfway between two integers, it rounds to the nearest even integer.
- Parameters:
rounding_mode (str) – Rounding rule applied to halfway values (optional, default 'half_away_from_zero').
name (str) – The name of the node (optional).
- Returns:
A node containing the Round operator.
- Return type:
RoundOp
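The round-half-to-even tie-breaking rule described above can be sketched in pure Python, whose built-in round() applies the same rule ("banker's rounding"); this illustrates the rounding semantics only, not the aidge_core kernel:

```python
def round_tensor(values):
    # Python's round() sends halfway cases to the nearest even
    # integer, matching the element-wise rule described above.
    return [float(round(v)) for v in values]

print(round_tensor([0.5, 1.5, 2.5, 2.3]))  # [0.0, 2.0, 2.0, 2.0]
```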
Scatter#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ScatterOp</b>
Attributes:
<sub><em>axis</em></sub>
<sub><em>reduction</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[indices]:::text-only -->|"In[1]"| Op
In2[updates]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Scatter(axis: SupportsInt = 0, reduction: object = 'none', name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an aidge_core.ScatterOp operator.
This operator implements ONNX ScatterElements: it returns a copy of data with values from updates written at positions specified by indices along the given axis.
For each location p in indices (and updates), let q = p with q[axis] = indices[p]. Then the output at q is updated from updates[p]. If several p map to the same q, reduction determines how to combine them:
"none" / Reduction.NONE: replacement (later writes override earlier ones).
"add" / Reduction.ADD: sum.
"mul" / Reduction.MUL: product.
"max" / Reduction.MAX: maximum.
"min" / Reduction.MIN: minimum.
- Constraints:
indices.shape == updates.shape.
updates.dtype == data.dtype.
For all dims except axis, updates/indices sizes match data.
Each index in indices is in [0, data.shape[axis]-1].
Example
axis = 1
data = [[0, 1, 2], [3, 4, 5]]
indices = [[1, 0, 2], [2, 1, 0]]
updates = [[9, 8, 7], [6, 5, 4]]
# ScatterElements(data, indices, updates, axis=1, reduction="none")
# -> writes updates into a copy of data at the indexed columns per row.
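The example above can be run as a pure-Python sketch of ScatterElements for 2-D data with axis=1; this illustrates the write/reduction semantics only, not the aidge_core implementation:

```python
def scatter_elements_axis1(data, indices, updates, reduction="none"):
    out = [row[:] for row in data]  # start from a copy of data
    for r, (idx_row, upd_row) in enumerate(zip(indices, updates)):
        for j, col in enumerate(idx_row):
            if reduction == "none":
                out[r][col] = upd_row[j]   # later writes override earlier ones
            elif reduction == "add":
                out[r][col] += upd_row[j]  # overlapping writes are summed
            else:
                raise ValueError(f"unsupported reduction: {reduction}")
    return out

data    = [[0, 1, 2], [3, 4, 5]]
indices = [[1, 0, 2], [2, 1, 0]]
updates = [[9, 8, 7], [6, 5, 4]]
print(scatter_elements_axis1(data, indices, updates))  # [[8, 9, 7], [4, 5, 6]]
```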
- Parameters:
axis (int) – Axis along which to scatter.
reduction (aidge_core.ScatterOp.Reduction | str) – Conflict resolution for overlapping indices. Accepts aidge_core.ScatterOp.Reduction or one of "none", "add", "mul", "max", "min".
name (str) – Optional name for the created node.
- Returns:
The created aidge_core.ScatterOp node.
- Return type:
aidge_core.ScatterOp
- Raises:
ValueError – If reduction is not one of the supported values.
-
std::shared_ptr<Node> Aidge::Scatter(std::int8_t axis = 0, const Scatter_Op::Reduction &reduction = Scatter_Op::Reduction::None, const std::string &name = "")#
Create a Scatter node.
Initializes a Scatter node that writes values from an updates tensor into a copy of the data tensor along a specified axis, at positions given by an indices tensor.
- Parameters:
axis – [in] The axis along which to scatter elements. Default is 0.
reduction – [in] How to combine values when several updates target the same output element. Default is Scatter_Op::Reduction::None.
name – [in] Optional. The name of the node.
- Returns:
A shared pointer to a Node representing the Scatter operation.
Select#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>SelectOp</b>
"):::operator
In0[select]:::text-only -->|"In[0]"| Op
In1[data_input_0]:::text-only -->|"In[1]"| Op
In2[data_input_n]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Select(nb_inputs: SupportsInt, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Select operator.
The Select operator allows conditional graph execution. The condition can be data-dependent, yet the graph scheduling remains fully static! As per Aidge’s philosophy, sub-graph hierarchy is an optional feature, not a mandatory workaround, contrary to ONNX’s If operator or PyTorch’s torch.cond method. The Select operator has the following advantages over them:
Allows interleaved and hierarchical conditions;
Allows pre-execution of conditional branches or not.
Two scheduling behaviors are possible, depending on whether aidge_core.Scheduler.tag_conditional_nodes was called or not:
Without tags: the graph is scheduled and run as is, meaning every conditional branch is run before selection. Of course, this may result in unnecessary computations. However, branches can be run in parallel, as well as in parallel with the condition determination path;
With tags: only the selected conditional branch is run. To achieve this, the condition determination path has to be scheduled and run entirely before any conditional branch.
In the following example, we implement a conditional graph where the condition depends on the input data.
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
flowchart TB
Producer_0("input<br/><sub><em>(Producer#0)</em></sub>"):::producerCls_rootCls
CryptoHash_0("hash<br/><sub><em>(CryptoHash#0)</em></sub>")
Mod_0("mod<br/><sub><em>(Mod#0)</em></sub>")
ReLU_0("relu fa:fa-circle-question<br/><sub><em>(ReLU#0)</em></sub>")
Tanh_0("tanh fa:fa-circle-question<br/><sub><em>(Tanh#0)</em></sub>"):::conditionCls
Sqrt_0("sqrt fa:fa-circle-question<br/><sub><em>(Sqrt#0)</em></sub>"):::conditionCls
Select_0("select<br/><sub><em>(Select#0)</em></sub>")
Producer_1(<em>Producer#1</em>):::producerCls
Producer_0-->|"0 [2, 3] Float32<br/>↓<br/>0"|CryptoHash_0
Producer_0-->|"0 [2, 3] Float32<br/>↓<br/>0"|ReLU_0
Producer_0-->|"0 [2, 3] Float32<br/>↓<br/>0"|Tanh_0
Producer_0-->|"0 [2, 3] Float32<br/>↓<br/>0"|Sqrt_0
CryptoHash_0-->|"0 [4] UInt64<br/>↓<br/>0"|Mod_0
Mod_0-->|"0 [4] UInt64<br/>↓<br/>0"|Select_0
ReLU_0-->|"0 [2, 3] Float32<br/>↓<br/>1"|Select_0
Tanh_0-->|"0 [2, 3] Float32<br/>↓<br/>2"|Select_0
Sqrt_0-->|"0 [2, 3] Float32<br/>↓<br/>3"|Select_0
Producer_1-->|"0 [1] UInt64<br/>↓<br/>1"|Mod_0
Select_0--->|"0 [2, 3] Float32<br/>↓"|output0((out#0)):::outputCls
classDef inputCls fill:#afa
classDef outputCls fill:#ffa
classDef externalCls fill:#ccc
classDef producerCls fill:#ccf
classDef genericCls fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls stroke-width:5px
classDef rootCls stroke:#f00
classDef producerCls_rootCls stroke:#f00,fill:#ccf
classDef genericCls_rootCls stroke:#f00,fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls_rootCls stroke:#f00,stroke-width:5px
- Parameters:
nb_inputs (int) – The number of input tensors to select from.
name (str) – Name of the node.
Shape#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ShapeOp</b>
Attributes:
<sub><em>start</em></sub>
<sub><em>end</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Shape(start: SupportsInt = 0, end: SupportsInt = -1, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Shape operator that extracts a slice of the tensor’s shape.
This operator extracts a slice of the tensor’s shape from the specified start to end indices. The start and end indices are inclusive and exclusive respectively, and can be positive or negative. The accepted range for both is [-r, r-1], where r is the rank of the input tensor.
- Parameters:
start (int) – The starting index (inclusive) for the shape slice. The accepted range is [-r; r-1].
end (int) – The ending index (exclusive) for the shape slice. The accepted range is [-r; r-1].
name (str) – Name of the node (optional).
- Returns:
A node containing the Shape operator.
- Return type:
ShapeOp
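The inclusive-start/exclusive-end slicing rule described above can be sketched in pure Python (an illustration of the indexing semantics, not the aidge_core implementation; the sketch uses end=None for "all remaining dimensions" to keep it unambiguous, rather than the operator's default of -1):

```python
def shape_slice(shape, start=0, end=None):
    # start is inclusive, end is exclusive; negative indices count
    # from the end, within [-r, r-1] where r is the rank.
    r = len(shape)
    end = r if end is None else end
    start = start + r if start < 0 else start
    end = end + r if end < 0 else end
    return shape[start:end]

print(shape_slice([2, 3, 5, 7], start=1, end=-1))  # [3, 5]
```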
-
std::shared_ptr<Node> Aidge::Shape(const std::int64_t start = 0, const std::int64_t end = -1, const std::string &name = "")#
Create a Shape operation node.
- Parameters:
start – [in] Start index for slicing dimensions (default is 0).
end – [in] End index (exclusive) for slicing dimensions (default is -1, meaning all dimensions).
name – [in] Name of the operator (optional).
- Returns:
A shared pointer to the Node containing the Shape operator.
Sigmoid#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>SigmoidOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Sigmoid(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Sigmoid operator that applies the Sigmoid function element-wise.
- The Sigmoid function is applied element-wise and is defined as:
Sigmoid(x) = 1 / (1 + exp(-x))
This operation squashes each value of the tensor into the range (0, 1), making it commonly used for activation functions in neural networks.
- Parameters:
name (str) – Name of the node (optional).
- Returns:
A node containing the Sigmoid operator.
- Return type:
SigmoidOp
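The formula above can be transcribed directly into pure Python; this is a sketch of the element-wise math, not the aidge_core backend kernel:

```python
import math

def sigmoid(x):
    # Sigmoid(x) = 1 / (1 + exp(-x)), squashing x into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))  # 0.5, the midpoint of the (0, 1) range
```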
Sinh#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>SinhOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Sinh(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Sinh operator.
- Parameters:
name (str) – Name of the node.
Sin#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>SinOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Sin(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Sin operator.
- Parameters:
name (str) – Name of the node.
Slice#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>SliceOp</b>
Attributes:
<sub><em>starts</em></sub>
<sub><em>ends</em></sub>
<sub><em>axes</em></sub>
<sub><em>steps</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[starts]:::text-only -->|"In[1]"| Op
In2[ends]:::text-only -->|"In[2]"| Op
In3[axes]:::text-only -->|"In[3]"| Op
In4[steps]:::text-only -->|"In[4]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Slice(starts: collections.abc.Sequence[SupportsInt] = [], ends: collections.abc.Sequence[SupportsInt] = [], axes: collections.abc.Sequence[SupportsInt] = [], steps: collections.abc.Sequence[SupportsInt] = [], name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Slice operator that slices a tensor along specified axes.
The slicing is done by specifying the starts, ends, axes, and steps for each axis. The accepted range for each of the starts and ends is [-r, r-1], where r is the rank of the input tensor. The axes specify which axes to apply the slice on. The steps specify the step size along each axis. If steps is not provided, it defaults to 1.
- Parameters:
starts (List[int]) – The start indices for the slice along each axis. The accepted range is [-r, r-1].
ends (List[int]) – The end indices for the slice along each axis. The accepted range is [-r, r-1], exclusive at the end index.
axes (List[int]) – The axes along which to slice the tensor. If not specified, slices all axes.
steps (List[int]) – The step size for each axis in the slice. Defaults to 1.
name (str) – Name of the node (optional).
- Returns:
A node containing the Slice operator.
- Return type:
SliceOp
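The starts/ends/steps convention above matches Python's own slicing rules, so the per-axis behavior can be sketched on a flat list (an illustration of the indexing semantics, not the aidge_core kernel):

```python
def slice1d(data, start, end, step=1):
    # starts/ends use the [-r, r-1] convention with an exclusive end
    # index -- exactly Python's built-in slicing rule.
    return data[start:end:step]

print(slice1d([0, 1, 2, 3, 4, 5], start=1, end=-1, step=2))  # [1, 3]
```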
-
std::shared_ptr<Node> Aidge::Slice(const std::vector<std::int64_t> &starts = {}, const std::vector<std::int64_t> &ends = {}, const std::vector<std::int8_t> &axes = {}, const std::vector<std::int64_t> &steps = {}, const std::string &name = "")#
Extract a sub-Tensor from a bigger original Tensor.
- Parameters:
starts – [in] Starting indices for the slice.
ends – [in] Ending indices (exclusive) for the slice.
axes – [in] Axes along which the slice operation is performed.
steps – [in] Step sizes for slicing along each axis.
name – [in] Name of the Operator.
- Returns:
A shared pointer to the Node containing the Slice operator.
SLSTM (meta op.)#
☑️ This is a meta-operator.
- aidge_core.SLSTM(in_channels: SupportsInt, hidden_channels: SupportsInt, seq_length: SupportsInt, nobias: bool = False, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an SLSTM (Spiking Long Short-Term Memory) operator.
The SLSTM operator is a variant of LSTM tailored to spiking or synaptic dynamics, featuring separate synaptic and membrane states while preserving gating mechanisms.
- Parameters:
in_channels (int) – The number of input features per time step.
hidden_channels (int) – The number of hidden units in the SLSTM.
seq_length (int) – The number of time steps in the input sequence.
nobias (bool) – If set to True, no bias terms are included in the SLSTM computation.
name (str) – Name of the node (optional).
- Returns:
A node containing the SLSTM operator.
- Return type:
-
std::shared_ptr<Node> Aidge::SLSTM(DimSize_t in_channels, DimSize_t hidden_channels, DimSize_t seq_length, bool noBias = false, const std::string &name = "")#
Creates an SLSTM (Synaptic LSTM) operator.
This function creates an SLSTM node that produces both a hidden (mem) and a synaptic (syn) state per time step.
Input expectations: The input tensor X_t is expected to be of shape (N, X), where N is the batch size and X is the input feature dimension. Weight producers must match these dimensions:
Input weights: (hidden_channels, in_channels)
Recurrent weights: (hidden_channels, hidden_channels)
Biases: (hidden_channels), if enabled
Outputs are ordered as: 0: mem_t (hidden state) 1: syn_t (synaptic state)
- Parameters:
in_channels – [in] The number of input channels (X).
hidden_channels – [in] The number of hidden channels.
seq_length – [in] The length of the input sequence.
noBias – [in] Whether to disable bias terms (default false).
name – [in] Optional name for the operation.
- Returns:
A shared pointer to the Node representing the SLSTM operation.
Softmax#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>SoftmaxOp</b>
Attributes:
<sub><em>axis</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Softmax(axis: SupportsInt, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Softmax operator that computes the softmax along the specified axis.
- Parameters:
axis (int) – Axis along which to compute the softmax. The accepted range is [-r, r-1], where r is the rank (number of dimensions) of the input tensor.
name (str) – Name of the node (optional).
- Returns:
A node containing the Softmax operator.
- Return type:
SoftmaxOp
-
std::shared_ptr<Node> Aidge::Softmax(std::int32_t axis, const std::string &name = "")#
Create a Softmax operation node.
- Parameters:
axis – [in] Axis along which the softmax operation is applied.
name – [in] Name of the operator (optional).
- Returns:
A shared pointer to the Node containing the Softmax operator.
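A pure-Python sketch of the softmax computation over one axis (here a 1-D list), using the standard max-subtraction trick for numerical stability; this illustrates the math, not the aidge_core backend:

```python
import math

def softmax(xs):
    # Subtracting the max does not change the result (softmax is
    # shift-invariant) but avoids overflow in exp().
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(probs)  # probabilities: non-negative and summing to 1
```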
Split#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>SplitOp</b>
Attributes:
<sub><em>axis</em></sub>
<sub><em>split</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[split]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output_0]:::text-only
Op -->|"Out[1]"| Out1[data_output_n]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Split(nb_outputs: SupportsInt, axis: SupportsInt = 0, split: collections.abc.Sequence[SupportsInt] = [], name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Split operator that splits a tensor along a specified axis.
The operator splits the input tensor along the specified axis. The number of splits is defined by nb_outputs, and the split argument specifies how to divide the input tensor. The axis argument defines the axis along which the split occurs.
- Parameters:
nb_outputs (int) – The number of splits (outputs) from the input tensor. Must be a positive integer.
axis (int) – The axis along which to perform the split. Must be in the range [-r, r-1], where r is the number of dimensions in the input tensor.
split (List[int]) – A list of integers indicating the size of each split along the specified axis. The sum of all values in the list must be equal to the size of the input tensor along the split axis.
name (str) – The name of the node (optional).
- Returns:
A node containing the Split operator.
- Return type:
SplitOp
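The size bookkeeping described above (the split sizes must sum to the input size along the split axis) can be sketched on a flat list; an illustration only, not the aidge_core implementation:

```python
def split_axis0(data, split):
    assert sum(split) == len(data), "split sizes must sum to the axis size"
    out, pos = [], 0
    for s in split:
        out.append(data[pos:pos + s])  # one output per requested size
        pos += s
    return out

print(split_axis0([1, 2, 3, 4, 5, 6], [2, 1, 3]))  # [[1, 2], [3], [4, 5, 6]]
```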
Sqrt#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>SqrtOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Sqrt(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Square Root operator that computes the element-wise square root of the input tensor. The input tensor values must be non-negative for the square root to be computed.
- Parameters:
name (str) – The name of the node (optional).
- Returns:
A node containing the Sqrt operator.
- Return type:
SqrtOp
Squeeze#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>SqueezeOp</b>
Attributes:
<sub><em>axes</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[axes_to_squeeze]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[squeezed]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Squeeze(axes: collections.abc.Sequence[SupportsInt] = [], name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a squeeze operator.
- Parameters:
axes (List[int]) – Axes to squeeze, in [-r, r-1] with r = input_tensor.nbDims(); values must fit in an int8 ([-128, 127]).
name (str) – name of the node.
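A shape-level sketch of the squeeze semantics: the listed size-1 dimensions are removed. The sketch assumes the ONNX-style default that an empty axes list squeezes every size-1 dimension; this illustrates the shape rule only, not the aidge_core implementation:

```python
def squeeze_shape(shape, axes=()):
    r = len(shape)
    # Normalize negative axes to [0, r-1]; with no axes given, squeeze
    # every size-1 dimension (assumed ONNX-style default).
    norm = {a + r if a < 0 else a for a in axes}
    if not norm:
        norm = {i for i, d in enumerate(shape) if d == 1}
    for a in norm:
        assert shape[a] == 1, "only size-1 dimensions can be squeezed"
    return [d for i, d in enumerate(shape) if i not in norm]

print(squeeze_shape([1, 3, 1, 5], axes=[0, -2]))  # [3, 5]
```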
Stack#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>StackOp</b>
Attributes:
<sub><em>forward_step</em></sub>
<sub><em>backward_step</em></sub>
<sub><em>max_elements</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[max_elements]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Stack(max_elements: SupportsInt = 0, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Stack operator.
- Parameters:
max_elements – the maximum number of tensors to be stacked.
name – name of the node.
-
std::shared_ptr<Node> Aidge::Stack(std::uint32_t maxElements = 0, const std::string &name = "")#
Create a Stack operator node.
- Parameters:
maxElements – The maximum number of elements to stack.
name – The optional name for the node.
- Returns:
A shared pointer to the newly created Stack operator node.
STFT#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>STFTOp</b>
Attributes:
<sub><em>frame_length</em></sub>
<sub><em>frame_step</em></sub>
<sub><em>onesided</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[frame_step]:::text-only -->|"In[1]"| Op
In2[window]:::text-only -->|"In[2]"| Op
In3[frame_length]:::text-only -->|"In[3]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.STFT(name: str = '', frame_length: SupportsInt = 0, frame_step: SupportsInt = 0, onesided: bool = False) aidge_core.aidge_core.Node#
Sub#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>SubOp</b>
"):::operator
In0[data_input_1]:::text-only -->|"In[0]"| Op
In1[data_input_2]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Sub(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Subtraction operator that performs element-wise subtraction between two tensors.
- The operation is defined as:
Output = Input1 - Input2
The output tensor shape is determined by taking the maximum size along each dimension of the input tensors after broadcasting.
- Examples:
Input A: (3, 4, 2), Input B: (2), Output: (3, 4, 2)
Input A: (1, 5, 3), Input B: (2, 1, 3), Output: (2, 5, 3)
- Parameters:
name (str, optional) – Name of the node, default=””
- Returns:
A node containing the Sub operator.
- Return type:
aidge_core.SubOp
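The output-shape rule quoted above (maximum size along each dimension after broadcasting) can be sketched as a shape computation, reproducing the docstring's examples; an illustration only, not the aidge_core implementation:

```python
def broadcast_shape(a, b):
    a, b = list(a), list(b)
    # Right-align the shapes by left-padding the shorter one with 1s.
    while len(a) < len(b):
        a.insert(0, 1)
    while len(b) < len(a):
        b.insert(0, 1)
    out = []
    for da, db in zip(a, b):
        assert da == db or 1 in (da, db), "shapes are not broadcastable"
        out.append(max(da, db))  # maximum size per dimension
    return out

print(broadcast_shape((3, 4, 2), (2,)))       # [3, 4, 2]
print(broadcast_shape((1, 5, 3), (2, 1, 3)))  # [2, 5, 3]
```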
Sum#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>SumOp</b>
"):::operator
In0[data_input_1]:::text-only -->|"In[0]"| Op
In1[data_input_n]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Sum(nb_inputs: SupportsInt, name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a sum operator that performs element-wise addition between multiple tensors.
The operation is defined as:
Output = Input1 + Input2 + ... + InputN
The output tensor shape is determined by taking the maximum size along each dimension of the input tensors after broadcasting.
Examples:
Input 1: (3, 4, 2), Input 2: (2), Output: (3, 4, 2)
Input 1: (1, 5, 3), Input 2: (2, 1, 3), Input 3: (2), Output: (2, 5, 3)
- Parameters:
nb_inputs (int) – number of inputs to sum.
name (str) – Name of the node (optional).
- Returns:
A node containing the Sum operator.
- Return type:
SumOp
Tanh#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>TanhOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Tanh(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Tanh operator that applies the tanh function element-wise.
- The tanh function is applied element-wise, and the operation is defined as:
tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x))
- Parameters:
name (str) – Name of the node (optional).
- Returns:
A node containing the Tanh operator.
- Return type:
TanhOp
Tan#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>TanOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Tan(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Tan operator.
- Parameters:
name (str) – Name of the node.
Tile#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>TileOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[repeats_input]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Tile(name: str = '') aidge_core.aidge_core.Node#
Initialize a Tile operator.
This operator repeats (tiles) the input tensor according to the repeats tensor, which specifies a duplication factor for each dimension of the input tensor.
The operation is defined as:
OutDim[0] = InDim[0] * repeats[0] ... OutDim[N] = InDim[N] * repeats[N]
The output tensor shape is therefore the input shape multiplied element-wise by the repeats.
- Parameters:
name (str, optional) – Name of the node, default=””
- Returns:
A node containing the Tile operator.
- Return type:
aidge_core.TileOp
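The shape rule above (OutDim[i] = InDim[i] * repeats[i]) as a one-line sketch; an illustration of the rule, not the aidge_core implementation:

```python
def tile_shape(in_dims, repeats):
    # One repeat factor per input dimension, per the rule above.
    assert len(in_dims) == len(repeats), "one repeat factor per dimension"
    return [d * r for d, r in zip(in_dims, repeats)]

print(tile_shape([2, 3], [3, 2]))  # [6, 6]
```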
TopK#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>TopKOp</b>
Attributes:
<sub><em>axis</em></sub>
<sub><em>largest</em></sub>
<sub><em>sorted</em></sub>
<sub><em>k</em></sub>
"):::operator
In0[x]:::text-only -->|"In[0]"| Op
In1[k]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[values]:::text-only
Op -->|"Out[1]"| Out1[indices]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.TopK(name: str = '') aidge_core.aidge_core.Node#
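A pure-Python sketch of the semantics suggested by the attributes above (k, largest, sorted): return the k top values along with their indices, in sorted order. This is an illustration only, not the aidge_core implementation:

```python
def topk(xs, k, largest=True):
    # Rank indices by value (descending when largest=True) and keep k.
    order = sorted(range(len(xs)), key=lambda i: xs[i], reverse=largest)[:k]
    return [xs[i] for i in order], order

values, indices = topk([3, 1, 4, 1, 5], k=2)
print(values, indices)  # [5, 4] [4, 2]
```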
Transpose#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>TransposeOp</b>
Attributes:
<sub><em>output_dims_order</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Transpose(output_dims_order: collections.abc.Sequence[SupportsInt] = [], name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a transpose operator.
- Parameters:
output_dims_order – axes permutation order, must be of rank = r and values between [0;r-1] with r = input_tensor.nbDims()
name (str) – name of the node.
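A shape-level sketch of the permutation described above: each output dimension i takes the size of input dimension output_dims_order[i]. An illustration of the rule, not the aidge_core implementation:

```python
def transpose_shape(shape, order):
    # order must be a permutation of [0, r-1], per the docstring above.
    assert sorted(order) == list(range(len(shape))), "not a permutation"
    return [shape[i] for i in order]

print(transpose_shape([2, 3, 5], [2, 0, 1]))  # [5, 2, 3]
```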
-
std::shared_ptr<Node> Aidge::Transpose(const std::vector<DimSize_t> &outputDimsOrder = {}, const std::string &name = "")#
Create a Transpose operation node.
- Parameters:
outputDimsOrder – [in] Axes permutation order (optional).
name – [in] Name of the operator (optional).
- Returns:
A shared pointer to the Node containing the Transpose operator.
Unfold2D#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>Unfold2DOp</b>
Attributes:
<sub><em>stride_dims</em></sub>
<sub><em>dilation_dims</em></sub>
<sub><em>kernel_dims</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Unfold2D(kernel_dims: collections.abc.Sequence[SupportsInt], name: str = '', stride_dims: collections.abc.Sequence[SupportsInt] = [1, 1], dilation_dims: collections.abc.Sequence[SupportsInt] = [1, 1]) aidge_core.aidge_core.Node#
Initialize a node containing an Unfold operator.
- Parameters:
kernel_dims (List[int]) – The dimensions of the unfold kernel (filter size).
name (str) – The name of the operator (optional).
stride_dims (List[int]) – The stride size for the unfold (default is [1, 1]).
dilation_dims (List[int]) – The dilation size for the unfold (default is [1, 1]).
- Returns:
A new Unfold operator node.
- Return type:
UnfoldOp
-
std::shared_ptr<Node> Aidge::Unfold2D(const std::array<DimSize_t, 2> &kernelDims, const std::string &name = "", const std::array<DimSize_t, 2> &strideDims = create_array<DimSize_t, 2>(1), const std::array<DimSize_t, 2> &dilationDims = create_array<DimSize_t, 2>(1))#
Create an Unfold operation node.
- Parameters:
kernelDims – [in] Size of the sliding window.
name – [in] Name of the operator (optional).
strideDims – [in] Step size for moving the window (optional).
dilationDims – [in] Spacing between elements in the kernel (optional).
- Returns:
A shared pointer to the Node containing the Unfold operator.
Unsqueeze#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>UnsqueezeOp</b>
Attributes:
<sub><em>axes</em></sub>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
In1[axes_to_unsqueeze]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[unsqueezed]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Unsqueeze(axes: collections.abc.Sequence[SupportsInt] = [], name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing an unsqueeze operator.
- Parameters:
axes – axes to unsqueeze between [-r;r-1] with r = input_tensor.nbDims() + len(axes)
name – name of the node.
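A shape-level sketch of the unsqueeze semantics: size-1 dimensions are inserted at the given axes, which index the output shape of rank r = input_tensor.nbDims() + len(axes), per the range stated above. An illustration only, not the aidge_core implementation:

```python
def unsqueeze_shape(shape, axes):
    # Axes index the OUTPUT shape, whose rank is nbDims + len(axes).
    r = len(shape) + len(axes)
    out = list(shape)
    for a in sorted(x + r if x < 0 else x for x in axes):
        out.insert(a, 1)  # insert a size-1 dimension at each axis
    return out

print(unsqueeze_shape([3, 5], axes=[0, -1]))  # [1, 3, 5, 1]
```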
WeightInterleaving#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>WeightInterleavingOp</b>
"):::operator
In0[data_input]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.WeightInterleaving(name: str = '') aidge_core.aidge_core.Node#
Where#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>WhereOp</b>
"):::operator
In0[cond_input]:::text-only -->|"In[0]"| Op
In1[data_input_1]:::text-only -->|"In[1]"| Op
In2[data_input_2]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[data_output]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_core.Where(name: str = '') aidge_core.aidge_core.Node#
Initialize a node containing a Where operator that performs element-wise where operation between two tensors.
- The operation is defined as:
Output = condition ? x : y
The output tensor shape is determined by taking the maximum size along each dimension of the input tensors after broadcasting.
- Examples:
Input condition: [[True, False],[False, True]], Input A: [[1, 2],[3, 4]], Input B: [[9, 8],[7, 6]], Output: [[1, 8],[7, 4]]
Input condition: [[True, False, True]], Input A: [[1, 7, 10]], Input B: [[2, 5, 12]], Output: [[1, 5, 10]]
- Parameters:
name (str, optional) – Name of the node, default=””
- Returns:
A node containing the Where operator.
- Return type:
aidge_core.WhereOp
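The element-wise rule above (Output = condition ? x : y) on flat lists, reproducing the docstring's second example; this sketch omits broadcasting and is not the aidge_core implementation:

```python
def where(cond, xs, ys):
    # Pick x where the condition holds, y otherwise, element-wise.
    return [x if c else y for c, x, y in zip(cond, xs, ys)]

print(where([True, False, True], [1, 7, 10], [2, 5, 12]))  # [1, 5, 10]
```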