Data#

DataType#

Data type is a property of an aidge_core.Tensor. Setting the data type of an operator actually sets the data type of its output tensors, except for outputs of fixed type (such as tensors containing indexes). By default, the type of the input tensors is not changed, except for inputs of category parameter (weights or biases, for example).

Automatic type conversion#

When changing the data type of an aidge_core.Tensor through aidge_core.Tensor.to_dtype(), the data is automatically converted to the new type, unless the copy_cast argument is False.
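The key point is that a copy-cast converts each stored value to the new type, possibly losing precision, rather than merely relabeling the buffer. A plain-Python sketch of the idea (not Aidge code, using the standard struct module to round a double to IEEE float32):

```python
import struct

def cast_f64_to_f32(values):
    # Copy-cast: each value is converted to the narrower type,
    # possibly losing precision. This mirrors what a dtype change
    # with copy_cast enabled does to the underlying data.
    return [struct.unpack("f", struct.pack("f", v))[0] for v in values]

print(cast_f64_to_f32([2.5]))     # 2.5 is exactly representable in float32
print(cast_f64_to_f32([0.1]))     # 0.1 is not: the float32 value differs slightly
```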

Supported data types#

The following data types are supported:

enum aidge_core.dtype(value)#

An enumeration.

Valid values are as follows:

float64 = <dtype.float64: 0>#
float32 = <dtype.float32: 1>#
float16 = <dtype.float16: 2>#
bfloat16 = <dtype.bfloat16: 3>#
complex64 = <dtype.complex64: 4>#
complex32 = <dtype.complex32: 5>#
boolean = <dtype.boolean: 6>#
binary = <dtype.binary: 7>#
octo_binary = <dtype.octo_binary: 8>#
ternary = <dtype.ternary: 9>#
int2 = <dtype.int2: 10>#
quad_int2 = <dtype.quad_int2: 11>#
uint2 = <dtype.uint2: 12>#
quad_uint2 = <dtype.quad_uint2: 13>#
int3 = <dtype.int3: 14>#
dual_int3 = <dtype.dual_int3: 15>#
uint3 = <dtype.uint3: 16>#
dual_uint3 = <dtype.dual_uint3: 17>#
int4 = <dtype.int4: 18>#
dual_int4 = <dtype.dual_int4: 19>#
uint4 = <dtype.uint4: 20>#
dual_uint4 = <dtype.dual_uint4: 21>#
int5 = <dtype.int5: 22>#
int6 = <dtype.int6: 23>#
int7 = <dtype.int7: 24>#
int8 = <dtype.int8: 25>#
int9 = <dtype.int9: 26>#
int10 = <dtype.int10: 27>#
int11 = <dtype.int11: 28>#
int12 = <dtype.int12: 29>#
int13 = <dtype.int13: 30>#
int14 = <dtype.int14: 31>#
int15 = <dtype.int15: 32>#
int16 = <dtype.int16: 33>#
int32 = <dtype.int32: 34>#
int64 = <dtype.int64: 35>#
uint5 = <dtype.uint5: 36>#
uint6 = <dtype.uint6: 37>#
uint7 = <dtype.uint7: 38>#
uint8 = <dtype.uint8: 39>#
uint9 = <dtype.uint9: 40>#
uint10 = <dtype.uint10: 41>#
uint11 = <dtype.uint11: 42>#
uint12 = <dtype.uint12: 43>#
uint13 = <dtype.uint13: 44>#
uint14 = <dtype.uint14: 45>#
uint15 = <dtype.uint15: 46>#
uint16 = <dtype.uint16: 47>#
uint32 = <dtype.uint32: 48>#
uint64 = <dtype.uint64: 49>#
string = <dtype.string: 50>#
any = <dtype.any: 51>#
any_1 = <dtype.any_1: 52>#
any_2 = <dtype.any_2: 53>#
any_3 = <dtype.any_3: 54>#
any_4 = <dtype.any_4: 55>#
any_5 = <dtype.any_5: 56>#
enum class Aidge::DataType#

Enumeration of data types supported by the framework.

Represents the various data types that can be used for computation and storage. This includes standard types (floating point, integers), quantized types, and specialized formats for neural network operations.

Values:

enumerator Float64#

64-bit floating point (double)

enumerator Float32#

32-bit floating point (float)

enumerator Float16#

16-bit floating point (half)

enumerator BFloat16#

16-bit brain floating point

enumerator Complex64#

64-bit complex floating point (std::complex<double>)

enumerator Complex32#

32-bit complex floating point (std::complex<float>)

enumerator Boolean#

Boolean type (true/false)

enumerator Binary#

1-bit binary values

enumerator Octo_Binary#

8x1-bit interleaved binary

enumerator Ternary#

Ternary values (-1,0,1)

enumerator Int2#

2-bit signed integer

enumerator Quad_Int2#

4x2-bit interleaved signed integer

enumerator UInt2#

2-bit unsigned integer

enumerator Quad_UInt2#

4x2-bit interleaved unsigned integer

enumerator Int3#

3-bit signed integer

enumerator Dual_Int3#

2x3-bit interleaved signed integer

enumerator UInt3#

3-bit unsigned integer

enumerator Dual_UInt3#

2x3-bit interleaved unsigned integer

enumerator Int4#

4-bit signed integer

enumerator Dual_Int4#

2x4-bit interleaved signed integer

enumerator UInt4#

4-bit unsigned integer

enumerator Dual_UInt4#

2x4-bit interleaved unsigned integer

enumerator Int5#

5-bit signed integer

enumerator Int6#

6-bit signed integer

enumerator Int7#

7-bit signed integer

enumerator Int8#

8-bit signed integer

enumerator Int9#

9-bit signed integer

enumerator Int10#

10-bit signed integer

enumerator Int11#

11-bit signed integer

enumerator Int12#

12-bit signed integer

enumerator Int13#

13-bit signed integer

enumerator Int14#

14-bit signed integer

enumerator Int15#

15-bit signed integer

enumerator Int16#

16-bit signed integer

enumerator Int32#

32-bit signed integer

enumerator Int64#

64-bit signed integer

enumerator UInt5#

5-bit unsigned integer

enumerator UInt6#

6-bit unsigned integer

enumerator UInt7#

7-bit unsigned integer

enumerator UInt8#

8-bit unsigned integer

enumerator UInt9#

9-bit unsigned integer

enumerator UInt10#

10-bit unsigned integer

enumerator UInt11#

11-bit unsigned integer

enumerator UInt12#

12-bit unsigned integer

enumerator UInt13#

13-bit unsigned integer

enumerator UInt14#

14-bit unsigned integer

enumerator UInt15#

15-bit unsigned integer

enumerator UInt16#

16-bit unsigned integer

enumerator UInt32#

32-bit unsigned integer

enumerator UInt64#

64-bit unsigned integer

enumerator String#
enumerator Any#

Unspecified type.

enumerator Any_1#

Unspecified type 1.

enumerator Any_2#

Unspecified type 2.

enumerator Any_3#

Unspecified type 3.

enumerator Any_4#

Unspecified type 4.

enumerator Any_5#

Unspecified type 5.
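The interleaved low-bit types above (Quad_Int2, Dual_Int3, Dual_Int4, Octo_Binary, and their unsigned variants) pack several sub-byte values into a single byte. The exact bit order used by Aidge is not specified here, so the layout below is an assumption; it only illustrates the general idea of a Quad_Int2-style packing:

```python
def pack_quad_int2(vals):
    # Pack four signed 2-bit integers (range -2..1) into one byte.
    # The least-significant-first bit order chosen here is an
    # assumption, not the documented Aidge layout.
    assert len(vals) == 4 and all(-2 <= v <= 1 for v in vals)
    byte = 0
    for i, v in enumerate(vals):
        byte |= (v & 0b11) << (2 * i)   # keep the low 2 bits of each value
    return byte

print(pack_quad_int2([1, -1, 0, -2]))   # four values stored in a single byte
```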

DataFormat#

By default, the data format is unspecified. In this case, operators that depend on a data format (such as Conv, MaxPool, etc.) implicitly assume the NCHW layout.

When an ONNX file is loaded, however, the data format is explicitly set to NCHW, as this is currently the only data format supported by ONNX.

Whether a data format is specified or left unspecified has important implications: methods and functions that manipulate data formats—such as aidge_core.Operator.set_dataformat() or aidge_core.adapt_to_backend()—will perform data format adaptations only when a data format has been explicitly specified.

The data format is a property of an aidge_core.Tensor. As with the data type, setting the data format of an operator actually sets the data format of its output tensors, when it is meaningful for the considered outputs. By default, the format of the input tensors is not changed, except for inputs of category parameter (weights or biases, for example) and inputs representing an attribute related to the dimensions.

Automatic format conversion#

When changing the data format of an aidge_core.Tensor through aidge_core.Tensor.to_dformat(), the data is automatically transposed to the new format, unless the copy_transpose argument is False.
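A format change with copy-transpose enabled physically reorders the underlying buffer. A plain-Python illustration of an NCHW-to-NHWC reorder on a flat buffer (not Aidge code):

```python
def nchw_to_nhwc(data, n, c, h, w):
    # Physically reorder a flat NCHW buffer into NHWC layout,
    # which is what a copy-transpose format change amounts to.
    out = [0] * (n * c * h * w)
    for ni in range(n):
        for ci in range(c):
            for hi in range(h):
                for wi in range(w):
                    src = ((ni * c + ci) * h + hi) * w + wi   # NCHW flat index
                    dst = ((ni * h + hi) * w + wi) * c + ci   # NHWC flat index
                    out[dst] = data[src]
    return out

# A 1x2x2x2 tensor: in NCHW the two channels occupy contiguous halves;
# in NHWC the channels of each pixel become adjacent.
print(nchw_to_nhwc(list(range(8)), 1, 2, 2, 2))
```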

A weak property#

The data format is a weak property of an aidge_core.Tensor, meaning that any format inconsistent with the dimensions of the aidge_core.Tensor is automatically discarded. For example, if the dimensions are [1, 2, 4] and one tries to set the format to NCHW, aidge_core.Tensor.to_dformat() will do nothing; the previous format is kept. This allows different related formats to be applied successively on a whole graph, such as NHWC followed by NWC, without losing the formats already set for other numbers of dimensions. Conversely, if one resizes the tensor to a number of dimensions different from the specified data format, the format is reset to default (unspecified).
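This rank-matching rule can be modeled in a few lines. The class below is a toy model of the weak-property behavior, not Aidge code:

```python
class FormatHolder:
    """Toy model of the 'weak' data-format property (not Aidge code)."""

    def __init__(self):
        self.dformat = "default"

    def to_dformat(self, dims, fmt):
        # Accept the format only when its rank matches the tensor rank;
        # otherwise silently keep whatever was set before.
        if len(fmt) == len(dims):
            self.dformat = fmt

t = FormatHolder()
t.to_dformat([1, 2, 4], "nchw")   # rank mismatch (4 vs 3): ignored
print(t.dformat)                  # still "default"
t.to_dformat([1, 2, 4], "nwc")    # rank 3 matches: accepted
print(t.dformat)                  # "nwc"
```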

Additional interpretations#

In some cases, the format can be relevant for tensors whose values are related to dimensions. For example, the aidge_core.ResizeOp and aidge_core.PadOp operators take a sizes or pads input tensor, respectively. For those inputs, the data format can be enforced even when the number of dimensions does not match the format, under certain conditions (these conditions are required for the values to be interpretable with a format):

  • There is a single dimension and the number of values equals the number of dimensions (like the sizes input of aidge_core.ResizeOp), or a multiple of the number of dimensions (like the pads input of aidge_core.PadOp, which has twice as many values as dimensions). The values are then assumed to represent a size in each dimension, in order, modulo the number of dimensions;

  • There is exactly one value (tensor of size 1 or scalar tensor), which is then assumed to represent the index of the dimension.

To enforce these additional data format interpretations of an aidge_core.Tensor, aidge_core.Tensor.to_dformat() must be called with the enforce argument set to True.
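The modulo interpretation described above can be sketched as follows. This is an illustration of the rule, not Aidge code:

```python
def interpret_with_format(values, ndims):
    # A 1-D tensor of values can carry a data format when its length
    # is a multiple of the number of dimensions; value i then refers
    # to dimension i % ndims.
    if len(values) % ndims != 0:
        raise ValueError("format cannot be enforced on this tensor")
    return [(i % ndims, v) for i, v in enumerate(values)]

# pads input of a 2-D Pad: twice as many values as dimensions,
# so each dimension gets two pad values (begin and end).
print(interpret_with_format([1, 1, 2, 2], 2))
```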

Limited interpretation#

Some operators have an axes input tensor, like aidge_core.PadOp, which also depends on the data format but can be of variable size and represents one or several dimension indexes. In this case, there are two issues with handling the format at the tensor level:

  • The total number of dimensions is unknown (but could be derived from the specified format);

  • More critically, there is no way to know whether the values represent sizes or indexes related to the dimensions (this would require additional information at the tensor level, such as a flag indicating whether the tensor represents indexes or not).

Currently, in these cases, the data format change must be handled at the operator level, in the aidge_core.Operator.set_dataformat() method, and the related aidge_core.Tensor keeps a default (unspecified) format.

Supported data formats#

The following data formats are supported:

enum aidge_core.dformat(value)#

Memory layouts representation for multi-dimensional tensors.

Valid values are as follows:

default = <dformat.default: 0>#
nc = <dformat.nc: 1>#
chw = <dformat.chw: 2>#
hwc = <dformat.hwc: 3>#
ncw = <dformat.ncw: 4>#
nwc = <dformat.nwc: 5>#
nchw = <dformat.nchw: 6>#
nhwc = <dformat.nhwc: 7>#
chwn = <dformat.chwn: 8>#
ncdhw = <dformat.ncdhw: 9>#
ndhwc = <dformat.ndhwc: 10>#
cdhwn = <dformat.cdhwn: 11>#
any = <dformat.any: 12>#
any_1 = <dformat.any_1: 13>#
any_2 = <dformat.any_2: 14>#
any_3 = <dformat.any_3: 15>#
any_4 = <dformat.any_4: 16>#
any_5 = <dformat.any_5: 17>#
enum class Aidge::DataFormat#

Enumeration of supported tensor data layouts.

Represents different memory layouts for multi-dimensional tensors. The dimensions typically represent:

  • N: Batch

  • C: Channels

  • D: Depth

  • H: Height

  • W: Width

The enum values are generated via an X-macro; they mirror the Python aidge_core.dformat values listed above (Default, NC, CHW, HWC, NCW, NWC, NCHW, NHWC, CHWN, NCDHW, NDHWC, CDHWN, Any, Any_1 to Any_5).

Tensor#

class aidge_core.Tensor#
__init__()#
abs(self: aidge_core.aidge_core.Tensor) aidge_core.aidge_core.Tensor#
as_coord(self: aidge_core.aidge_core.Tensor, flatIdx: SupportsInt) list[int]#
as_idx(self: aidge_core.aidge_core.Tensor, coords: collections.abc.Sequence[SupportsInt]) int#
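as_coord() and as_idx() convert between multi-dimensional coordinates and a flat index. Assuming a contiguous row-major layout, the mapping can be sketched in plain Python (a sketch, not the actual binding):

```python
def as_idx(coords, dims):
    # Row-major flat index from coordinates (Horner's scheme).
    idx = 0
    for c, d in zip(coords, dims):
        idx = idx * d + c
    return idx

def as_coord(flat_idx, dims):
    # Coordinates from a row-major flat index: peel off the
    # fastest-varying (last) dimension first, then reverse.
    coords = []
    for d in reversed(dims):
        coords.append(flat_idx % d)
        flat_idx //= d
    return coords[::-1]

dims = [2, 3, 4]
print(as_idx([1, 2, 3], dims))   # 23, the last element of a 2x3x4 tensor
print(as_coord(23, dims))        # [1, 2, 3], the inverse mapping
```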
static available_backends() set[str]#
clip(self: aidge_core.aidge_core.Tensor, min: SupportsFloat = -3.4028234663852886e+38, max: SupportsFloat = 3.4028234663852886e+38) aidge_core.aidge_core.Tensor#
clone(self: aidge_core.aidge_core.Tensor) aidge_core.aidge_core.Tensor#
cpy_transpose(self: aidge_core.aidge_core.Tensor, src: aidge_core.aidge_core.Tensor, transpose: collections.abc.Sequence[SupportsInt]) None#
exp(self: aidge_core.aidge_core.Tensor) aidge_core.aidge_core.Tensor#
extract(*args, **kwargs)#

Overloaded function.

  1. extract(self: aidge_core.aidge_core.Tensor, coords: collections.abc.Sequence[typing.SupportsInt]) -> aidge_core.aidge_core.Tensor

Returns a sub-tensor with equal or lower number of dimensions. For instance, t.extract([1]) on a CHW tensor will return the HW tensor of channel #1.

  1. extract(self: aidge_core.aidge_core.Tensor, start_coords: collections.abc.Sequence[typing.SupportsInt], dims: collections.abc.Sequence[typing.SupportsInt]) -> aidge_core.aidge_core.Tensor

Returns a sub-tensor at some coordinate and with some dimension.

static get_available_backends() set[str]#
get_batch_dim_idx(self: aidge_core.aidge_core.Tensor) int#

Get the index for the batch dimension.

get_batch_dim_size(self: aidge_core.aidge_core.Tensor) int#

Get the batch dimension size.

get_channel_dim_idx(self: aidge_core.aidge_core.Tensor) int#

Get the index for the channel dimension.

get_channel_dim_size(self: aidge_core.aidge_core.Tensor) int#

Get the channel dimension size.

get_coord(self: aidge_core.aidge_core.Tensor, flatIdx: SupportsInt) list[int]#
get_depth_dim_idx(self: aidge_core.aidge_core.Tensor) int#

Get the index for the depth dimension.

get_depth_dim_size(self: aidge_core.aidge_core.Tensor) int#

Get the depth dimension size.

get_height_dim_idx(self: aidge_core.aidge_core.Tensor) int#

Get the index for the height dimension.

get_height_dim_size(self: aidge_core.aidge_core.Tensor) int#

Get the height dimension size.

get_idx(self: aidge_core.aidge_core.Tensor, coords: collections.abc.Sequence[SupportsInt]) int#
get_width_dim_idx(self: aidge_core.aidge_core.Tensor) int#

Get the index for the width dimension.

get_width_dim_size(self: aidge_core.aidge_core.Tensor) int#

Get the width dimension size.
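The get_*_dim_idx() accessors resolve an axis letter against the tensor's data format, returning -1 when the axis is absent. A toy lookup capturing that behavior (an illustration, not the actual implementation):

```python
def dim_idx(dformat, axis_letter):
    # Position of an axis letter within a format string; str.find
    # conveniently returns -1 when the axis is not part of the format,
    # matching the documented "not found" convention.
    return dformat.find(axis_letter)

print(dim_idx("nchw", "c"))   # channel axis of an NCHW tensor
print(dim_idx("nwc", "h"))    # no height axis in NWC: -1
```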

has_impl(self: aidge_core.aidge_core.Tensor) bool#
is_undefined(self: aidge_core.aidge_core.Tensor) bool#
ln(self: aidge_core.aidge_core.Tensor) aidge_core.aidge_core.Tensor#
mean(self: aidge_core.aidge_core.Tensor) aidge_core.aidge_core.Tensor#
ref_from(self: aidge_core.aidge_core.Tensor, fallback: aidge_core.aidge_core.Tensor, backend: str, device: SupportsInt = 0) aidge_core.aidge_core.Tensor#
resize(self: aidge_core.aidge_core.Tensor, dims: collections.abc.Sequence[SupportsInt], strides: collections.abc.Sequence[SupportsInt] = []) None#
set_backend(self: aidge_core.aidge_core.Tensor, name: str, device: SupportsInt = 0, copyFrom: bool = True) None#
set_data_format(self: aidge_core.aidge_core.Tensor, data_format: aidge_core.aidge_core.dformat, copyTrans: bool = True, enforce: bool = False) None#
set_datatype(self: aidge_core.aidge_core.Tensor, datatype: aidge_core.aidge_core.dtype, copyCast: bool = True) None#
sqrt(self: aidge_core.aidge_core.Tensor) aidge_core.aidge_core.Tensor#
stride(self: aidge_core.aidge_core.Tensor, idx: SupportsInt) int#
to_backend(self: aidge_core.aidge_core.Tensor, name: str, device: SupportsInt = 0, copyFrom: bool = True) None#
to_dformat(self: aidge_core.aidge_core.Tensor, dformat_: aidge_core.aidge_core.dformat, copy_and_transpose_: bool = True, enforce: bool = False) None#
to_dtype(self: aidge_core.aidge_core.Tensor, dtype_: aidge_core.aidge_core.dtype, copy_and_cast_: bool = True) None#
undefined(self: aidge_core.aidge_core.Tensor) bool#
zeros(self: aidge_core.aidge_core.Tensor) None#
class Tensor : public Aidge::Data, public Aidge::Registrable<Tensor, std::tuple<std::string, DataType>, std::function<std::shared_ptr<TensorImpl>(DeviceIdx_t device, std::vector<DimSize_t> dims)>>#

Description for the tensor data structure.

Sets the properties of the tensor without actually containing any data. Contains a pointer to an actual contiguous implementation of data.

Public Functions

explicit Tensor(DataType dtype = DataType::Float32, DataFormat dformat = DataFormat::Default)#

Construct a new empty Tensor object. It is considered undefined, i.e. dims can’t be forwarded from such a Tensor. See the isUndefined() method for details.

template<typename T, typename VT = std::enable_if_t<std::is_arithmetic<T>::value, std::decay_t<T>>>
inline explicit Tensor(T val)#

Construct a new Tensor object from an arithmetic parameter.

Template Parameters:
  • T – Type of the input parameter.

  • VT – Decayed type of the input parameter.

Parameters:

val – Input value.

inline explicit Tensor(half_float::half val)#
inline explicit Tensor(bool val)#

Construct a new Tensor object from a boolean scalar.

Parameters:

val – Input value.

explicit Tensor(std::vector<DimSize_t> dims)#

Construct a new Tensor object from dimensions.

Parameters:

dims – dimensions of the tensor

template<typename T>
inline explicit Tensor(const Vector<T> &arr)#

Construct a new Tensor object from the 1-dimension Vector helper.

Template Parameters:

T – datatype

template<typename T, std::size_t SIZE_0>
inline explicit constexpr Tensor(const Array1D<T, SIZE_0> &arr)#

Construct a new Tensor object from the 1-dimension Array helper.

Template Parameters:
  • T – datatype

  • SIZE_0 – first array dimension.

template<typename T, std::size_t SIZE_0, std::size_t SIZE_1>
inline explicit constexpr Tensor(const Array2D<T, SIZE_0, SIZE_1> &arr)#

Construct a new Tensor object from the 2-dimensions Array helper.

Template Parameters:
  • T – datatype

  • SIZE_0 – first array dimension.

  • SIZE_1 – second array dimension.

template<typename T, std::size_t SIZE_0, std::size_t SIZE_1, std::size_t SIZE_2>
inline explicit constexpr Tensor(const Array3D<T, SIZE_0, SIZE_1, SIZE_2> &arr)#

Construct a new Tensor object from the 3-dimensions Array helper.

Template Parameters:
  • T – datatype

  • SIZE_0 – first array dimension.

  • SIZE_1 – second array dimension.

  • SIZE_2 – third array dimension.

template<typename T, std::size_t SIZE_0, std::size_t SIZE_1, std::size_t SIZE_2, std::size_t SIZE_3>
inline explicit constexpr Tensor(const Array4D<T, SIZE_0, SIZE_1, SIZE_2, SIZE_3> &arr)#

Construct a new Tensor object from the 4-dimensions Array helper.

Template Parameters:
  • T – datatype

  • SIZE_0 – first array dimension.

  • SIZE_1 – second array dimension.

  • SIZE_2 – third array dimension.

  • SIZE_3 – fourth array dimension.

template<typename T, std::size_t SIZE_0, std::size_t SIZE_1, std::size_t SIZE_2, std::size_t SIZE_3, std::size_t SIZE_4>
inline explicit constexpr Tensor(const Array5D<T, SIZE_0, SIZE_1, SIZE_2, SIZE_3, SIZE_4> &arr)#

Construct a new Tensor object from the 5-dimensions Array helper.

Template Parameters:
  • T – datatype

  • SIZE_0 – first array dimension.

  • SIZE_1 – second array dimension.

  • SIZE_2 – third array dimension.

  • SIZE_3 – fourth array dimension.

  • SIZE_4 – fifth array dimension.

Tensor(const Tensor &other)#

Copy constructor. Construct a new Tensor object from another one (shallow copy). Data memory is not copied, but shared between the new Tensor and the initial one.

Parameters:

other

Tensor(Tensor &&other) noexcept#

Move constructor.

Parameters:

other

Tensor &operator=(const Tensor &other)#

Copy dimensions, datatype and data from another Tensor. Tensor backend/device are also copied and only a shallow copy is performed for data. Implementation will be shared with original Tensor.

Parameters:

other – other Tensor object.

Returns:

Tensor&

Tensor &operator=(Tensor &&other) noexcept#
template<typename T>
inline constexpr Tensor &operator=(Vector<T> &&arr)#
template<typename T, std::size_t SIZE_0>
inline constexpr Tensor &operator=(Array1D<T, SIZE_0> &&arr)#
template<typename T, std::size_t SIZE_0, std::size_t SIZE_1>
inline constexpr Tensor &operator=(Array2D<T, SIZE_0, SIZE_1> &&arr)#
template<typename T, std::size_t SIZE_0, std::size_t SIZE_1, std::size_t SIZE_2>
inline constexpr Tensor &operator=(Array3D<T, SIZE_0, SIZE_1, SIZE_2> &&arr)#
template<typename T, std::size_t SIZE_0, std::size_t SIZE_1, std::size_t SIZE_2, std::size_t SIZE_3>
inline constexpr Tensor &operator=(Array4D<T, SIZE_0, SIZE_1, SIZE_2, SIZE_3> &&arr)#
bool operator==(const Tensor &otherTensor) const#

Assess whether data type, dimensions, backend and data are the same.

Parameters:

otherTensor

Tensor operator+(const Tensor &other) const#

Element-wise addition operation for two Tensors.

Todo:

If input Tensors have a different dataType, the output should have the dataType of the Tensor with the highest precision.

Note

Tensors should be stored on the same backend.

Parameters:

other

Returns:

Tensor

template<typename T, typename VT = std::enable_if_t<std::is_arithmetic<T>::value, std::decay_t<T>>>
inline Tensor operator+(T val) const#
Tensor &operator+=(const Tensor &other)#
template<typename T, typename VT = std::enable_if_t<std::is_arithmetic<T>::value, std::decay_t<T>>>
inline Tensor &operator+=(T val)#
Tensor operator-(const Tensor &other) const#

Element-wise subtraction operation for two Tensors.

Todo:

If input Tensors have a different dataType, the output should have the dataType of the Tensor with the highest precision.

Note

Tensors should be stored on the same backend.

Parameters:

other

Returns:

Tensor

template<typename T, typename VT = std::enable_if_t<std::is_arithmetic<T>::value, std::decay_t<T>>>
inline Tensor operator-(T val) const#
Tensor &operator-=(const Tensor &other)#
template<typename T, typename VT = std::enable_if_t<std::is_arithmetic<T>::value, std::decay_t<T>>>
inline Tensor &operator-=(T val)#
Tensor operator*(const Tensor &other) const#

Element-wise multiplication operation for two Tensors.

Todo:

If input Tensors have a different dataType, the output should have the dataType of the Tensor with the highest precision.

Note

Tensors should be stored on the same backend.

Parameters:

other

Returns:

Tensor

template<typename T, typename VT = std::enable_if_t<std::is_arithmetic<T>::value, std::decay_t<T>>>
inline Tensor operator*(T val) const#
Tensor &operator*=(const Tensor &other)#
template<typename T, typename VT = std::enable_if_t<std::is_arithmetic<T>::value, std::decay_t<T>>>
inline Tensor &operator*=(T val)#
Tensor operator/(const Tensor &other) const#

Element-wise division operation for two Tensors.

Todo:

If input Tensors have a different dataType, the output should have the dataType of the Tensor with the highest precision.

Note

Tensors should be stored on the same backend.

Parameters:

other

Returns:

Tensor

template<typename T, typename VT = std::enable_if_t<std::is_arithmetic<T>::value, std::decay_t<T>>>
inline Tensor operator/(T val) const#
Tensor &operator/=(const Tensor &other)#
template<typename T, typename VT = std::enable_if_t<std::is_arithmetic<T>::value, std::decay_t<T>>>
inline Tensor &operator/=(T val)#
Tensor sqrt() const#

Element-wise sqrt operation for Tensor.

Returns:

Tensor

Tensor ln() const#

Element-wise ln operation for Tensor.

Returns:

Tensor

Tensor exp() const#

Element-wise exp operation for Tensor.

Returns:

Tensor

Tensor abs() const#

Element-wise abs operation for Tensor.

Returns:

Tensor

Tensor mean() const#

Mean operation for Tensor.

Returns:

Tensor

Tensor clip(float min = std::numeric_limits<float>::lowest(), float max = std::numeric_limits<float>::max()) const#

Element-wise clip operation for Tensor.

Returns:

Tensor

~Tensor() override#
inline Tensor clone() const#

Perform a deep copy of the tensor.

inline virtual std::string type() const override#
inline const std::string &backend() const noexcept#
inline DeviceIdx_t device() const noexcept#

Get the device index.

Returns:

DeviceIdx_t

inline void setBackend(const std::string &name, DeviceIdx_t device = 0, bool copyFrom = true)#

Set the backend of the Tensor associated implementation. If there was no previous implementation set, data will be allocated, but it will not be initialized to any particular value. If data was already initialized in a previous backend, it will be moved to the new one except if copyFrom is false.

Parameters:
  • name – Backend name

  • device – Backend device

  • copyFrom – If true (default), move data from previous backend/device to the new one. Previous data is lost otherwise.

void toBackend(const std::string &name, DeviceIdx_t device = 0, bool copyFrom = true)#

Set the backend of the Tensor associated implementation. If there was no previous implementation set, data will be allocated, but it will not be initialized to any particular value. If data was already initialized in a previous backend, it will be moved to the new one except if copyFrom is false.

Parameters:
  • name – Backend name

  • device – Backend device

  • copyFrom – If true (default), move data from previous backend/device to the new one. Previous data is lost otherwise.

inline DataType dataType() const noexcept#

Get the data type enum (deprecated).

Deprecated:

Use dtype() instead.

Returns:

DataType

inline DataType dtype() const noexcept#

Get the data type enum.

Returns:

DataType

inline DataFormat dataFormat() const noexcept#

Get the data format enum.

Deprecated:

Use dformat() instead.

Returns:

DataFormat

inline DataFormat dformat() const noexcept#

Get the data format enum.

Returns:

DataFormat

int getBatchDimIdx() const noexcept#

Get the index to retrieve the batch dimension for the dims or the data according to its DataFormat.

Returns:

int - if idx is equal to -1, then the index was not found.

int getChannelDimIdx() const noexcept#

Get the index to retrieve the channel dimension for the dims or the data according to its DataFormat.

Returns:

int - if idx is equal to -1, then the index was not found.

int getDepthDimIdx() const noexcept#

Get the index to retrieve the depth dimension for the dims or the data according to its DataFormat.

Returns:

int - if idx is equal to -1, then the index was not found.

int getHeightDimIdx() const noexcept#

Get the index to retrieve the height dimension for the dims or the data according to its DataFormat.

Returns:

int - if idx is equal to -1, then the index was not found.

int getWidthDimIdx() const noexcept#

Get the index to retrieve the width dimension for the dims or the data according to its DataFormat.

Returns:

int - if idx is equal to -1, then the index was not found.

DimSize_t getDimensionSize(const int idx) const#

Get the size of a dimension, e.g.: tensor.getDimensionSize(tensor.getBatchDimIdx()).

Returns:

Aidge::DimSize_t - By default, if dimension is not supported, 1 is returned.

inline DimSize_t getBatchDimSize() const#

Get the size of the batch dimension.

Returns:

constexpr Aidge::DimSize_t - By default, if dimension is not supported, 1 is returned.

inline DimSize_t getChannelDimSize() const#

Get the size of the channel dimension.

Returns:

constexpr Aidge::DimSize_t - By default, if dimension is not supported, 1 is returned.

inline DimSize_t getDepthDimSize() const#

Get the size of the depth dimension.

Returns:

constexpr Aidge::DimSize_t - By default, if dimension is not supported, 1 is returned.

inline DimSize_t getHeightDimSize() const#

Get the size of the height dimension.

Returns:

constexpr Aidge::DimSize_t - By default, if dimension is not supported, 1 is returned.

inline DimSize_t getWidthDimSize() const#

Get the size of the width dimension.

Returns:

constexpr Aidge::DimSize_t - By default, if dimension is not supported, 1 is returned.

inline void setDataType(DataType dt, bool copyCast = true)#

Set the DataType of the Tensor and converts data if the Tensor has already been initialized and copyCast is true.

Parameters:
  • dt – DataType

  • copyCast – If true (default), previous data is copy-casted. Otherwise previous data is lost.

inline void toDtype(DataType dt, bool copyCast = true)#

Set the DataType of the Tensor and converts data if the Tensor has already been initialized and copyCast is true.

Parameters:
  • dt – DataType

  • copyCast – If true (default), previous data is copy-casted. Otherwise previous data is lost.

inline void setDataFormat(DataFormat df, bool copyTrans = true, bool enforce = false)#

Set the DataFormat of the Tensor and transpose data, only if the Tensor has already been initialized and copyTrans is true. In this case, a transposition occurs only if both previous format and new format are different from DataFormat::Default.

Parameters:
  • df – New DataFormat

  • copyTrans – If true (default), when both previous format and new format are different from DataFormat::Default, previous data is copy-transposed.

  • enforce – If true, throws an error if the DataFormat does not match the number of dimensions of the Tensor or if it cannot be interpreted as a 1D list or 0D index.

void toDformat(DataFormat df, bool copyTrans = true, bool enforce = false)#

Set the data format of the Tensor and transpose data, only if the Tensor has already been initialized and copyTrans is true. In this case, a transposition occurs only if both previous format and new format are different from DataFormat::Default.

Parameters:
  • df – New DataFormat

  • copyTrans – If true (default), when both previous format and new format are different from DataFormat::Default, previous data is copy-transposed.

  • enforce – If true, throws an error if the DataFormat does not match the number of dimensions of the Tensor or if it cannot be interpreted as a 1D list or 0D index.

inline const std::shared_ptr<TensorImpl> &getImpl() const noexcept#

Get the Impl object (deprecated).

Deprecated:

Use impl() instead.

Returns:

const std::shared_ptr<TensorImpl>&

inline const std::shared_ptr<TensorImpl> &impl() const noexcept#

Get the Impl object.

Returns:

const std::shared_ptr<TensorImpl>&

inline std::size_t getImplOffset() const noexcept#

Get the Impl offset (deprecated).

Deprecated:

Use implOffset() instead.

Returns:

Offset within the Impl

inline std::size_t implOffset() const noexcept#

Get the Impl offset.

Returns:

Offset within the Impl

inline void setImpl(std::shared_ptr<TensorImpl> impl, std::size_t implOffset = 0)#

Set the Impl object.

Parameters:
  • impl – New impl shared pointer

  • implOffset – Storage offset in this new impl for this Tensor

inline bool hasImpl() const noexcept#

Return if an implementation has been associated.

Deprecated:

Will be removed in future versions.

Returns:

true

Returns:

false

inline std::size_t nbDims() const#

Get number of dimensions of the Tensor.

Returns:

std::size_t

template<DimIdx_t DIM>
inline constexpr std::array<DimSize_t, DIM> dims() const#

Get dimensions of the Tensor object.

Template Parameters:

DIM – number of dimensions.

Returns:

constexpr std::array<DimSize_t, DIM>

inline const std::vector<DimSize_t> &dims() const noexcept#

Get dimensions of the Tensor object.

Returns:

constexpr const std::vector<DimSize_t>&

inline DimSize_t dim(DimIdx_t idx) const#

Get the size of the given dimension (deprecated).

Deprecated:

Use dims()[idx] instead.

Parameters:

idx – Dimension index

Returns:

DimSize_t

inline const std::vector<DimSize_t> &strides() const noexcept#

Get strides of the Tensor object.

Returns:

constexpr const std::vector<DimSize_t>&

inline DimSize_t stride(DimIdx_t idx) const#

Get the stride of the given dimension (deprecated).

Deprecated:

Use strides()[idx] instead.

Parameters:

idx – Dimension index

Returns:

DimSize_t

inline constexpr bool isContiguous() const noexcept#

Return true if Tensor is contiguous in memory.

Returns:

bool

inline constexpr std::size_t size() const noexcept#

Get the number of elements in the Tensor object.

Returns:

constexpr std::size_t

inline std::size_t capacity() const noexcept#

Return the current capacity of the tensor, i.e. the actual memory currently being allocated. It can be different from the size:

  • Capacity conservatively returns 0 if no implementation is provided.

  • Capacity can be 0 if the tensor memory was not yet initialized (because of lazy initialization, memory is allocated only when it needs to be accessed the first time).

  • Capacity can be > size if the tensor was downsized but memory was not reallocated.
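The size/capacity distinction (the third bullet in particular) follows the usual grow-reallocates, shrink-keeps-allocation pattern. A toy model of that behavior, not Aidge code:

```python
class Buf:
    """Toy model of the size vs. capacity distinction (not Aidge code)."""

    def __init__(self):
        self.size = 0
        self.capacity = 0

    def resize(self, n):
        self.size = n
        if n > self.capacity:
            # Growing: a reallocation is needed, capacity follows size.
            self.capacity = n
        # Shrinking: the existing allocation is kept, so capacity > size.

b = Buf()
b.resize(100)
b.resize(10)
print(b.size, b.capacity)   # size shrank, allocation was kept
```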

template<std::array<DimSize_t, 1>::size_type DIM>
inline void resize(const std::array<DimSize_t, DIM> &dims)#

Change the dimensions of the Tensor object according to the given argument. If the overall size is not changed (meaning we actually only performed a reshape), data is guaranteed to remain valid. Otherwise, no guarantee is provided regarding the validity of previous data (unlike std::vector). If the new overall size is larger than the previous one, all previous data is invalidated. Otherwise, previous data may or may not remain valid, depending on the backend implementation.

Template Parameters:

DIM – Number of dimensions.

Parameters:

dims – New dimensions

void resize(const std::vector<DimSize_t> &dims, std::vector<DimSize_t> strides = std::vector<DimSize_t>())#

Change the dimensions of the Tensor object according to the given argument. If the overall size is not changed (meaning we actually only performed a reshape), data is guaranteed to remain valid. Otherwise, no guarantee is provided regarding the validity of previous data (unlike std::vector). If the new overall size is larger than the previous one, all previous data is invalidated. Otherwise, previous data may or may not remain valid, depending on the backend implementation.

Parameters:
  • dims – New dimensions

  • strides – Stride of the tensor (if not specified, “nested” stride is used)

inline bool empty() const#

Return whether the Tensor object has a rank of 0, i.e. dimensions == {}. For defined Tensors, this implies that the Tensor is scalar. For backward compatibility reasons, it is valid to call this predicate even on undefined Tensors, in which case it returns true. Hence, before testing the rank with this method, always check that the Tensor is not undefined(). In particular, for operations such as forwardDims(), one should always use undefined() to test whether the Tensor dimensions have been defined. In this case, empty() can be used to distinguish scalars from N-D Tensors.

Deprecated:

Use isEmpty() instead.

Returns:

true if rank is 0 or the tensor is undefined

inline bool isEmpty() const noexcept#

Return whether the Tensor object has a rank of 0, i.e. dimensions == {}.

Returns:

true if rank is 0 or the tensor is undefined

inline bool undefined() const#

Returns whether the Tensor object is undefined. An undefined Tensor is equivalent to a tensor for which dimensions have not been defined yet. Hence, dimensions forwarding can’t be done from undefined tensors. The only case where a tensor is undefined is after the default constructor and before any call to resize(). Also, as soon as the resize() method has been called, the Tensor is irreversibly defined. See the isEmpty() method for distinguishing an undefined Tensor from a scalar.

Deprecated:

Use isUndefined() instead.

Returns:

true if undefined

inline bool isUndefined() const#

Returns whether the Tensor object is undefined.

See above for detailed behavior.

Returns:

true if undefined

inline void zeros() const#

Set each element of the tensor to zero.

template<typename ExpectedType>
inline const ExpectedType &get(std::size_t idx) const#
template<typename ExpectedType>
inline const ExpectedType &get(std::vector<std::size_t> coordIdx) const#
template<typename ExpectedType>
inline void set(std::size_t idx, ExpectedType value)#
template<typename ExpectedType>
inline void set(std::vector<std::size_t> coordIdx, ExpectedType value)#
virtual std::string toString(int precision = -1, std::size_t offset = 0) const override#
inline void print() const#
inline bool hasGrad() const noexcept#
inline std::shared_ptr<Tensor> grad()#

Get the gradient Tensor. If it is not initialized, a Tensor instance is associated and its implementation is set if none was previously set.

Note

Dimensions for the Tensor instance are copied from the original current Tensor.

Note

If a Tensor instance was already associated, only the implementation is created with values set to 0.

Note

If Tensor instance and implementation already existed for the gradient nothing is done.

inline void setGrad(std::shared_ptr<Tensor> newGrad)#
inline std::vector<std::size_t> getCoord(std::size_t index) const#

From the 1D contiguous index, return the coordinates of an element in the tensor. Beware: do not use this function with the storage index!

Parameters:

index – 1D contiguous index of the value, considering a flattened, contiguous tensor.

Returns:

std::vector<DimSize_t>

inline std::vector<std::size_t> asCoord(std::size_t index) const#

From the 1D contiguous index, return the coordinates of an element in the tensor. Beware: do not use this function with the storage index!

Parameters:

index – 1D contiguous index of the value, considering a flattened, contiguous tensor.

Returns:

std::vector<DimSize_t>

inline std::size_t getIdx(const std::vector<std::size_t> &coords) const#

From the coordinates, returns the 1D contiguous index of an element in the tensor. If the number of coordinates is less than the number of dimensions, the remaining coordinates are assumed to be 0. Beware: the contiguous index only corresponds to the storage index if the tensor is contiguous! Note that coords may be an empty vector.

Parameters:

coords – Coordinates of an element in the tensor

Returns:

DimSize_t Contiguous index

inline std::size_t asIdx(const std::vector<std::size_t> &coords) const#

From the coordinates, returns the 1D contiguous index of an element in the tensor. If the number of coordinates is less than the number of dimensions, the remaining coordinates are assumed to be 0. Beware: the contiguous index only corresponds to the storage index if the tensor is contiguous! Note that coords may be an empty vector.

Parameters:

coords – Coordinates of an element in the tensor

Returns:

DimSize_t Contiguous index

inline std::size_t getStorageIdx(const std::vector<std::size_t> &coordIdx) const#

From the coordinates, returns the 1D storage index of an element in the tensor. If the number of coordinates is less than the number of dimensions, the remaining coordinates are assumed to be 0.

Parameters:

coordIdx – Coordinate to an element in the tensor

Returns:

DimSize_t Storage index

std::size_t storageIdx(const std::vector<std::size_t> &coordinates) const#

Computes the linear (1D) storage index corresponding to a multi-dimensional coordinate.

Given a tensor with a known shape and stride configuration, this function maps a multi-dimensional index to its corresponding flat memory index using the row-major ordering defined by the stride vector.

If the provided coordinate vector contains fewer entries than the tensor’s rank, the unspecified dimensions are implicitly assumed to be zero.

Parameters:

coordinates – A vector representing the coordinate in the multi-dimensional tensor space.

Throws:

AssertionError – if the coordinate dimensionality exceeds the tensor’s rank or if any coordinate component exceeds its respective dimension size.

Returns:

std::size_t The corresponding linear storage index in the underlying memory layout.

Tensor extract(const std::vector<std::size_t> &coordIdx) const#

Returns a sub-tensor with equal or lower number of dimensions.

Note

For instance, t.extract({1}) on a CHW tensor will return the HW tensor of channel #1. Likewise, t.extract({0, 1}) on a NCHW tensor will return the HW tensor of batch #0 and channel #1.

Note

No memory copy is performed, the returned tensor does not own the memory.

Note

If the number of coordinates matches the number of dimensions, a scalar tensor is returned.

Note

If current tensor was contiguous, the returned tensor is guaranteed to be contiguous as well.

Parameters:

coordIdx – Coordinates of the sub-tensor to extract

Returns:

Tensor Sub-tensor.

Tensor extract(const std::vector<std::size_t> &coordIdx, const std::vector<std::size_t> &dims) const#

Returns a sub-tensor at some coordinate and with some dimension.

Note

Data contiguity of the returned Tensor is not guaranteed.

Parameters:
  • coordIdx – First coordinates of the sub-tensor to extract

  • dims – Dimensions of the sub-tensor to extract

Returns:

Tensor Sub-tensor.

void makeContiguous()#

Make the tensor’s storage contiguous, if it is not already the case. If not contiguous, a new memory space is allocated.

void copyCast(const Tensor &src)#

Copy-cast data from a Tensor on the same device. If current tensor backend/device is set and is different from src, an assertion is raised.

Parameters:

src – Source tensor to copy-cast from.

void copyFrom(const Tensor &src)#

Copy data from a Tensor from another backend/device. If current tensor data type is set and is different from src, an assertion is raised.

Parameters:

src – Source tensor to copy from.

void copyTranspose(const Tensor &src, const std::vector<DimSize_t> &transpose)#

Transpose data from another Tensor (which can be itself).

Parameters:

src – Source tensor to copy from.

void copyTranspose(const Tensor &src, const DataFormatTranspose &transpose)#
void copyPermute(const Tensor &src, const std::vector<DimSize_t> &permutation)#
void copyChangeIndex(Tensor &src, const std::vector<DimSize_t> &permutation)#
void copyCastFrom(const Tensor &src, std::shared_ptr<Tensor> &movedSrc)#

Copy-cast data from a Tensor.

Parameters:
  • src – Source tensor to copy-cast from.

  • movedSrc – shared_ptr to an intermediate Tensor that will contain the moved data if a device change should occur AND a type conversion is necessary (otherwise it remains unused). Any data already present will be overwritten. No new memory allocation will occur if movedSrc has already been allocated with the right type/size/device. If required, memory is always allocated on the current (destination) Tensor’s device.

inline void copyCastFrom(const Tensor &src)#

Copy-cast data from a Tensor. In case of both a device change AND a data type conversion, an intermediate buffer will be allocated and deallocated each time. If required, the buffer’s memory is always allocated on the current (destination) Tensor’s device.

Parameters:

src – Source tensor to copy-cast from.

Tensor &refContiguous(std::shared_ptr<Tensor> &fallback)#

Return a reference to a Tensor that is guaranteed to be contiguous:

  • itself, if already contiguous;

  • the provided Tensor, overwritten with the copied data. The data type, backend and device stay the same.

Parameters:

fallback – A shared_ptr to Tensor ready to be overwritten if necessary. The shared_ptr does not need to be initialized. No new memory allocation will occur if fallback has already been allocated with the right type/size/device.

Returns:

Reference to either itself or to fallback.

const Tensor &refContiguous(std::shared_ptr<Tensor> &fallback) const#
Tensor &refCast(std::shared_ptr<Tensor> &fallback, const Aidge::DataType &dt)#

Return a reference to a Tensor casted to the desired data type:

  • itself, if already at the right data type;

  • the provided Tensor, overwritten with the copy-casted data. The backend stays the same.

Parameters:
  • fallback – A shared_ptr to Tensor ready to be overwritten if necessary. The shared_ptr does not need to be initialized. No new memory allocation will occur if fallback has already been allocated with the right type/size/device.

  • dt – The desired data type.

Returns:

Reference to either itself or to fallback.

const Tensor &refCast(std::shared_ptr<Tensor> &fallback, const Aidge::DataType &dt) const#
Tensor &refFrom(std::shared_ptr<Tensor> &fallback, const std::string &backend, DeviceIdx_t device = 0)#

Return a reference to a Tensor on the desired backend/device:

  • itself, if already on the right device;

  • the provided Tensor, overwritten with the copied data. The data type stays the same.

Parameters:
  • fallback – A shared_ptr to Tensor ready to be overwritten if necessary. The shared_ptr does not need to be initialized. No new memory allocation will occur if fallback has already been allocated with the right type/size/device.

  • backend – The desired backend.

  • device – The desired device.

Returns:

Reference to either itself or to fallback.

const Tensor &refFrom(std::shared_ptr<Tensor> &fallback, const std::string &backend, DeviceIdx_t device = 0) const#
inline Tensor &refCastFrom(std::shared_ptr<Tensor> &fallback, const Aidge::DataType &dt, const std::string &backend, DeviceIdx_t device = 0)#

Return a reference to a Tensor on desired data type and backend/device:

  • itself, if already with the right characteristics;

  • the provided Tensor, overwritten with the copy-casted data. If required, fallback is always allocated on desired (destination) device.

Parameters:
  • fallback – A shared_ptr to Tensor ready to be overwritten if necessary. The shared_ptr does not need to be initialized. No new memory allocation will occur if fallback has already been allocated with the right type/size/device.

  • dt – The desired data type.

  • backend – The desired backend.

  • device – The desired device.

Returns:

Reference to either itself or to fallback.

inline const Tensor &refCastFrom(std::shared_ptr<Tensor> &fallback, const Aidge::DataType &dt, const std::string &backend, DeviceIdx_t device = 0) const#
inline Tensor &refCastFrom(std::shared_ptr<Tensor> &fallback, const Tensor &targetReqs)#

Return a reference to a Tensor with same characteristics (data type, backend/device) as targetReqs Tensor:

  • itself, if already with the right characteristics;

  • the provided Tensor, overwritten with the copy-casted data. If required, fallback is always allocated on current (destination) Tensor’s device.

Parameters:
  • fallback – A shared_ptr to Tensor ready to be overwritten if necessary. The shared_ptr does not need to be initialized. No new memory allocation will occur if fallback has already been allocated with the right type/size/device.

  • targetReqs – Tensor with the desired target characteristics.

Returns:

Reference to either itself or to fallback.

inline const Tensor &refCastFrom(std::shared_ptr<Tensor> &fallback, const Tensor &targetReqs) const#
Tensor &ref(std::shared_ptr<Tensor> &fallback, const Aidge::DataType &dt, const std::string &backend, DeviceIdx_t device = 0)#

Return a reference to a Tensor on desired data type and backend/device:

  • itself, if already with the right characteristics;

  • the provided Tensor, overwritten with the right characteristics.

Note

No data is copy-casted. If data was copy-casted by a previous refCastFrom() on the same fallback, it remains valid; otherwise, the data is invalid.

Parameters:
  • fallback – A shared_ptr to Tensor ready to be overwritten if necessary. The shared_ptr does not need to be initialized. No new memory allocation will occur if fallback has already been allocated with the right type/size/device.

  • dt – The desired data type.

  • backend – The desired backend.

  • device – The desired device.

Returns:

Reference to either itself or to fallback.

const Tensor &ref(std::shared_ptr<Tensor> &fallback, const Aidge::DataType &dt, const std::string &backend, DeviceIdx_t device = 0) const#
inline Tensor &ref(std::shared_ptr<Tensor> &fallback, const Tensor &targetReqs)#

Return a reference to a Tensor with same characteristics (data type, backend/device) as targetReqs Tensor:

  • itself, if already with the right characteristics;

  • the provided Tensor, overwritten with the right characteristics.

Note

No data is copy-casted. If data was copy-casted by a previous refCastFrom() on the same fallback, it remains valid; otherwise, the data is invalid.

Parameters:
  • fallback – A shared_ptr to Tensor ready to be overwritten if necessary. The shared_ptr does not need to be initialized. No new memory allocation will occur if fallback has already been allocated with the right type/size/device.

  • targetReqs – Tensor with the desired target characteristics.

Returns:

Reference to either itself or to fallback.

inline const Tensor &ref(std::shared_ptr<Tensor> &fallback, const Tensor &targetReqs) const#
Tensor repeat(int times) const#

Repeat the tensor along a new first dimension. For example, if the current tensor has dimensions (n, m), calling repeat(10) returns a tensor of shape (10, n, m) with 10 copies of the original data.

Parameters:

times – number of repetitions (must be positive)

Returns:

Tensor new tensor containing the repeated data.

Public Static Functions

static std::set<std::string> getAvailableBackends()#

Get a list of available backends.

Deprecated:

Use availableBackends() instead.

Returns:

std::set<std::string>

static std::set<std::string> availableBackends()#

Get a list of available backends.

Returns:

std::set<std::string>

static std::vector<std::size_t> toCoord(const std::vector<Aidge::DimSize_t> &dimensions, std::size_t index)#

From the 1D contiguous index, return the coordinates of an element in the tensor. Beware: do not use this function with the storage index!

Parameters:

index – 1D contiguous index of the value, considering a flattened, contiguous tensor.

Returns:

std::vector<DimSize_t>

static std::size_t toIndex(const std::vector<DimSize_t> &dimensions, const std::vector<std::size_t> &coords)#

From the coordinates, returns the 1D contiguous index of an element in the tensor. If the number of coordinates is less than the number of dimensions, the remaining coordinates are assumed to be 0. Beware: the contiguous index only corresponds to the storage index if the tensor is contiguous! Note that coords may be an empty vector.

Parameters:

coords – Coordinates of an element in the tensor

Returns:

DimSize_t Contiguous index

template<typename T>
static bool isInBounds(const std::vector<DimSize_t> &dimensions, const std::vector<T> &coords)#

Check whether the given coordinates are within the bounds of the given tensor dimensions.

Warning

this function is templated in order to accommodate cases like interpolation, where indices are not integers. However, the only accepted types are floating-point, integer and std::size_t types.

Parameters:
  • dimensions – tensor dimensions

  • coords – coordinates of the element to check

Returns:

true if all coords are in bounds, false otherwise

static bool isInBounds(const std::vector<DimSize_t> &dimensions, std::size_t index)#

Public Static Attributes

static constexpr const char *mType = {"Tensor"}#

Friends

template<typename T, typename VT = std::enable_if_t<std::is_arithmetic<T>::value, std::decay_t<T>>>
inline friend Tensor operator+(T val, const Tensor &other)#
template<typename T, typename VT = std::enable_if_t<std::is_arithmetic<T>::value, std::decay_t<T>>>
inline friend Tensor operator-(T val, const Tensor &other)#
template<typename T, typename VT = std::enable_if_t<std::is_arithmetic<T>::value, std::decay_t<T>>>
inline friend Tensor operator*(T val, const Tensor &other)#

Database#

class Database#

Abstract class representing a map from a key to data. All databases should inherit from this class. All subclasses should override Database::getItem to fetch data at a given index.

Subclassed by Aidge::CIFAR, Aidge::MNIST

Public Functions

Database() = default#
virtual ~Database() noexcept = default#
virtual std::vector<std::shared_ptr<Tensor>> getItem(const std::size_t index) const = 0#

Fetch an item of the database.

Parameters:

index – index of the item.

Returns:

vector of data mapped to index.

virtual std::size_t getLen() const noexcept = 0#

Get the number of items in the database.

Returns:

std::size_t

virtual std::size_t getNbModalities() const noexcept = 0#

Get the number of modalities in one database item.

Returns:

std::size_t

DataProvider#

class aidge_core.DataProvider#
__init__(self: aidge_core.aidge_core.DataProvider, database: aidge_core.aidge_core.Database, batch_size: SupportsInt, backend: str = 'cpu', shuffle: bool = False, drop_last: bool = False) None#
transforms(self: aidge_core.aidge_core.DataProvider, modality: typing.SupportsInt, graph: Aidge::GraphView) None#
class DataProvider : public std::enable_shared_from_this<DataProvider>#

Data Provider. Takes in a database and composes batches by fetching data from the given database.

Todo:

Implement the drop-last-batch option. Currently the last batch is returned with fewer elements.

Implement readRandomBatch to compose batches from the database with a random sampling strategy. Necessary for training.

Public Functions

DataProvider(const Database &database, const std::size_t batchSize, const std::string &backend = "cpu", const bool shuffle = false, const bool dropLast = false)#

Constructor of Data Provider.

Parameters:
  • database – database from which to load the data.

  • batchSize – number of data samples per batch.

inline void transforms(size_t modality, std::shared_ptr<GraphView> graph)#
std::vector<std::shared_ptr<Tensor>> readBatch() const#

Create a batch for each data modality in the database.

Returns:

a vector of tensors. Each tensor is a batch corresponding to one modality.

inline std::size_t getNbBatch()#

Get the number of batches.

Returns:

std::size_t

inline std::size_t getIndexBatch()#

Get the current batch index.

Returns:

std::size_t

inline void resetIndexBatch()#

Reset the internal batch index that browses the data of the database to zero.

inline void incrementIndexBatch()#

Increment the internal batch index that browses the data of the database.

void setBatches()#

Setup the batches for one pass on the database.

inline bool done()#

End condition of dataProvider for one pass on the database.

Returns:

true when all batches have been fetched, false otherwise

std::shared_ptr<DataProvider> iter()#

iter method for iterator protocol

Returns:

std::shared_ptr<DataProvider>

std::vector<std::shared_ptr<Aidge::Tensor>> next()#

next method for iterator protocol

Returns:

std::vector<std::shared_ptr<Aidge::Tensor>>