Aidge backend OpenCV API#
Introduction#
The OpenCV backend provides its own operators and is particularly useful for data pre-processing. It is possible to create pre-processing pipelines using the same graph IR as Aidge core.
The OpenCV backend, alongside its database drivers, serves as a drop-in replacement for Torch datasets and data loaders. You can use either with Aidge, depending on your front-end and backend preferences (Torch mostly relies on Python Imaging Library (PIL) instead of OpenCV). Performance is similar for both, with a slight advantage for the Aidge OpenCV backend.
Operators specific to the OpenCV backend are usually suffixed with Transformation.
Warning
In accordance with OpenCV conventions, color images are loaded in BGR format. Gray images are not automatically converted to color images (images are loaded as-is).
To ensure that all images share the same format (RGB, for example), add the following operator to the transforms pipeline:
aidge_backend_opencv.ColorSpaceTransformation(aidge_backend_opencv.colorspace.rgb).
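Conceptually, the BGR-to-RGB conversion just reverses the channel order of every pixel. The following sketch illustrates this with nested Python lists; it is illustrative only, not the operator's actual implementation, which works on OpenCV image data.

```python
# Sketch of the BGR -> RGB conversion that ColorSpaceTransformation
# performs conceptually: reverse the channel axis of each HWC pixel.
# (Illustrative only; the real operator operates on OpenCV Mat data.)

def bgr_to_rgb(image_hwc):
    """Reverse the channel order of an HWC image stored as nested lists."""
    return [[pixel[::-1] for pixel in row] for row in image_hwc]

# A 1x2 BGR image: a blue pixel, then a red pixel.
bgr = [[[255, 0, 0], [0, 0, 255]]]
print(bgr_to_rgb(bgr))  # [[[0, 0, 255], [255, 0, 0]]]
```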
Note
OpenCV uses the HWC format when loading multi-channel images.
This is the expected format for the Transformation operators.
Data in HWC format is automatically converted to CHW, which is Aidge's default format, in the aidge_core.DataProvider at the end of the transforms pipeline.
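The HWC-to-CHW layout change amounts to moving the channel axis first. A minimal sketch with nested lists (illustrative only; the DataProvider performs this on tensors):

```python
# Sketch of the HWC -> CHW layout change applied at the end of the
# transforms pipeline: gather each channel plane into its own 2-D array.

def hwc_to_chw(image_hwc):
    h = len(image_hwc)
    w = len(image_hwc[0])
    c = len(image_hwc[0][0])
    return [[[image_hwc[y][x][ch] for x in range(w)] for y in range(h)]
            for ch in range(c)]

img = [[[1, 2], [3, 4]]]    # H=1, W=2, C=2
print(hwc_to_chw(img))      # [[[1, 3]], [[2, 4]]]  (C=2, H=1, W=2)
```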
Example:
aidge_database = aidge_backend_opencv.CIFAR10(data_path="/local/DATABASE/cifar-10-batches-bin", train=True)
aidge_dataprovider = aidge_core.DataProvider(aidge_database,
                                             backend=BACKEND,
                                             batch_size=BATCH_SIZE,
                                             shuffle=True,
                                             drop_last=True)
aidge_dataprovider.transforms(0, aidge_core.sequential([
    aidge_backend_opencv.FlipTransformation(True, True)
]))
About ROI interface#
This backend also includes an ROI handling and transformation mechanism. The goal is to allow easy and efficient data augmentation for object detector training. The same pre-processing pipeline can be applied seamlessly to images, pixel-wise labels and ROI labels.
Example:
import aidge_core
import aidge_backend_opencv as aicv
import numpy as np
# Image pre-processing flow
preproc = aidge_core.sequential([
    aicv.RandomFlipTransformation(0.5, 0.5)
])
a = aidge_core.Tensor(np.zeros([32, 32], dtype=np.float32))
a.to_backend("opencv")
scheduler = aidge_core.SequentialScheduler(preproc)
scheduler.forward(forward_dims=False, data=[a])[0]
# ROI pre-processing flow
preproc_roi = aicv.ROIGraph(preproc)
r = aicv.CircularROI(1, (1, 2), 3.0)
rois = aicv.ROIs([32, 32])
rois.data = [r]
scheduler = aidge_core.SequentialScheduler(preproc_roi)
out = scheduler.forward(forward_dims=False, data=[rois])[0]
print(out.data[0])
Synchronization between the flows of the different modalities (image and ROI, for example) is done with shared Producer_Op operators for the random transformations.
The ROI flow can be automatically constructed from the image flow with the ROIGraph method: preproc_roi = aicv.ROIGraph(preproc).
The shared Producer_Op output value is set by a forward hook applied to the image flow Producer operator.
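The idea of the shared Producer can be sketched in plain Python: a single random draw decides the transformation parameters, and the same value is applied to both the image and the ROI coordinates so the two modalities stay consistent. The function names below are illustrative, not the actual Aidge internals.

```python
import random

# Sketch of shared-Producer synchronization: ONE random draw decides the
# horizontal flip, and the same boolean drives both the image flip and
# the ROI coordinate mirror. (Illustrative names, not Aidge internals.)

def flip_image_h(image):          # image as a list of pixel rows
    return [row[::-1] for row in image]

def flip_roi_h(x, width):         # mirror an x coordinate
    return width - 1 - x

def synchronized_flip(image, roi_x, p=0.5, rng=random):
    do_flip = rng.random() < p    # the "shared Producer" value
    if do_flip:
        roi_x = flip_roi_h(roi_x, len(image[0]))
        image = flip_image_h(image)
    return image, roi_x

img = [[0, 1, 2, 3]]
flipped, x = synchronized_flip(img, roi_x=0, p=1.0)  # force the flip
print(flipped, x)  # [[3, 2, 1, 0]] 3
```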
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
flowchart TB
subgraph shared operator
Producer_0("<em>Producer#0</em>"):::producerCls_rootCls
Producer_roi_0("<em>Producer#0</em>"):::producerCls
end
subgraph shared operator
Producer_1("<em>Producer#1</em>"):::producerCls_rootCls
Producer_roi_1("<em>Producer#1</em>"):::producerCls
end
FlipTransformation_0("<em>FlipTransformation#0</em>")
Producer_0-->|"0 [1] boolean<br/>↓<br/>1"|FlipTransformation_0
Producer_1-->|"0 [1] boolean<br/>↓<br/>2"|FlipTransformation_0
input0((in#0)):::inputCls--->|" [32, 32] float32<br/>↓<br/>0"|FlipTransformation_0
FlipTransformation_0--->|"0 [32, 32] float32<br/>↓"|output0((out#0)):::outputCls
classDef inputCls fill:#afa
classDef outputCls fill:#ffa
classDef externalCls fill:#ccc
classDef producerCls fill:#ccf
classDef genericCls fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls stroke-width:5px
classDef rootCls stroke:#f00
classDef producerCls_rootCls stroke:#f00,fill:#ccf
classDef genericCls_rootCls stroke:#f00,fill:#f9f9ff,stroke-width:1px,stroke-dasharray: 5 5
classDef metaCls_rootCls stroke:#f00,stroke-width:5px
FlipTransformation_ROI_0("<em>FlipTransformation_ROI#0</em>")
Producer_roi_1-->|"0 [1] boolean<br/>↓<br/>1"|FlipTransformation_ROI_0
Producer_roi_0-->|"0 [1] boolean<br/>↓<br/>2"|FlipTransformation_ROI_0
input_roi0((in#0)):::inputCls--->|"↓<br/>0"|FlipTransformation_ROI_0
FlipTransformation_ROI_0--->|"0<br/>↓"|output_roi0((out#0)):::outputCls
Predefined operators#
⚠️ The documentation is still missing, but these operators are almost identical to the legacy N2D2 ones. Please refer to the N2D2 documentation for now.
AffineTransformation#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>AffineTransformationOp</b>
Attributes:
<sub><em>first_operator</em></sub>
<sub><em>first_value</em></sub>
<sub><em>second_operator</em></sub>
<sub><em>second_value</em></sub>
<sub><em>div_by_zero_warn_limit</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
In1[first_value]:::text-only -->|"In[1]"| Op
In2[second_operator]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.AffineTransformation(*args, **kwargs)#
Overloaded function.
AffineTransformation(first_operator: aidge_backend_opencv.aidge_backend_opencv.affine_operator, first_value: aidge_core.aidge_core.Tensor = Tensor([], dims = [], dtype = float32), second_operator: aidge_backend_opencv.aidge_backend_opencv.affine_operator = <affine_operator.plus: 0>, second_value: aidge_core.aidge_core.Tensor = Tensor([], dims = [], dtype = float32), name: str = '') -> aidge_core.aidge_core.Node
AffineTransformation(first_operator: aidge_backend_opencv.aidge_backend_opencv.affine_operator, first_value: str, second_operator: aidge_backend_opencv.aidge_backend_opencv.affine_operator = <affine_operator.plus: 0>, second_value: str = '', name: str = '') -> aidge_core.aidge_core.Node
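The attribute names suggest a two-step per-element arithmetic: out = (in <first_operator> first_value) <second_operator> second_value. The sketch below illustrates that chaining with plain Python scalars; the string operator names stand in for the affine_operator enum values and are assumptions, not the enum's actual members.

```python
import operator

# Sketch of the two-step arithmetic suggested by AffineTransformation's
# attributes: out = (in <first_operator> first_value) <second_operator>
# second_value. The string keys are illustrative stand-ins for the
# affine_operator enum values.

OPS = {"plus": operator.add, "minus": operator.sub,
       "multiplies": operator.mul, "divides": operator.truediv}

def affine(value, first_operator, first_value,
           second_operator="plus", second_value=0.0):
    out = OPS[first_operator](value, first_value)
    return OPS[second_operator](out, second_value)

# Scale to [0, 1] then shift: (x / 255) + 0.5
print(affine(127.5, "divides", 255.0, "plus", 0.5))  # 1.0
```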
CentroidCropTransformation#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>CentroidCropTransformationOp</b>
Attributes:
<sub><em>axis</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
Op -->|"Out[1]"| Out1[crop_rect]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.CentroidCropTransformation(axis: SupportsInt | SupportsIndex, name: str = '') aidge_core.aidge_core.Node#
ChannelExtractionTransformation#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ChannelExtractionTransformationOp</b>
Attributes:
<sub><em>channel</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.ChannelExtractionTransformation(channel: aidge_backend_opencv.aidge_backend_opencv.channel, name: str = '') aidge_core.aidge_core.Node#
ColorSpaceTransformation#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>ColorSpaceTransformationOp</b>
Attributes:
<sub><em>color_space</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.ColorSpaceTransformation(color_space: aidge_backend_opencv.aidge_backend_opencv.colorspace, name: str = '') aidge_core.aidge_core.Node#
CompressionNoiseTransformation#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>CompressionNoiseTransformationOp</b>
Attributes:
<sub><em>range</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.CompressionNoiseTransformation(range: tuple[SupportsInt | SupportsIndex, SupportsInt | SupportsIndex], name: str = '') aidge_core.aidge_core.Node#
CropTransformation#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>CropTransformationOp</b>
Attributes:
<sub><em>width</em></sub>
<sub><em>height</em></sub>
<sub><em>offset_x</em></sub>
<sub><em>offset_y</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
In1[crop_rect]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.CropTransformation(width: SupportsInt | SupportsIndex, height: SupportsInt | SupportsIndex, offset_x: SupportsInt | SupportsIndex = 0, offset_y: SupportsInt | SupportsIndex = 0, name: str = '') aidge_core.aidge_core.Node#
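The crop semantics can be sketched as simple window slicing: take a width x height region starting at (offset_x, offset_y). This is an illustration with nested lists, not the operator's implementation.

```python
# Sketch of CropTransformation semantics: extract a width x height
# window at (offset_x, offset_y) from an image stored as nested lists.

def crop(image, width, height, offset_x=0, offset_y=0):
    return [row[offset_x:offset_x + width]
            for row in image[offset_y:offset_y + height]]

img = [[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]]                       # 3x3, single channel
print(crop(img, width=2, height=2,
           offset_x=1, offset_y=1))     # [[4, 5], [7, 8]]
```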
FlipTransformation#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>FlipTransformationOp</b>
Attributes:
<sub><em>horizontal_flip</em></sub>
<sub><em>vertical_flip</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
In1[horizontal_flip]:::text-only -->|"In[1]"| Op
In2[vertical_flip]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.FlipTransformation(horizontal_flip: bool, vertical_flip: bool = False, name: str = '') aidge_core.aidge_core.Node#
PadCropTransformation#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>PadCropTransformationOp</b>
Attributes:
<sub><em>width</em></sub>
<sub><em>height</em></sub>
<sub><em>additive_hw</em></sub>
<sub><em>border_type</em></sub>
<sub><em>border_value</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.PadCropTransformation(width: typing.SupportsInt | typing.SupportsIndex, height: typing.SupportsInt | typing.SupportsIndex, additive_wh: bool = False, border_type: aidge_backend_opencv.aidge_backend_opencv.pad_crop_border_type = <pad_crop_border_type.minusonereflectborder: 4>, border_value: collections.abc.Sequence[typing.SupportsFloat | typing.SupportsIndex] = [], name: str = '') aidge_core.aidge_core.Node#
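The pad-or-crop idea can be sketched on a single spatial axis: pad with a constant border value when the input is too small, crop when it is too large. This is an assumed illustration (constant border only, right-padding, centered crop); the operator's actual border handling follows its border_type attribute.

```python
# Sketch of PadCropTransformation on one axis: pad a row to the target
# width with a constant border value, or center-crop it when too wide.
# (Assumed semantics for illustration; other border modes omitted.)

def pad_crop_row(row, width, border_value=0):
    if len(row) < width:                 # pad on the right
        return row + [border_value] * (width - len(row))
    start = (len(row) - width) // 2      # center crop
    return row[start:start + width]

print(pad_crop_row([1, 2, 3], 5))        # [1, 2, 3, 0, 0]
print(pad_crop_row([1, 2, 3, 4, 5], 3))  # [2, 3, 4]
```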
RangeAffineTransformation#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>RangeAffineTransformationOp</b>
Attributes:
<sub><em>first_operator</em></sub>
<sub><em>first_value</em></sub>
<sub><em>second_operator</em></sub>
<sub><em>second_value</em></sub>
<sub><em>truncate</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.RangeAffineTransformation(*args, **kwargs)#
Overloaded function.
RangeAffineTransformation(first_operator: aidge_backend_opencv.aidge_backend_opencv.range_affine_operator, first_value: collections.abc.Sequence[typing.SupportsFloat | typing.SupportsIndex], second_operator: aidge_backend_opencv.aidge_backend_opencv.range_affine_operator = <range_affine_operator.plus: 0>, second_value: collections.abc.Sequence[typing.SupportsFloat | typing.SupportsIndex] = [], name: str = '') -> aidge_core.aidge_core.Node
RangeAffineTransformation(first_operator: aidge_backend_opencv.aidge_backend_opencv.range_affine_operator, first_value: typing.SupportsFloat | typing.SupportsIndex, second_operator: aidge_backend_opencv.aidge_backend_opencv.range_affine_operator = <range_affine_operator.plus: 0>, second_value: typing.SupportsFloat | typing.SupportsIndex = 0.0, name: str = '') -> aidge_core.aidge_core.Node
RescaleTransformation#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>RescaleTransformationOp</b>
Attributes:
<sub><em>width</em></sub>
<sub><em>height</em></sub>
<sub><em>keep_aspect_ratio</em></sub>
<sub><em>resize_to_fit</em></sub>
<sub><em>interpolation</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.RescaleTransformation(width: SupportsInt | SupportsIndex, height: SupportsInt | SupportsIndex, keep_aspect_ratio: bool = False, resize_to_fit: bool = True, name: str = '') aidge_core.aidge_core.Node#
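The interplay of width, height and keep_aspect_ratio can be sketched as a target-size computation: without keep_aspect_ratio the target is taken verbatim; with it, the image is fit inside the (width, height) box while preserving its proportions. This is an assumed reading of the attributes, for illustration only.

```python
# Sketch of the target-size computation behind keep_aspect_ratio:
# fit the source image inside the (width, height) box while keeping
# its aspect ratio. (Assumed semantics, for illustration.)

def rescaled_size(src_w, src_h, width, height, keep_aspect_ratio=False):
    if not keep_aspect_ratio:
        return width, height
    scale = min(width / src_w, height / src_h)   # fit inside the box
    return round(src_w * scale), round(src_h * scale)

print(rescaled_size(640, 480, 320, 320, keep_aspect_ratio=True))  # (320, 240)
```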
SliceExtractionTransformation#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>SliceExtractionTransformationOp</b>
Attributes:
<sub><em>width</em></sub>
<sub><em>height</em></sub>
<sub><em>offset_x</em></sub>
<sub><em>offset_y</em></sub>
<sub><em>rotation</em></sub>
<sub><em>scaling</em></sub>
<sub><em>allow_padding</em></sub>
<sub><em>border_type</em></sub>
<sub><em>border_value</em></sub>
<sub><em>interpolation</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
In1[offset_x]:::text-only -->|"In[1]"| Op
In2[offset_y]:::text-only -->|"In[2]"| Op
In3[rotation]:::text-only -->|"In[3]"| Op
In4[scaling]:::text-only -->|"In[4]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.SliceExtractionTransformation(width: typing.SupportsInt | typing.SupportsIndex, height: typing.SupportsInt | typing.SupportsIndex, offset_x: typing.SupportsInt | typing.SupportsIndex = 0, offset_y: typing.SupportsInt | typing.SupportsIndex = 0, rotation: typing.SupportsFloat | typing.SupportsIndex = 0.0, scaling: typing.SupportsFloat | typing.SupportsIndex = 1.0, allow_padding: bool = False, border_type: aidge_backend_opencv.aidge_backend_opencv.slice_extraction_border_type = <slice_extraction_border_type.minusonereflectborder: 4>, border_value: collections.abc.Sequence[typing.SupportsFloat | typing.SupportsIndex] = [], name: str = '') aidge_core.aidge_core.Node#
- std::shared_ptr<Node> Aidge::SliceExtractionTransformation(int width, int height, int offsetX = 0, int offsetY = 0, float rotation = 0.0, float scaling = 1.0, bool allowPadding = false, SliceExtractionBorderType borderType = SliceExtractionBorderType::MinusOneReflectBorder, const std::vector<double> &borderValue = std::vector<double>(), const std::string &name = "")#
TrimTransformation#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>TrimTransformationOp</b>
Attributes:
<sub><em>nb_levels</em></sub>
<sub><em>kernel</em></sub>
<sub><em>method</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
Op -->|"Out[1]"| Out1[crop_rect]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.TrimTransformation(nb_levels: typing.SupportsInt | typing.SupportsIndex, kernel: aidge_core.aidge_core.Tensor = Tensor([[[ 1], [ 1], [ 1]], [[ 1], [ 1], [ 1]], [[ 1], [ 1], [ 1]]], dims = [3, 3, 1], dtype = uint8), method: aidge_backend_opencv.aidge_backend_opencv.method = <method.discretize: 0>, name: str = '') aidge_core.aidge_core.Node#
- std::shared_ptr<Node> Aidge::TrimTransformation(unsigned int nbLevels, const Tensor &kernel = cvMatToTensor(cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3))), TrimTransformation_Op::Method method = TrimTransformation_Op::Method::Discretize, const std::string &name = "")#
Predefined operators acting on ROI#
CropTransformation_ROI#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>CropTransformationROIOp</b>
Attributes:
<sub><em>width</em></sub>
<sub><em>height</em></sub>
<sub><em>offset_x</em></sub>
<sub><em>offset_y</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
In1[crop_rect]:::text-only -->|"In[1]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.CropTransformationROI(width: SupportsInt | SupportsIndex, height: SupportsInt | SupportsIndex, offset_x: SupportsInt | SupportsIndex = 0, offset_y: SupportsInt | SupportsIndex = 0, name: str = '') aidge_core.aidge_core.Node#
FlipTransformation_ROI#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>FlipTransformationROIOp</b>
Attributes:
<sub><em>horizontal_flip</em></sub>
<sub><em>vertical_flip</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
In1[horizontal_flip]:::text-only -->|"In[1]"| Op
In2[vertical_flip]:::text-only -->|"In[2]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.FlipTransformationROI(horizontal_flip: bool, vertical_flip: bool = False, name: str = '') aidge_core.aidge_core.Node#
PadCropTransformation_ROI#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>PadCropTransformationROIOp</b>
Attributes:
<sub><em>width</em></sub>
<sub><em>height</em></sub>
<sub><em>additive_hw</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.PadCropTransformationROI(width: SupportsInt | SupportsIndex, height: SupportsInt | SupportsIndex, additive_wh: bool = False, name: str = '') aidge_core.aidge_core.Node#
RescaleTransformation_ROI#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>RescaleTransformationROIOp</b>
Attributes:
<sub><em>width</em></sub>
<sub><em>height</em></sub>
<sub><em>keep_aspect_ratio</em></sub>
<sub><em>resize_to_fit</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.RescaleTransformationROI(width: SupportsInt | SupportsIndex, height: SupportsInt | SupportsIndex, keep_aspect_ratio: bool = False, resize_to_fit: bool = True, name: str = '') aidge_core.aidge_core.Node#
SliceExtractionTransformation_ROI#
%%{init: {'flowchart': { 'curve': 'monotoneY'}, 'fontFamily': 'Verdana' } }%%
graph TD
Op("<b>SliceExtractionTransformationROIOp</b>
Attributes:
<sub><em>width</em></sub>
<sub><em>height</em></sub>
<sub><em>offset_x</em></sub>
<sub><em>offset_y</em></sub>
<sub><em>rotation</em></sub>
<sub><em>scaling</em></sub>
"):::operator
In0[input_data]:::text-only -->|"In[0]"| Op
In1[offset_x]:::text-only -->|"In[1]"| Op
In2[offset_y]:::text-only -->|"In[2]"| Op
In3[rotation]:::text-only -->|"In[3]"| Op
In4[scaling]:::text-only -->|"In[4]"| Op
Op -->|"Out[0]"| Out0[output_data]:::text-only
classDef text-only fill-opacity:0, stroke-opacity:0;
classDef operator stroke-opacity:0;
- aidge_backend_opencv.SliceExtractionTransformationROI(width: SupportsInt | SupportsIndex, height: SupportsInt | SupportsIndex, offset_x: SupportsInt | SupportsIndex = 0, offset_y: SupportsInt | SupportsIndex = 0, rotation: SupportsFloat | SupportsIndex = 0.0, scaling: SupportsFloat | SupportsIndex = 1.0, name: str = '') aidge_core.aidge_core.Node#
Databases#
MNIST#
- class aidge_backend_opencv.MNIST#
- __init__(self: aidge_backend_opencv.aidge_backend_opencv.MNIST, dataPath: str, train: bool, load_data_in_memory: bool = False) None#
- get_item(self: aidge_core.aidge_core.Database, index: SupportsInt | SupportsIndex) list[Aidge::Tensor]#
- get_nb_modalities(self: aidge_core.aidge_core.Database) int#
- len(self: aidge_core.aidge_core.Database) int#
- class MNIST : public Aidge::Database#
Public Types
Public Functions
- inline MNIST(const std::string &dataPath, bool train, bool loadDataInMemory = false)#
Data Transformations.
- ~MNIST() noexcept#
- virtual std::vector<std::shared_ptr<Tensor>> getItem(const std::size_t index) const final override#
Fetch an item of the database.
- Parameters:
index – index of the item.
- Returns:
vector of data mapped to index.
- inline virtual std::size_t getLen() const noexcept final override#
Get the number of items in the database.
- Returns:
std::size_t
- inline virtual std::size_t getNbModalities() const noexcept final override#
Get the number of modalities in one database item.
- Returns:
std::size_t
- union MagicNumber#
CIFAR10#
- class aidge_backend_opencv.CIFAR10#
- __init__(self: aidge_backend_opencv.aidge_backend_opencv.CIFAR10, data_path: str, train: bool, label_path: str = '') None#
- get_item(self: aidge_core.aidge_core.Database, index: SupportsInt | SupportsIndex) list[Aidge::Tensor]#
- get_labels_name(self: aidge_backend_opencv.aidge_backend_opencv.CIFAR) list[str]#
- get_nb_modalities(self: aidge_core.aidge_core.Database) int#
- len(self: aidge_core.aidge_core.Database) int#
CIFAR100#
- class aidge_backend_opencv.CIFAR100#
- __init__(self: aidge_backend_opencv.aidge_backend_opencv.CIFAR100, data_path: str, train: bool, use_coarse: bool = False, label_path: str = '') None#
- get_item(self: aidge_core.aidge_core.Database, index: SupportsInt | SupportsIndex) list[Aidge::Tensor]#
- get_labels_name(self: aidge_backend_opencv.aidge_backend_opencv.CIFAR) list[str]#
- get_nb_modalities(self: aidge_core.aidge_core.Database) int#
- len(self: aidge_core.aidge_core.Database) int#
Directory#
- class aidge_backend_opencv.Directory#
- __init__(self: aidge_backend_opencv.aidge_backend_opencv.Directory) None#
- get_item(self: aidge_core.aidge_core.Database, index: SupportsInt | SupportsIndex) list[Aidge::Tensor]#
- get_labels_name(self: aidge_backend_opencv.aidge_backend_opencv.Directory) list[str]#
- get_nb_modalities(self: aidge_core.aidge_core.Database) int#
- len(self: aidge_core.aidge_core.Database) int#
- load_dir(self: aidge_backend_opencv.aidge_backend_opencv.Directory, dir_path: str, depth: SupportsInt | SupportsIndex = 0, label_name: str = '', label_depth: SupportsInt | SupportsIndex = 0) None#
- load_file(self: aidge_backend_opencv.aidge_backend_opencv.Directory, file_name: str, label_name: str) int#
- set_ignore_masks(self: aidge_backend_opencv.aidge_backend_opencv.Directory, ignore_masks: collections.abc.Sequence[str]) None#
- set_valid_extensions(self: aidge_backend_opencv.aidge_backend_opencv.Directory, valid_extensions: collections.abc.Sequence[str]) None#
- class Directory : public Aidge::Database#
Subclassed by Aidge::ILSVRC2012_Directory
Public Functions
- inline Directory()#
- virtual std::vector<std::shared_ptr<Tensor>> getItem(const std::size_t index) const final override#
Fetch an item of the database.
- Parameters:
index – index of the item.
- Returns:
vector of data mapped to index.
- inline virtual std::size_t getLen() const noexcept final override#
Get the number of items in the database.
- Returns:
std::size_t
- inline virtual std::size_t getNbModalities() const noexcept final override#
Get the number of modalities in one database item.
- Returns:
std::size_t
- inline std::vector<std::string> getLabelsName() const noexcept#
- void loadDir(const std::string &dirPath, int depth = 0, const std::string &labelName = "", int labelDepth = 0)#
- Parameters:
depth – depth = 0: load stimuli only from the current directory (dirPath); depth = 1: load stimuli from dirPath and from its immediate sub-directories; depth < 0: load stimuli recursively from dirPath and all its sub-directories
labelDepth – labelDepth = -1: no label for any stimulus (label ID = -1); labelDepth = 0: uses the labelName string for all stimuli; labelDepth = 1: uses the labelName string for stimuli in the current directory (dirPath) and labelName combined with the sub-directory name for stimuli in the sub-directories
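The depth parameter's semantics can be sketched as a depth-limited directory walk: depth = 0 takes files from dirPath only, depth = 1 also descends one level, and a negative depth recurses without limit. This is an illustrative Python sketch, not the C++ implementation.

```python
import os

# Sketch of the loadDir depth semantics: depth = 0 loads files from
# dir_path only, depth = 1 also descends one level of sub-directories,
# depth < 0 recurses without limit. (Illustrative, not the C++ code.)

def list_stimuli(dir_path, depth=0):
    files = []
    for entry in sorted(os.listdir(dir_path)):
        path = os.path.join(dir_path, entry)
        if os.path.isfile(path):
            files.append(path)
        elif os.path.isdir(path) and depth != 0:
            # Decrement only non-negative depths; negative means unbounded.
            files.extend(list_stimuli(path, depth - 1 if depth > 0 else depth))
    return files
```

For example, with a file in dirPath and another in a sub-directory, depth=0 returns one file while depth=1 (or any negative depth) returns both.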
- std::size_t loadFile(const std::string &fileName, const std::string &labelName)#
- void setIgnoreMasks(const std::vector<std::string> &ignoreMasks)#
- void setValidExtensions(const std::vector<std::string> &validExtensions)#
- virtual ~Directory() noexcept#
ILSVRC2012_Directory#
- class aidge_backend_opencv.ILSVRC2012_Directory#
- __init__(self: aidge_backend_opencv.aidge_backend_opencv.ILSVRC2012_Directory, data_path: str, train: bool, label_path: str = '', background_class: bool = False) None#
- get_item(self: aidge_core.aidge_core.Database, index: SupportsInt | SupportsIndex) list[Aidge::Tensor]#
- get_labels_name(self: aidge_backend_opencv.aidge_backend_opencv.Directory) list[str]#
- get_nb_modalities(self: aidge_core.aidge_core.Database) int#
- len(self: aidge_core.aidge_core.Database) int#
- load_dir(self: aidge_backend_opencv.aidge_backend_opencv.Directory, dir_path: str, depth: SupportsInt | SupportsIndex = 0, label_name: str = '', label_depth: SupportsInt | SupportsIndex = 0) None#
- load_file(self: aidge_backend_opencv.aidge_backend_opencv.Directory, file_name: str, label_name: str) int#
- set_ignore_masks(self: aidge_backend_opencv.aidge_backend_opencv.Directory, ignore_masks: collections.abc.Sequence[str]) None#
- set_valid_extensions(self: aidge_backend_opencv.aidge_backend_opencv.Directory, valid_extensions: collections.abc.Sequence[str]) None#