Aidge learning API#

Introduction#

Aidge provides its own native learning module. The primary purpose of Aidge Learning is to support Aidge’s intermediate representation (IR) for automated model optimization, notably advanced quantization-aware training (QAT), across heterogeneous deployment targets:

  • Ability to apply automated quantization schemes that are easily verifiable and customizable by users;

  • Formalized and fully reproducible support for industry-standard models and their derivatives;

  • Focus on custom hardware development, necessitating specific training strategies.

Basic example of a training pipeline in Aidge (for a single epoch):

import aidge_core
import aidge_learning
from tqdm import tqdm

# Define the model (or load an existing ONNX one)
model = ...
model.set_backend("cuda")

# Initialize parameters (weights and biases)
aidge_core.init_producer(model, "Producer-1>(Conv2D|FC)", aidge_core.he_filler)
aidge_core.init_producer(model, "Producer-2>(Conv2D|FC)", lambda x: aidge_core.constant_filler(x, 0.01))

# Define the data provider (using either aidge_backend_opencv or torch DataLoader)
dataprovider = ...

# Define the optimizer and set the parameters to train
opt = aidge_learning.SGD(momentum=0.9)
opt.set_parameters(aidge_core.producers(model))

# Define the learning rate scheduler
learning_rates = aidge_learning.constant_lr(0.01)
opt.set_learning_rate_scheduler(learning_rates)

scheduler = aidge_core.SequentialScheduler(model)
for i, (input, label) in enumerate(tqdm(dataprovider)):
    # Forward pass
    pred = scheduler.forward(data=[input])[0]
    # Reset the gradient
    opt.reset_grad(model)
    # Compute loss
    loss = aidge_learning.loss.CELoss(pred, label)
    # Compute accuracy (optional)
    acc = aidge_learning.metrics.Accuracy(pred, label, 1)[0]
    # Backward pass
    scheduler.backward()
    # Optimize the parameters
    opt.update()

Components#

Fillers#

The aidge_core.init_producer method takes a graph-matching query that selects the Producer nodes to fill in the graph, then applies the given filler to each match. In the example below, Producer-1>(Conv2D|FC) matches the Producers feeding the weights of Conv2D or FC nodes, and Producer-2>(Conv2D|FC) matches those feeding the biases.

Usage example:

# Initialize the weights
aidge_core.init_producer(model, "Producer-1>(Conv2D|FC)", aidge_core.he_filler)
# Initialize the bias
aidge_core.init_producer(model, "Producer-2>(Conv2D|FC)", lambda x: aidge_core.constant_filler(x, 0.01))

Note

Fillers are part of aidge_core and not aidge_learning.

The available fillers are:

aidge_core.constant_filler(tensor: aidge_core.aidge_core.Tensor, value: object) → None#
aidge_core.normal_filler(tensor: aidge_core.aidge_core.Tensor, mean: object = 0.0, stdDev: object = 1.0) → None#
aidge_core.uniform_filler(tensor: aidge_core.aidge_core.Tensor, min: object, max: object) → None#
aidge_core.xavier_uniform_filler(tensor: aidge_core.aidge_core.Tensor, scaling: object = 1.0, varianceNorm: aidge_core.aidge_core.VarianceNorm = <VarianceNorm.FanIn: 0>) → None#
aidge_core.xavier_normal_filler(tensor: aidge_core.aidge_core.Tensor, scaling: object = 1.0, varianceNorm: aidge_core.aidge_core.VarianceNorm = <VarianceNorm.FanIn: 0>) → None#
aidge_core.he_filler(tensor: aidge_core.aidge_core.Tensor, varianceNorm: aidge_core.aidge_core.VarianceNorm = <VarianceNorm.FanIn: 0>, meanNorm: object = 0.0, scaling: object = 1.0) → None#
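
A filler can also be applied directly to a standalone tensor, outside of init_producer. A minimal sketch, assuming a Tensor can be constructed from a NumPy array:

import numpy as np
import aidge_core

# Standalone FC-style weight tensor (assumption: Tensor accepts a NumPy array)
weights = aidge_core.Tensor(np.zeros((64, 32), dtype=np.float32))

# Fill it in place with He initialization (FanIn variance norm by default)
aidge_core.he_filler(weights)

# Or with a uniform distribution over [-0.1, 0.1]
aidge_core.uniform_filler(weights, -0.1, 0.1)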

Losses#

aidge_learning.loss.MSE(graph: aidge_core.aidge_core.Tensor, target: aidge_core.aidge_core.Tensor) → aidge_core.aidge_core.Tensor#
aidge_learning.loss.BCE(graph: aidge_core.aidge_core.Tensor, target: aidge_core.aidge_core.Tensor) → aidge_core.aidge_core.Tensor#
aidge_learning.loss.CELoss(graph: aidge_core.aidge_core.Tensor, target: aidge_core.aidge_core.Tensor) → aidge_core.aidge_core.Tensor#
aidge_learning.loss.KD(student_prediction: aidge_core.aidge_core.Tensor, teacher_prediction: aidge_core.aidge_core.Tensor, temperature: SupportsFloat | SupportsIndex = 2.0) → aidge_core.aidge_core.Tensor#
aidge_learning.loss.multiStepCELoss(graph: aidge_core.aidge_core.Tensor, target: aidge_core.aidge_core.Tensor, nbTimeSteps: SupportsInt | SupportsIndex) → aidge_core.aidge_core.Tensor#
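
Losses can also be evaluated directly on tensors, as in the training loop above. A minimal sketch, assuming Tensors built from NumPy arrays and a CPU backend (aidge_backend_cpu installed):

import numpy as np
import aidge_core
import aidge_learning

# Prediction and one-hot target for a 3-class problem
pred = aidge_core.Tensor(np.array([[0.7, 0.2, 0.1]], dtype=np.float32))
target = aidge_core.Tensor(np.array([[1.0, 0.0, 0.0]], dtype=np.float32))
pred.set_backend("cpu")
target.set_backend("cpu")

# Each loss returns a Tensor holding the loss value
mse = aidge_learning.loss.MSE(pred, target)
ce = aidge_learning.loss.CELoss(pred, target)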

Optimizers#

The base optimizer class is aidge_learning.Optimizer:

aidge_learning.Optimizer()#

The available optimizers are:

aidge_learning.Adam()#
aidge_learning.SGD()#
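
As in the training pipeline above, an optimizer is attached to the Producers of the model and driven explicitly at each iteration (model is assumed to be an already defined graph):

import aidge_core
import aidge_learning

opt = aidge_learning.SGD(momentum=0.9)           # or aidge_learning.Adam()
opt.set_parameters(aidge_core.producers(model))  # train all Producers of the model
opt.set_learning_rate_scheduler(aidge_learning.constant_lr(0.01))

# At each iteration:
opt.reset_grad(model)  # zero the gradients (before the backward pass)
# ... forward pass, loss computation, scheduler.backward() ...
opt.update()           # apply one optimization step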

Learning rate scheduling#

The base learning rate scheduler class is aidge_learning.LRScheduler:

aidge_learning.LRScheduler()#

The available learning rate schedulers are:

aidge_learning.constant_lr(initial_lr: SupportsFloat | SupportsIndex) → aidge_learning.aidge_learning.LRScheduler#
aidge_learning.step_lr(initial_lr: SupportsFloat | SupportsIndex, step_size: SupportsInt | SupportsIndex, gamma: SupportsFloat | SupportsIndex = 0.1) → aidge_learning.aidge_learning.LRScheduler#
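
For example, step_lr decays the learning rate by a factor gamma every step_size steps. The following sketch reuses the opt optimizer from the previous section:

# Start at 0.01 and multiply the learning rate by 0.1 every 1000 steps
schedule = aidge_learning.step_lr(initial_lr=0.01, step_size=1000, gamma=0.1)
opt.set_learning_rate_scheduler(schedule)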

Metrics#

aidge_learning.metrics.Accuracy(prediction: aidge_core.aidge_core.Tensor, target: aidge_core.aidge_core.Tensor, axis: SupportsInt | SupportsIndex) → aidge_core.aidge_core.Tensor#
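
As in the training loop above, the axis argument selects the class dimension along which the top-1 prediction is compared to the target, and the returned tensor is indexed to retrieve the value:

# Top-1 accuracy along axis 1 (the class axis), reusing pred and label from the loop above
acc = aidge_learning.metrics.Accuracy(pred, label, 1)[0]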