Training a simple Neural Network#
This tutorial introduces the basics of training a network with the Aidge framework.
What you will learn:
1. create a model using the Aidge API
2. create and import a dataset
3. train the model
The following modules will be required:
[1]:
!pip install torchvision==0.14.1+cpu --extra-index-url https://download.pytorch.org/whl/cpu
!pip install numpy==1.24.1
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cpu
Collecting torchvision==0.14.1+cpu
Downloading https://download.pytorch.org/whl/cpu/torchvision-0.14.1%2Bcpu-cp310-cp310-linux_x86_64.whl (16.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.8/16.8 MB 97.0 MB/s eta 0:00:00
Requirement already satisfied: typing-extensions in /builds/eclipse/aidge/aidge/venv/lib/python3.10/site-packages (from torchvision==0.14.1+cpu) (4.12.2)
Requirement already satisfied: numpy in /builds/eclipse/aidge/aidge/venv/lib/python3.10/site-packages (from torchvision==0.14.1+cpu) (2.2.0)
Requirement already satisfied: requests in /builds/eclipse/aidge/aidge/venv/lib/python3.10/site-packages (from torchvision==0.14.1+cpu) (2.32.3)
Collecting torch==1.13.1 (from torchvision==0.14.1+cpu)
Downloading https://download.pytorch.org/whl/cpu/torch-1.13.1%2Bcpu-cp310-cp310-linux_x86_64.whl (199.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 199.1/199.1 MB 125.9 MB/s eta 0:00:00
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /builds/eclipse/aidge/aidge/venv/lib/python3.10/site-packages (from torchvision==0.14.1+cpu) (11.0.0)
Requirement already satisfied: charset-normalizer<4,>=2 in /builds/eclipse/aidge/aidge/venv/lib/python3.10/site-packages (from requests->torchvision==0.14.1+cpu) (3.4.0)
Requirement already satisfied: idna<4,>=2.5 in /builds/eclipse/aidge/aidge/venv/lib/python3.10/site-packages (from requests->torchvision==0.14.1+cpu) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in /builds/eclipse/aidge/aidge/venv/lib/python3.10/site-packages (from requests->torchvision==0.14.1+cpu) (2.2.3)
Requirement already satisfied: certifi>=2017.4.17 in /builds/eclipse/aidge/aidge/venv/lib/python3.10/site-packages (from requests->torchvision==0.14.1+cpu) (2024.12.14)
Installing collected packages: torch, torchvision
Attempting uninstall: torch
Found existing installation: torch 2.5.1
Uninstalling torch-2.5.1:
Successfully uninstalled torch-2.5.1
Successfully installed torch-1.13.1+cpu torchvision-0.14.1+cpu
Collecting numpy==1.24.1
Downloading numpy-1.24.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.6 kB)
Downloading numpy-1.24.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.3/17.3 MB 78.4 MB/s eta 0:00:00
Installing collected packages: numpy
Attempting uninstall: numpy
Found existing installation: numpy 2.2.0
Uninstalling numpy-2.2.0:
Successfully uninstalled numpy-2.2.0
Successfully installed numpy-1.24.1
Choose the backend:
[2]:
# "cpu" or "cuda"
BACKEND = "cpu"
Import the required modules:
[3]:
import aidge_core
if BACKEND == "cuda":
    import aidge_backend_cuda
else:
    import aidge_backend_cpu
import aidge_learning
import numpy as np
# required to load CIFAR10 dataset
import torchvision
import torchvision.transforms as transforms
Creating an Aidge model#
In this example, we will create a simple multilayer perceptron. For this we will use the helper function sequential, which connects the nodes one after the other and returns the resulting GraphView. The input size of the first layer, 32*32*3 = 3072, corresponds to a flattened CIFAR-10 image.
[4]:
model = aidge_core.sequential([
    aidge_core.FC(in_channels=32*32*3, out_channels=512),
    aidge_core.ReLU(),
    aidge_core.FC(in_channels=512, out_channels=256),
    aidge_core.ReLU(),
    aidge_core.FC(in_channels=256, out_channels=128),
    aidge_core.ReLU(),
    aidge_core.FC(in_channels=128, out_channels=10),
])
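If you want to inspect the resulting graph, the GraphView can be dumped to a Mermaid file for visualization. A minimal sketch, assuming a save method is available on GraphView in your Aidge version:

# Optional: export the graph as a Mermaid (.mmd) file for visualization
# (the save method is assumed here; check your Aidge version)
model.save("myModel")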
Once the model is created, we can set its backend and datatype, and initialize the values of its parameters. We will initialize the FC weights with the He filler and set all biases to 0.01.
[5]:
# Set backend and datatype
model.set_backend(BACKEND)
model.set_datatype(aidge_core.dtype.float32)
# Initialize parameters (weights and biases)
for node in model.get_nodes():
    if node.type() == "Producer":
        prod_op = node.get_operator()
        value = prod_op.get_output(0)
        # Inspect the first (node, input index) pair this Producer feeds
        tuple_out = node.output(0)[0]
        if tuple_out[0].type() == "Conv" and tuple_out[1] == 1:
            # Conv weight (no Conv in the current network, kept for reference)
            aidge_core.xavier_uniform_filler(value)
        elif tuple_out[0].type() == "Conv" and tuple_out[1] == 2:
            # Conv bias
            aidge_core.constant_filler(value, 0.01)
        elif tuple_out[0].type() == "FC" and tuple_out[1] == 1:
            # FC weight
            aidge_core.he_filler(value)
        elif tuple_out[0].type() == "FC" and tuple_out[1] == 2:
            # FC bias
            aidge_core.constant_filler(value, 0.01)
        else:
            pass
Note: we could also have selected the producers to initialize via graph matching instead of iterating over every node; see the sketch below.
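A minimal sketch of the idea, selecting the FC layers directly by type instead of walking over every Producer. The full graph-matching API is not shown here, and OperatorTensor.get_input is an assumption that may differ in your Aidge version:

# Hypothetical sketch: initialize FC parameters by selecting FC nodes
# directly (get_input(1) = weight, get_input(2) = bias are assumptions)
for node in model.get_nodes():
    if node.type() == "FC":
        op = node.get_operator()
        aidge_core.he_filler(op.get_input(1))              # weight
        aidge_core.constant_filler(op.get_input(2), 0.01)  # bias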
Aidge database#
Now that the model is ready, we need to prepare a database. For this we will use the possibility to create a custom database by subclassing aidge_core.Database.
We will use the PyTorch framework to load CIFAR-10, and write a custom database that exposes the images as Aidge tensors and the labels as one-hot encoded Aidge tensors.
[6]:
def one_hot_encoding(cls, nb_cls):
    values = np.array([float(0.0)] * nb_cls)
    values[cls] = float(1.0)
    t = aidge_core.Tensor(np.array(values))
    t.set_datatype(aidge_core.dtype.float32)
    return t

class aidge_cifar10(aidge_core.Database):
    def __init__(self):
        aidge_core.Database.__init__(self)
        transform = transforms.Compose(
            [transforms.ToTensor(),
             transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
        self.trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                                     download=True, transform=transform)

    def get_item(self, idx):
        data, label = self.trainset.__getitem__(idx)
        return [aidge_core.Tensor(data.numpy()),
                one_hot_encoding(label, 10)]

    def len(self):
        return len(self.trainset)

    def get_nb_modalities(self):
        return 2
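As a quick sanity check of the encoder, class index 3 over 10 classes should yield a vector with a single 1.0 at position 3 (the printed form depends on aidge_core.Tensor's repr):

# Sanity check: expect [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
t = one_hot_encoding(3, 10)
print(t)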
Using this, we can now create a DataProvider that will feed batches of data to the network.
[7]:
aidge_database = aidge_cifar10()
BATCH_SIZE = 64
aidge_dataprovider = aidge_core.DataProvider(aidge_database,
                                             backend=BACKEND,
                                             batch_size=BATCH_SIZE,
                                             shuffle=True,
                                             drop_last=True)
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
100.0%
Extracting ./data/cifar-10-python.tar.gz to ./data
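Before training, it can be useful to peek at a single batch to verify shapes: inputs should be of size (64, 3, 32, 32) and labels (64, 10). A small sketch, assuming the Tensor class exposes a dims() method:

# Fetch one batch and check its dimensions (dims() is assumed available)
for inputs, labels in aidge_dataprovider:
    print(inputs.dims(), labels.dims())
    break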
Set up learning objects#
We now have all the basic elements required to run the learning. We just need to set up the objects specific to training, and then we will be able to write our first training loop!
For propagation and backpropagation, Aidge uses a scheduler object; here we will use the SequentialScheduler.
[8]:
# Set object for learning
scheduler = aidge_core.SequentialScheduler(model)
To update the weights, we will use an optimizer, in this case SGD (stochastic gradient descent).
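In its simplest form (no momentum), SGD updates each parameter $w$ as

$$w \leftarrow w - \eta \, \nabla_w \mathcal{L}$$

where $\eta$ is the learning rate; here we keep it constant at 0.01 via constant_lr.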
[9]:
# set up the optimizer
opt = aidge_learning.SGD()
learning_rates = aidge_learning.constant_lr(0.01)
opt.set_learning_rate_scheduler(learning_rates)
opt.set_parameters(list(aidge_core.producers(model)))
Training loop#
[10]:
tot_acc = 0
for i, (input, label) in enumerate(aidge_dataprovider):
    # input.init_grad()
    scheduler.forward(data=[input])
    # Retrieve the prediction from the model's single output node
    pred = list(model.get_output_nodes())[0].get_operator().get_output(0)
    opt.reset_grad()
    loss = aidge_learning.loss.MSE(pred, label)
    acc = aidge_learning.metrics.Accuracy(pred, label, 1)[0]
    tot_acc += acc
    scheduler.backward()
    opt.update()
    print(f"Nb samples {(i+1)*BATCH_SIZE}, loss: {loss[0]}, acc:{(acc/BATCH_SIZE)*100}%, tot_acc:{(tot_acc/((i+1)*BATCH_SIZE))*100}%")
    # Stop after 6 batches to keep this tutorial short
    if i == 5:
        break
Nb samples 64, loss: 0.9795454740524292, acc:6.25%, tot_acc:6.25%
Nb samples 128, loss: 1.3091881275177002, acc:7.8125%, tot_acc:7.03125%
Nb samples 192, loss: 0.5493475794792175, acc:14.0625%, tot_acc:9.375%
Nb samples 256, loss: 0.19139410555362701, acc:12.5%, tot_acc:10.15625%
Nb samples 320, loss: 0.15566213428974152, acc:15.625%, tot_acc:11.25%
Nb samples 384, loss: 0.15013185143470764, acc:15.625%, tot_acc:11.979166666666668%