Bayesian losses module¶
This tutorial demonstrates the functionality of the Bayesian losses module. It covers basic operations such as checking for trainable parameters and running a forward pass through the loss functions.
Libraries¶
To get started, you'll need to import some essential libraries. The specific libraries depend on the backend you've chosen, such as PyTorch, TensorFlow, or JAX. Additionally, you'll need to import NumPy.
import torch
import numpy as np
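For example, with the TensorFlow backend you would import TensorFlow in place of PyTorch (a sketch, assuming TensorFlow is installed):

import tensorflow as tf
import numpy as np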
Functions¶
The check_parameters function verifies the existence of trainable parameters.
def check_parameters(torch_module):
    print("Check the existence of trainable parameters in the classes...")
    torch_list_parameters = list(torch_module.parameters())
    assert (
        len(torch_list_parameters) != 0
    ), "No parameters available in TorchTestModule"
    print("Test passed!", "\n\n")
The check_forward_losses function runs a forward pass through the module and both loss functions and prints the resulting outputs.
def check_forward_losses(torch_module, torch_kl_divergence, torch_elbo_loss):
    print("Check the forward propagation of the loss functions...")

    # Input data
    input_data = np.random.randn(1, 10).astype(np.float32)
    y_true = np.random.randn(1, 10).astype(np.float32)
    y_pred = np.random.randn(1, 10).astype(np.float32)

    # PyTorch forward pass
    torch_input = torch.from_numpy(input_data)
    torch_output = torch_module(torch_input)
    torch_kl_divergence_output = torch_kl_divergence(torch_module)
    torch_elbo_loss_output = torch_elbo_loss(
        torch.from_numpy(y_true), torch.from_numpy(y_pred), torch_module
    )

    # Print the outputs for inspection
    print("Torch output:", torch_output)
    print("Torch KL divergence output:", torch_kl_divergence_output)
    print("Torch ELBO loss output:", torch_elbo_loss_output)
    print("Test passed!", "\n\n")
The run_all_tests function executes all test functions in sequence to validate the module's functionality.
def run_all_tests(torch_module, torch_kl_divergence, torch_elbo_loss):
    check_parameters(torch_module)
    check_forward_losses(torch_module, torch_kl_divergence, torch_elbo_loss)
Illia¶
Before importing the Illia library, which provides the Bayesian module implementations, we select the backend through the ILLIA_BACKEND environment variable. Note that backend selection requires a kernel restart and cannot be changed dynamically.
import sys
import os

# Make the local illia repository importable (adjust the path to your setup)
sys.path.append("/home/dani/Repositorios/illia/")

# Select the backend before importing any illia modules
os.environ["ILLIA_BACKEND"] = "torch"

from illia.nn import BayesianModule, Linear
from illia.losses import KLDivergenceLoss, ELBOLoss
Class definitions¶
Define a test class implementing a simple linear layer and a kl_cost method that returns a fixed KL divergence value. This class will be used in the tests.
class TorchTestModule(BayesianModule):
    def __init__(self):
        super().__init__()
        self.linear = Linear(10, 5)

    def forward(self, x):
        return self.linear(x)

    def kl_cost(self):
        # Return a fixed KL value and the number of contributing layers
        return torch.tensor(1.0), 1
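Before wiring the class into the loss functions, it can help to sanity-check the custom kl_cost method on its own; a minimal sketch:

# Quick sanity check of the custom kl_cost method
module = TorchTestModule()
kl_value, num_layers = module.kl_cost()
print(kl_value, num_layers)  # tensor(1.) 1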
# PyTorch
torch_kl_divergence = KLDivergenceLoss()
torch_elbo_loss = ELBOLoss(loss_function=torch.nn.MSELoss())
torch_module = TorchTestModule()
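Conceptually, an ELBO-style loss combines a data-fit term (here MSE) with the model's KL cost. The sketch below illustrates that general recipe with plain PyTorch; it is an assumption about the standard formulation, not Illia's exact implementation, and the KL weighting may differ:

# Conceptual sketch (assumed recipe, not Illia's exact formula):
# ELBO-style loss = reconstruction loss + weighted KL penalty
y_true = torch.randn(1, 10)
y_pred = torch.randn(1, 10)
reconstruction = torch.nn.MSELoss()(y_pred, y_true)
kl_value, num_layers = torch_module.kl_cost()
manual_elbo = reconstruction + kl_value / num_layers  # weighting may differ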
Finally, run all tests to ensure that the module's functionality works as expected with the selected backend.
run_all_tests(torch_module, torch_kl_divergence, torch_elbo_loss)
Check the existence of trainable parameters in the classes...
Test passed!

Check the forward propagation of the loss functions...
Torch output: tensor([[-0.3599, 0.1619, -0.0092, 0.4080, 0.4452]], grad_fn=<AddmmBackward0>)
Torch KL divergence output: tensor(4.7174, grad_fn=<MulBackward0>)
Torch ELBO loss output: tensor(1.4076, grad_fn=<DivBackward0>)
Test passed!