Bayesian base module¶
This tutorial explains the main features of a Bayesian base module. You'll learn how to perform essential tasks, including:
- Freezing and unfreezing layers: controlling which parts of the model are trainable.
- Calculating the KL divergence cost: measuring how much one probability distribution differs from a reference distribution (see the formula below).
- Performing a forward pass: processing input data through the model to get predictions.
This guide is designed to help you understand these operations using Illia.
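As background, the KL divergence cost reported by a Bayesian module measures how far the learned weight posterior q(w) has moved from the prior p(w):

$$\mathrm{KL}(q \,\|\, p) = \mathbb{E}_{q(w)}\big[\log q(w) - \log p(w)\big]$$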
Libraries¶
To get started, you'll need to import some essential libraries. Which libraries you need depends on the backend you've chosen, such as PyTorch, TensorFlow, or JAX. You'll also need NumPy.
import torch
import numpy as np
Functions¶
The test_freeze_unfreeze function confirms that layers can be correctly frozen and unfrozen.
def test_freeze_unfreeze():
    print("Testing freeze and unfreeze...")

    # Test PyTorch module
    assert not torch_module.frozen, "PyTorch module should not be frozen initially"
    torch_module.freeze()
    assert torch_module.frozen, "PyTorch module should be frozen after freeze()"
    torch_module.unfreeze()
    assert (
        not torch_module.frozen
    ), "PyTorch module should not be frozen after unfreeze()"

    print("Freeze and unfreeze test passed!", "\n\n")
The test_kl_cost function verifies the calculation of the KL divergence cost; across backends it should yield consistent results, though only the PyTorch backend is exercised here.
def test_kl_cost():
    print("Testing KL cost...")

    torch_kl, torch_n = torch_module.kl_cost()
    print(f"\nPyTorch : {torch_kl.item()}, {torch_n}")

    print("KL cost test passed!", "\n\n")
The test_forward_pass function checks that the forward pass runs end to end and that, given the same input data, the different framework models produce similar outputs (again, only PyTorch is exercised here).
def test_forward_pass():
    print("Testing forward pass...")

    # Input data
    input_data = np.random.randn(1, 10).astype(np.float32)

    # PyTorch forward pass
    torch_input = torch.from_numpy(input_data)
    torch_output = torch_module(torch_input)
    print("PyTorch output:", torch_output.detach().numpy())

    print("Forward pass test passed!", "\n\n")
The run_all_tests function executes all test functions in sequence to validate the module's functionality.
def run_all_tests():
    test_freeze_unfreeze()
    test_kl_cost()
    test_forward_pass()
Random seeds¶
Set random seeds for reproducibility across different runs. This ensures that the results are consistent each time the code is executed.
np.random.seed(0)
torch.manual_seed(0)
<torch._C.Generator at 0x7f97914e9470>
Illia¶
Set the backend through the ILLIA_BACKEND environment variable before importing Illia, which provides the Bayesian module implementations. Note that the backend is fixed at import time: changing it requires a kernel restart and cannot be done dynamically.
import os
import sys

# Make a local Illia checkout importable (adjust the path to your setup)
sys.path.append("/home/dani/Repositorios/illia/")

# Select the backend before importing Illia
os.environ["ILLIA_BACKEND"] = "torch"
import illia
from illia.nn import BayesianModule

# Display the library version and the active backend
print(f"Version: {illia.version()}, Backend: {illia.get_backend()}")
Version: 0.0.1, Backend: torch
Class definitions¶
Create a test class for the chosen framework. It implements a simple linear layer and a kl_cost method that returns a fixed KL divergence, which is enough to exercise the test functions defined above.
class TorchTestModule(BayesianModule):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(10, 5)

    def forward(self, x):
        return self.linear(x)

    def kl_cost(self):
        return torch.tensor(1.0), 1


# PyTorch
torch_module = TorchTestModule()
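The kl_cost above returns a constant, which is enough for testing. In a real Bayesian layer, the KL term is computed from learned posterior parameters. The following is a minimal illustrative sketch, assuming a Gaussian weight posterior with a standard normal prior and using the closed-form Gaussian KL; it is not Illia's implementation.

class GaussianLinearSketch(BayesianModule):
    def __init__(self, in_features, out_features):
        super().__init__()
        # Posterior parameters: mean and log standard deviation per weight
        self.mu = torch.nn.Parameter(torch.zeros(out_features, in_features))
        self.log_sigma = torch.nn.Parameter(
            torch.full((out_features, in_features), -3.0)
        )

    def forward(self, x):
        # Reparameterization trick: sample weights from N(mu, sigma^2)
        sigma = self.log_sigma.exp()
        weights = self.mu + sigma * torch.randn_like(sigma)
        return torch.nn.functional.linear(x, weights)

    def kl_cost(self):
        # Closed-form KL between N(mu, sigma^2) and a standard normal prior
        sigma = self.log_sigma.exp()
        kl = 0.5 * (sigma**2 + self.mu**2 - 1.0 - 2.0 * self.log_sigma).sum()
        return kl, self.mu.numel()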
Finally, run all tests to confirm that the module's functionality works as expected with the selected backend.
run_all_tests()
Testing freeze and unfreeze...
Freeze and unfreeze test passed!

Testing KL cost...

PyTorch : 1.0, 1
KL cost test passed!

Testing forward pass...
PyTorch output: [[-0.8272763 -0.8433395 1.076015 -0.88353664 -0.528913 ]]
Forward pass test passed!