2. Neural Network
2.1 linear
This module contains the code for the Linear Bayesian layer.
2.1.1 Linear(input_size, output_size, weights_distribution=None, bias_distribution=None, *, use_bias=True, precision=None, dot_general=lax.dot_general)
This class is the Bayesian implementation of the Linear layer. Its constructor takes the following parameters.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`input_size` | `int` | Size of the input features. | *required* |
`output_size` | `int` | Size of the output features. | *required* |
`weights_distribution` | `Optional[GaussianDistribution]` | Prior distribution of the weights. | `None` |
`bias_distribution` | `Optional[GaussianDistribution]` | Prior distribution of the bias. | `None` |
`use_bias` | `bool` | Whether to include a bias term in the layer. | `True` |
`precision` | `PrecisionLike` | Precision used in dot product operations. | `None` |
`dot_general` | `DotGeneralT` | Function for computing generalized dot products. | `lax.dot_general` |
Source code in illia/nn/jax/linear.py
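A minimal usage sketch is shown below. The import path is an assumption inferred from the source path `illia/nn/jax/linear.py`; depending on illia's design, constructing or calling the layer may additionally require an explicit random key.

```python
import jax.numpy as jnp

from illia.nn.jax import Linear  # assumed import path, inferred from the source path above

# Hypothetical sketch: a Bayesian linear layer mapping 16 input features
# to 4 output features, keeping the default Gaussian priors
# (weights_distribution=None, bias_distribution=None).
layer = Linear(input_size=16, output_size=4)

x = jnp.ones((8, 16))  # batch of 8 examples, 16 features each
y = layer(x)           # forward pass; expected output shape: (8, 4)
```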
2.1.1.1 __call__(inputs)
This method performs the forward pass of the layer.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`inputs` | `Array` | Inputs of the model. Dimensions: [*, input size]. | *required* |
Returns:

Type | Description |
---|---|
`Array` | Output tensor. Dimensions: [*, output size]. |
Source code in illia/nn/jax/linear.py
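Because the input may carry any leading dimensions (`[*, input size]`), the layer can be applied to unbatched or arbitrarily batched inputs alike. A sketch, reusing the hypothetical `layer` constructed above:

```python
import jax.numpy as jnp

# Unbatched input: shape [input_size]
x_single = jnp.ones((16,))
y_single = layer(x_single)  # expected shape: (4,)

# Arbitrary leading dimensions: shape [32, 8, input_size]
x_batch = jnp.ones((32, 8, 16))
y_batch = layer(x_batch)    # expected shape: (32, 8, 4)
```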
2.1.1.2 kl_cost()
Computes the Kullback-Leibler (KL) divergence cost for the layer's weights and bias.
Returns:

Type | Description |
---|---|
`Array` | KL divergence cost. |
`int` | Total number of parameters. |
Source code in illia/nn/jax/linear.py
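The two return values suggest pairing the KL term with a task loss during training, for example by normalizing it by the parameter count. A hypothetical sketch; the weighting scheme and `task_loss` are assumptions, not part of illia's documented API:

```python
# kl is an Array holding the layer's KL divergence; num_params is an int.
kl, num_params = layer.kl_cost()

# Assumed ELBO-style objective: add a scaled KL penalty to a task loss.
# Normalizing by the parameter count is a common heuristic, not illia's API.
loss = task_loss + kl / num_params
```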