Classification Neural Network
Single Parameter
touvlo.nnet.sgl_parm.back_propagation(y, theta, a, z, num_labels, n_hidden_layers=1)
Applies back propagation to minimize the model's loss.
Parameters: - y (numpy.array) – Column vector of expected values.
- theta (numpy.array(numpy.array)) – array of model’s weight matrices by layer.
- a (numpy.array(numpy.array)) – array of activation matrices by layer.
- z (numpy.array(numpy.array)) – array of parameters prior to sigmoid by layer.
- num_labels (int) – Number of classes in multiclass classification.
- n_hidden_layers (int) – Number of hidden layers in network.
Returns: array of matrices of ‘error values’ by layer.
Return type: numpy.array(numpy.array)
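The "error values" returned here follow the classic sigmoid-network recurrence: the output-layer error is the difference between activation and target, and each earlier layer's error is back-propagated through its weight matrix and scaled by the sigmoid gradient. A minimal sketch for a single hidden layer (not touvlo's actual code; the function name, shapes, and the leading-bias-column convention are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def back_prop_sketch(y, theta2, a2, z1):
    """Error terms for a 1-hidden-layer sigmoid network.

    y, a2: (m, num_labels); theta2: (num_labels, hidden + 1) with a
    bias column first; z1: (m, hidden) pre-activations of the hidden layer.
    """
    delta2 = a2 - y  # output-layer error
    # Drop the bias column of theta2, then scale by the sigmoid gradient.
    delta1 = (delta2 @ theta2[:, 1:]) * (sigmoid(z1) * (1 - sigmoid(z1)))
    return delta1, delta2
```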
touvlo.nnet.sgl_parm.cost_function(X, y, theta, _lambda, num_labels, n_hidden_layers=1)
Computes the cost function J for the Neural Network.
Parameters: - X (numpy.array) – Features’ dataset.
- y (numpy.array) – Column vector of expected values.
- theta (numpy.array) – Column vector of model’s parameters.
- _lambda (float) – The regularization hyperparameter.
- num_labels (int) – Number of classes in multiclass classification.
- n_hidden_layers (int) – Number of hidden layers in network.
Returns: Computed cost.
Return type: float
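The cost is the cross-entropy over all labels plus an L2 penalty on the non-bias weights. A hedged sketch of that formula (illustrative only; `nn_cost_sketch`, the `(m, num_labels)` shapes, and the bias-column-first layout are assumptions, not touvlo's implementation):

```python
import numpy as np

def nn_cost_sketch(h, Y, thetas, _lambda):
    """Regularized cross-entropy cost; h and Y have shape (m, num_labels)."""
    m = Y.shape[0]
    J = -np.sum(Y * np.log(h) + (1 - Y) * np.log(1 - h)) / m
    # Regularize every weight except the bias column of each matrix.
    reg = sum(np.sum(t[:, 1:] ** 2) for t in thetas)
    return J + (_lambda / (2 * m)) * reg
```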
touvlo.nnet.sgl_parm.feed_forward(X, theta, n_hidden_layers=1)
Applies forward propagation to calculate the model's hypothesis.
Parameters: - X (numpy.array) – Features’ dataset.
- theta (numpy.array) – Column vector of model’s parameters.
- n_hidden_layers (int) – Number of hidden layers in network.
Returns: A 2-tuple consisting of an array of parameters prior to activation by layer and an array of activation matrices by layer.
Return type: (numpy.array(numpy.array), numpy.array(numpy.array))
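Forward propagation walks the layers once, recording each layer's pre-activation `z` and activation `a` for later use by back propagation. A minimal sketch under assumed conventions (bias unit prepended to each non-output layer, sigmoid everywhere; not the library's actual code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward_sketch(X, thetas):
    """Returns (z_list, a_list); a[0] is the bias-augmented input."""
    m = X.shape[0]
    a = [np.hstack((np.ones((m, 1)), X))]
    z = [None]  # no pre-activation for the input layer
    for i, theta in enumerate(thetas):
        z.append(a[-1] @ theta.T)
        act = sigmoid(z[-1])
        if i < len(thetas) - 1:  # add the bias unit except at the output
            act = np.hstack((np.ones((m, 1)), act))
        a.append(act)
    return z, a
```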
touvlo.nnet.sgl_parm.grad(X, y, nn_params, _lambda, input_layer_size, hidden_layer_size, num_labels, n_hidden_layers=1)
Calculates the gradient of the neural network's parameters.
Parameters: - X (numpy.array) – Features’ dataset.
- y (numpy.array) – Column vector of expected values.
- nn_params (numpy.array) – Column vector of model’s parameters.
- _lambda (float) – The regularization hyperparameter.
- input_layer_size (int) – Number of units in the input layer.
- hidden_layer_size (int) – Number of units in a hidden layer.
- num_labels (int) – Number of classes in multiclass classification.
- n_hidden_layers (int) – Number of hidden layers in network.
Returns: array of gradient values by weight matrix.
Return type: numpy.array(numpy.array)
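A gradient function like this is typically validated by comparing it against a central-difference numerical gradient on a tiny network. A generic sketch of that check (the helper name and interface are illustrative, not part of touvlo's API):

```python
import numpy as np

def numerical_grad(cost, nn_params, eps=1e-4):
    """Central-difference gradient of cost at nn_params (a flat vector)."""
    grad = np.zeros_like(nn_params)
    for i in range(nn_params.size):
        step = np.zeros_like(nn_params)
        step[i] = eps
        grad[i] = (cost(nn_params + step) - cost(nn_params - step)) / (2 * eps)
    return grad
```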
touvlo.nnet.sgl_parm.h(X, theta, n_hidden_layers=1)
Classification Neural Network hypothesis.
Parameters: - X (numpy.array) – Features’ dataset.
- theta (numpy.array) – Column vector of model’s parameters.
- n_hidden_layers (int) – Number of hidden layers in network.
Returns: The probability that each entry belongs to class 1.
Return type: numpy.array
touvlo.nnet.sgl_parm.init_nn_weights(input_layer_size, hidden_layer_size, num_labels, n_hidden_layers=1)
Initializes the weight matrices of a network with random values.
Parameters: - input_layer_size (int) – Number of units in the input layer.
- hidden_layer_size (int) – Number of units in a hidden layer.
- num_labels (int) – Number of classes in multiclass classification.
- n_hidden_layers (int) – Number of hidden layers in network.
Returns: array of weight matrices of random values.
Return type: numpy.array(numpy.array)
touvlo.nnet.sgl_parm.rand_init_weights(L_in, L_out)
Initializes a weight matrix with random values.
Parameters: - L_in (int) – Number of units feeding into this weight matrix.
- L_out (int) – Number of units this weight matrix feeds into.
Returns: Matrix of random values of conforming dimensions.
Return type: numpy.array
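Random initialization exists to break symmetry: if all weights started equal, every hidden unit would compute the same function. A common heuristic draws values uniformly from a small interval around zero, with an extra column for the bias unit. A sketch under those assumptions (the epsilon formula is a widely used heuristic, not necessarily touvlo's exact choice):

```python
import numpy as np

def rand_init_sketch(L_in, L_out):
    """Uniform values in [-eps, eps]; +1 column for the bias unit."""
    eps = np.sqrt(6) / np.sqrt(L_in + L_out)  # common symmetry-breaking scale
    return np.random.rand(L_out, 1 + L_in) * 2 * eps - eps
```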
touvlo.nnet.sgl_parm.unravel_params(nn_params, input_layer_size, hidden_layer_size, num_labels, n_hidden_layers=1)
Unravels a flattened array into a list of weight matrices.
Parameters: - nn_params (numpy.array) – Row vector of model’s parameters.
- input_layer_size (int) – Number of units in the input layer.
- hidden_layer_size (int) – Number of units in a hidden layer.
- num_labels (int) – Number of classes in multiclass classification.
- n_hidden_layers (int) – Number of hidden layers in network.
Returns: array with model’s weight matrices.
Return type: numpy.array(numpy.array)
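Optimizers such as scipy's work on a single flat parameter vector, so the weight matrices must be flattened before optimization and reshaped back afterwards. A sketch of the reshaping for the one-hidden-layer case (the `+ 1` terms account for bias columns; names and layout are assumptions, not touvlo's code):

```python
import numpy as np

def unravel_sketch(nn_params, input_layer_size, hidden_layer_size, num_labels):
    """Split a flat vector into (theta1, theta2) for a 1-hidden-layer net."""
    n1 = hidden_layer_size * (input_layer_size + 1)
    theta1 = nn_params[:n1].reshape(hidden_layer_size, input_layer_size + 1)
    theta2 = nn_params[n1:].reshape(num_labels, hidden_layer_size + 1)
    return theta1, theta2
```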
Computation Graph
touvlo.nnet.cmpt_grf.L_model_backward(AL, Y, caches)
Implements the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group.
Parameters: - AL (numpy.array) – probability vector, output of the forward propagation (L_model_forward()).
- Y (numpy.array) – true "label" vector (containing 0 if non-cat, 1 if cat).
- caches (list(tuple)) – list of caches: every cache of linear_activation_forward() with "relu" (caches[l], for l in range(L-1), i.e. l = 0…L-2) and the cache of linear_activation_forward() with "sigmoid" (caches[L-1]).
Returns: A dictionary with the gradients:
- grads["dA" + str(l)] = …
- grads["dW" + str(l)] = …
- grads["db" + str(l)] = …
Return type: dict
touvlo.nnet.cmpt_grf.L_model_forward(X, parameters)
Implements forward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID computation.
Parameters: - X (numpy.array) – data of shape (input size, number of examples).
- parameters (dict) – output of initialize_parameters_deep().
Returns: A 2-tuple consisting of the last post-activation value and a list of caches containing every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1).
Return type: (numpy.array, list(tuple))
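The forward pass applies L-1 LINEAR->RELU steps followed by one LINEAR->SIGMOID step, caching each layer's inputs and pre-activation for the backward pass. A hedged sketch of that loop (illustrative, assuming `parameters` holds `W1..WL` and `b1..bL`; not the library's actual code):

```python
import numpy as np

def L_model_forward_sketch(X, parameters):
    """[LINEAR->RELU]*(L-1) -> LINEAR->SIGMOID; returns (AL, caches)."""
    caches = []
    A = X
    L = len(parameters) // 2  # parameters holds W1..WL and b1..bL
    for l in range(1, L):
        W, b = parameters["W" + str(l)], parameters["b" + str(l)]
        Z = W @ A + b
        caches.append(((A, W, b), Z))  # (linear_cache, activation_cache)
        A = np.maximum(0, Z)           # ReLU
    W, b = parameters["W" + str(L)], parameters["b" + str(L)]
    Z = W @ A + b
    caches.append(((A, W, b), Z))
    AL = 1.0 / (1.0 + np.exp(-Z))      # sigmoid output layer
    return AL, caches
```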
touvlo.nnet.cmpt_grf.compute_cost(AL, Y)
Implements the cost function.
Parameters: - AL (numpy.array) – probability vector corresponding to your label predictions, shape (1, number of examples)
- Y (numpy.array) – true “label” vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns: The cross-entropy cost.
Return type: float
touvlo.nnet.cmpt_grf.init_params(layer_dims, _seed=1)
Creates numpy arrays to represent the weight matrices and intercepts of the Neural Network.
Parameters: - layer_dims (list) – the dimensions of each layer in the network.
- _seed (int) – seed for the random number generator.
Returns: Single dictionary containing your parameters "W1", "b1", …, "WL", "bL" where Wl is a weight matrix of shape (layer_dims[l], layer_dims[l-1]) and bl is the bias vector of shape (layer_dims[l], 1).
Return type: dict
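Per the shapes above, each `Wl` maps layer l-1 to layer l and each `bl` is a column vector. A sketch of one plausible initialization (small random weights, zero biases; the 0.01 scale is a common convention, not necessarily touvlo's):

```python
import numpy as np

def init_params_sketch(layer_dims, _seed=1):
    """Dict of W1..WL and b1..bL with small random weights, zero biases."""
    rng = np.random.RandomState(_seed)  # reproducible draws
    params = {}
    for l in range(1, len(layer_dims)):
        params["W" + str(l)] = rng.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
        params["b" + str(l)] = np.zeros((layer_dims[l], 1))
    return params
```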
touvlo.nnet.cmpt_grf.linear_activation_backward(dA, cache, activation)
Implements the backward propagation for the LINEAR -> ACTIVATION layer.
Parameters: - dA (numpy.array) – post-activation gradient for the current layer l.
- cache (tuple) – ("linear_cache", "activation_cache") stored during the forward pass for the current layer.
- activation (str) – the activation used in this layer: "sigmoid" or "relu".
Returns: A 3-tuple consisting of the gradient of the cost with respect to the activation of the previous layer l-1 (same shape as A_prev), the gradient of the cost with respect to W of the current layer l (same shape as W), and the gradient of the cost with respect to b of the current layer l (same shape as b).
Return type: (numpy.array, numpy.array, numpy.array)
touvlo.nnet.cmpt_grf.linear_activation_forward(A_prev, W, b, activation)
Implements the forward propagation for the LINEAR->ACTIVATION layer.
Parameters: - A_prev (numpy.array) – activations from the previous layer (or input data): (size of previous layer, number of examples).
- W (numpy.array) – weights matrix: numpy array of shape (size of current layer, size of previous layer).
- b (numpy.array) – bias vector, numpy array of shape (size of the current layer, 1).
- activation (str) – the activation to be used in this layer: "sigmoid" or "relu".
Returns: A 2-tuple consisting of the output of the activation function (the post-activation value) and a python tuple containing "linear_cache" and "activation_cache", stored for computing the backward pass efficiently.
Return type: (numpy.array, tuple)
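Concretely, this step is the linear transform followed by the chosen nonlinearity, with both intermediate values cached. A hedged sketch (not the library's code; the cache layout mirrors the description above):

```python
import numpy as np

def linear_activation_forward_sketch(A_prev, W, b, activation):
    """LINEAR -> ACTIVATION step; returns (A, (linear_cache, activation_cache))."""
    Z = W @ A_prev + b               # linear part
    linear_cache = (A_prev, W, b)
    if activation == "relu":
        A = np.maximum(0, Z)
    else:                            # "sigmoid"
        A = 1.0 / (1.0 + np.exp(-Z))
    activation_cache = Z
    return A, (linear_cache, activation_cache)
```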
touvlo.nnet.cmpt_grf.linear_backward(dZ, cache)
Implements the linear portion of backward propagation for a single layer (layer l).
Parameters: - dZ (numpy.array) – Gradient of the cost with respect to the linear output (of current layer l).
- cache (tuple) – values (A_prev, W, b) coming from the forward propagation in the current layer.
Returns: A 3-tuple consisting of the gradient of the cost with respect to the activation of the previous layer l-1 (same shape as A_prev), the gradient of the cost with respect to W of the current layer l (same shape as W), and the gradient of the cost with respect to b of the current layer l (same shape as b).
Return type: (numpy.array, numpy.array, numpy.array)
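For `Z = W A_prev + b`, the three gradients follow directly from the chain rule, averaged over the m examples. A minimal sketch of those formulas (illustrative, not touvlo's implementation):

```python
import numpy as np

def linear_backward_sketch(dZ, cache):
    """Gradients of the linear step Z = W @ A_prev + b."""
    A_prev, W, b = cache
    m = A_prev.shape[1]
    dW = dZ @ A_prev.T / m                           # same shape as W
    db = np.sum(dZ, axis=1, keepdims=True) / m       # same shape as b
    dA_prev = W.T @ dZ                               # same shape as A_prev
    return dA_prev, dW, db
```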
touvlo.nnet.cmpt_grf.linear_forward(A, W, b)
Implements the linear part of a layer's forward propagation.
Parameters: - A (numpy.array) – activations from previous layer (or input data): (size of previous layer, number of examples).
- W (numpy.array) – weights matrix: numpy array of shape (size of current layer, size of previous layer).
- b (numpy.array) – bias vector, numpy array of shape (size of the current layer, 1).
Returns: A 2-tuple consisting of the input of the activation function (the pre-activation parameter) and a python tuple containing "A", "W" and "b", stored for computing the backward pass efficiently.
Return type: (numpy.array, tuple)
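This is the simplest building block of the graph: a matrix product plus a broadcast bias, with the inputs cached for the backward pass. A one-function sketch (illustrative; the cache contents mirror the description above):

```python
import numpy as np

def linear_forward_sketch(A, W, b):
    """Pre-activation Z = W @ A + b, plus the cache needed by backprop."""
    Z = W @ A + b        # b broadcasts across the example columns
    return Z, (A, W, b)
```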