Utils

touvlo.utils.BGD(X, y, grad, initial_theta, alpha, num_iters, **kwargs)[source]

Performs parameter optimization via Batch Gradient Descent.

Parameters:
  • X (numpy.array) – Features’ dataset plus bias column.
  • y (numpy.array) – Column vector of expected values.
  • grad (Callable) – Routine that computes the partial derivatives given theta.
  • initial_theta (numpy.array) – Initial value for the parameters to be optimized.
  • alpha (float) – Learning rate, or step size, of the optimization.
  • num_iters (int) – Number of times the optimization will be performed.
Returns:

Optimized model parameters.

Return type:

numpy.array
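As a concrete illustration, the loop below sketches what BGD computes, using numpy only. It is not the library source; the `grad` signature `grad(X, y, theta)` and the linear-regression gradient routine are assumptions for the example.

```python
import numpy as np

def bgd(X, y, grad, initial_theta, alpha, num_iters):
    # Batch gradient descent: every iteration evaluates the gradient on the
    # full dataset and steps theta against it, scaled by the learning rate.
    theta = initial_theta.copy()
    for _ in range(num_iters):
        theta = theta - alpha * grad(X, y, theta)
    return theta

def lin_grad(X, y, theta):
    # Hypothetical grad routine: linear-regression partial derivatives.
    m = len(y)
    return (1 / m) * X.T @ (X @ theta - y)

# Bias column plus one feature; targets satisfy y = 1 + 2x exactly.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([[1.0], [3.0], [5.0]])
theta = bgd(X, y, lin_grad, np.zeros((2, 1)), alpha=0.3, num_iters=2000)
```

Because the toy data is exactly linear, the loop recovers the intercept and slope.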

touvlo.utils.MBGD(X, y, grad, initial_theta, alpha, num_iters, b, **kwargs)[source]

Performs parameter optimization via Mini-Batch Gradient Descent.

Parameters:
  • X (numpy.array) – Features’ dataset plus bias column.
  • y (numpy.array) – Column vector of expected values.
  • grad (Callable) – Routine that computes the partial derivatives given theta.
  • initial_theta (numpy.array) – Initial value for the parameters to be optimized.
  • alpha (float) – Learning rate, or step size, of the optimization.
  • num_iters (int) – Number of times the optimization will be performed.
  • b (int) – Number of examples in mini batch.
Returns:

Optimized model parameters.

Return type:

numpy.array
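A numpy-only sketch of the mini-batch variant (again an illustration, not the library source; the `grad(X, y, theta)` signature is assumed): each pass over the data updates theta once per chunk of b examples.

```python
import numpy as np

def mbgd(X, y, grad, initial_theta, alpha, num_iters, b):
    # Mini-batch gradient descent: each iteration sweeps the dataset in
    # chunks of b examples, updating theta after every chunk.
    theta = initial_theta.copy()
    m = len(y)
    for _ in range(num_iters):
        for start in range(0, m, b):
            Xb, yb = X[start:start + b], y[start:start + b]
            theta = theta - alpha * grad(Xb, yb, theta)
    return theta

def lin_grad(X, y, theta):
    # Hypothetical grad routine: linear-regression partial derivatives.
    m = len(y)
    return (1 / m) * X.T @ (X @ theta - y)

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([[1.0], [3.0], [5.0]])  # y = 1 + 2x exactly
theta = mbgd(X, y, lin_grad, np.zeros((2, 1)), alpha=0.3, num_iters=2000, b=2)
```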

touvlo.utils.SGD(X, y, grad, initial_theta, alpha, num_iters, **kwargs)[source]

Performs parameter optimization via Stochastic Gradient Descent.

Parameters:
  • X (numpy.array) – Features’ dataset plus bias column.
  • y (numpy.array) – Column vector of expected values.
  • grad (Callable) – Routine that computes the partial derivatives given theta.
  • initial_theta (numpy.array) – Initial value for the parameters to be optimized.
  • alpha (float) – Learning rate, or step size, of the optimization.
  • num_iters (int) – Number of times the optimization will be performed.
Returns:

Optimized model parameters.

Return type:

numpy.array
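The stochastic variant is the b = 1 extreme: one update per training example. A numpy sketch under the same assumptions as above (illustration only, `grad(X, y, theta)` signature assumed):

```python
import numpy as np

def sgd(X, y, grad, initial_theta, alpha, num_iters):
    # Stochastic gradient descent: theta is updated after every single
    # example, so each epoch performs m updates instead of one.
    theta = initial_theta.copy()
    m = len(y)
    for _ in range(num_iters):
        for i in range(m):
            theta = theta - alpha * grad(X[i:i + 1], y[i:i + 1], theta)
    return theta

def lin_grad(X, y, theta):
    # Hypothetical grad routine: linear-regression partial derivatives.
    m = len(y)
    return (1 / m) * X.T @ (X @ theta - y)

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([[1.0], [3.0], [5.0]])  # y = 1 + 2x exactly
theta = sgd(X, y, lin_grad, np.zeros((2, 1)), alpha=0.2, num_iters=2000)
```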

touvlo.utils.feature_normalize(X)[source]

Performs Z-score normalization on a numeric dataset.

Parameters: X (numpy.array) – Features’ dataset plus bias column.
Returns: A 3-tuple (X_norm, mu, sigma): the normalized features’ dataset, the mean of each feature, and the standard deviation of each feature.
Return type: (numpy.array, numpy.array, numpy.array)
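The transformation itself is the usual z-score; a compact numpy sketch (whether the library uses population or sample standard deviation is not stated here, so np.std's default ddof=0 is an assumption):

```python
import numpy as np

def feature_normalize(X):
    # Z-score: subtract each column's mean, divide by its standard deviation.
    # ddof=0 is assumed; note a constant (bias) column would give sigma = 0,
    # so only true feature columns are passed in this example.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
X_norm, mu, sigma = feature_normalize(X)
```

After normalization each column has mean 0 and standard deviation 1.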
touvlo.utils.g(x)[source]

Applies the sigmoid function to a given value.

Parameters: x (obj) – Input value or object containing value.
Returns: Sigmoid function evaluated at the given value.
Return type: obj
touvlo.utils.g_grad(x)[source]

Calculates the sigmoid gradient at a given value.

Parameters: x (obj) – Input value or object containing value.
Returns: Sigmoid gradient evaluated at the given value.
Return type: obj
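Both helpers correspond to the standard sigmoid and its derivative, g′(x) = g(x)(1 − g(x)); a minimal numpy sketch, not the library source:

```python
import numpy as np

def g(x):
    # Sigmoid: squashes any real value into the interval (0, 1).
    return 1 / (1 + np.exp(-x))

def g_grad(x):
    # Sigmoid derivative, expressed through the sigmoid itself.
    s = g(x)
    return s * (1 - s)

val, grd = g(0.0), g_grad(0.0)  # 0.5 and 0.25 at the origin
```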
touvlo.utils.mean_normlztn(Y, R)[source]

Performs mean normalization on a numeric dataset.

Parameters:
  • Y (numpy.array) – Scores’ dataset.
  • R (numpy.array) – Dataset of 0s and 1s (whether there’s a rating).
Returns:

  • Y_norm - Normalized scores’ dataset (row wise).
  • Y_mean - Column vector of calculated means.

Return type:

  • Y_norm (numpy.array)
  • Y_mean (numpy.array)
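The sketch below illustrates the mean normalization the docstring describes, under the assumption that each row's mean is taken over rated entries only (R == 1) and that unrated entries stay zero; it is not the library source.

```python
import numpy as np

def mean_normlztn(Y, R):
    # Per row: average the rated entries (R == 1) and subtract that mean
    # from them; unrated entries are left at zero.
    Y_mean = np.zeros((Y.shape[0], 1))
    Y_norm = np.zeros(Y.shape)
    for i in range(Y.shape[0]):
        rated = R[i, :] == 1
        Y_mean[i] = Y[i, rated].mean()
        Y_norm[i, rated] = Y[i, rated] - Y_mean[i]
    return Y_norm, Y_mean

Y = np.array([[5.0, 0.0, 4.0], [3.0, 3.0, 0.0]])
R = np.array([[1, 0, 1], [1, 1, 0]])
Y_norm, Y_mean = mean_normlztn(Y, R)
```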

touvlo.utils.numerical_grad(J, theta, err)[source]

Numerically calculates the gradient of a given cost function.

Parameters:
  • J (Callable) – Function handle that computes cost given theta.
  • theta (numpy.array) – Model parameters.
  • err (float) – Distance between the points at which J is evaluated.
Returns:

Computed numeric gradient.

Return type:

numpy.array
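A central-difference sketch of what such a routine computes, under the assumption that each parameter is perturbed by ±err (the library's exact scheme may differ):

```python
import numpy as np

def numerical_grad(J, theta, err):
    # Central differences: perturb each parameter by ±err and take the
    # slope of the secant through the two cost values.
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus.flat[i] += err
        minus.flat[i] -= err
        grad.flat[i] = (J(plus) - J(minus)) / (2 * err)
    return grad

# Sanity check against the analytic gradient of J(t) = sum(t**2), i.e. 2t.
theta = np.array([1.0, -2.0, 0.5])
num = numerical_grad(lambda t: np.sum(t ** 2), theta, err=1e-4)
```

This pattern is the usual gradient check: the numeric result should match an analytic gradient to within roughly err².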

touvlo.utils.relu(Z)[source]

Implements the ReLU function.

Arguments:
  • Z – Output of the linear layer, of any shape.

Returns:
  • A – Post-activation parameter, of the same shape as Z.
  • cache – A python dictionary containing “A”, stored for computing the backward pass efficiently.
touvlo.utils.relu_backward(dA, cache)[source]

Implements the backward propagation for a single ReLU unit.

Arguments:
  • dA – Post-activation gradient, of any shape.
  • cache – ‘Z’, stored for computing backward propagation efficiently.

Returns:
  • dZ – Gradient of the cost with respect to Z.
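A self-contained sketch of the forward/backward pair (here the cache is simply Z, matching what the backward pass expects; the library's actual cache layout may differ):

```python
import numpy as np

def relu(Z):
    # Forward pass: element-wise max(0, Z); Z is kept for the backward pass.
    A = np.maximum(0, Z)
    return A, Z

def relu_backward(dA, cache):
    # Backward pass: the gradient flows through only where Z was positive.
    Z = cache
    dZ = dA.copy()
    dZ[Z <= 0] = 0
    return dZ

Z = np.array([[-1.0, 2.0], [0.0, -3.0]])
A, cache = relu(Z)
dZ = relu_backward(np.ones_like(Z), cache)
```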

touvlo.utils.sigmoid(Z)[source]

Implements the sigmoid activation in numpy.

Arguments:
  • Z – numpy array of any shape.

Returns:
  • A – Output of sigmoid(Z), same shape as Z.
  • cache – Z is returned as well, useful during backpropagation.

touvlo.utils.sigmoid_backward(dA, cache)[source]

Implements the backward propagation for a single sigmoid unit.

Arguments:
  • dA – Post-activation gradient, of any shape.
  • cache – ‘Z’, stored for computing backward propagation efficiently.

Returns:
  • dZ – Gradient of the cost with respect to Z.
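The same forward/backward pattern for the sigmoid, sketched with numpy (cache again assumed to be Z; not the library source):

```python
import numpy as np

def sigmoid(Z):
    # Forward pass: element-wise sigmoid; Z is returned as the cache.
    A = 1 / (1 + np.exp(-Z))
    return A, Z

def sigmoid_backward(dA, cache):
    # Backward pass: dZ = dA * s * (1 - s), with s the sigmoid of the cache.
    s = 1 / (1 + np.exp(-cache))
    return dA * s * (1 - s)

Z = np.array([[0.0, 1.0]])
A, cache = sigmoid(Z)
dZ = sigmoid_backward(np.ones_like(Z), cache)
```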