pydbm.optimization package

Submodules

pydbm.optimization.batch_norm module

class pydbm.optimization.batch_norm.BatchNorm

Bases: object

Batch normalization as a regularization technique.

References

  • Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
back_propagation

Back propagation.

Parameters: delta_arr – np.ndarray of delta.
Returns: np.ndarray of delta.
beta_arr

getter

delta_beta_arr

getter

delta_gamma_arr

getter

forward_propagation

Forward propagation.

Parameters: observed_arr – np.ndarray of observed data points.
Returns: np.ndarray of normalized data.
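
A minimal NumPy sketch of the computation that forward_propagation performs, assuming the standard batch-normalization formulation from Ioffe & Szegedy (2015). The helper name batch_norm_forward and the eps argument are illustrative, not part of the library; gamma_arr and beta_arr correspond to the scale and shift parameters exposed by the properties documented below.

    import numpy as np

    def batch_norm_forward(observed_arr, gamma_arr, beta_arr, eps=1e-08):
        # Normalize each feature over the mini-batch axis, then rescale
        # by gamma_arr and shift by beta_arr.
        mean_arr = observed_arr.mean(axis=0)
        var_arr = observed_arr.var(axis=0)
        normalized_arr = (observed_arr - mean_arr) / np.sqrt(var_arr + eps)
        return gamma_arr * normalized_arr + beta_arr

    observed_arr = np.random.normal(size=(32, 10))
    gamma_arr = np.ones(10)    # scale parameters (cf. gamma_arr)
    beta_arr = np.zeros(10)    # shift parameters (cf. beta_arr)
    result_arr = batch_norm_forward(observed_arr, gamma_arr, beta_arr)
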
gamma_arr

getter

get_beta_arr

getter

get_delta_beta_arr

getter

get_delta_gamma_arr

getter

get_gamma_arr

getter

get_test_mode

getter

set_beta_arr

setter

set_delta_beta_arr

setter

set_delta_gamma_arr

setter

set_gamma_arr

setter

set_test_mode

setter

test_mode

getter

pydbm.optimization.opt_params module

class pydbm.optimization.opt_params.OptParams

Bases: object

Abstract class of optimization functions.

References

  • Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Pascanu, R., Mikolov, T., & Bengio, Y. (2012). Understanding the exploding gradient problem. CoRR, abs/1211.5063, 2.
  • Pascanu, R., Mikolov, T., & Bengio, Y. (2013, February). On the difficulty of training recurrent neural networks. In International conference on machine learning (pp. 1310-1318).
  • Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 1929-1958.
  • Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.
constrain_weight

Regularize the weight matrix by repeatedly multiplying it by 0.9 until $\sum_{j=0}^{n} w_{ji}^2 <$ weight_limit holds for each unit $i$.

Parameters: weight_arr – weight matrix.
Returns: constrained weight matrix.
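
A minimal sketch of the constraint described above, assuming the per-unit sums of squared weights are taken column-wise; constrain_weight_sketch and the default weight_limit value are illustrative assumptions, not the library implementation.

    import numpy as np

    def constrain_weight_sketch(weight_arr, weight_limit=1.0):
        # Repeatedly shrink the weight matrix by a factor of 0.9 until
        # every column satisfies sum_j w_ji^2 < weight_limit.
        while (np.square(weight_arr).sum(axis=0) >= weight_limit).any():
            weight_arr = weight_arr * 0.9
        return weight_arr

    weight_arr = constrain_weight_sketch(np.random.normal(size=(100, 50)))
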
de_dropout

Dropout applied to delta during back propagation.

Parameters: activity_arr – The state of delta.
Returns: The state of delta.
dropout

Dropout: randomly nullify units according to dropout_rate.

Parameters: activity_arr – The state of units.
Returns: The state of units.
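
A minimal sketch of the dropout pair, assuming the binary mask drawn in the forward pass is reused when propagating delta backward; the explicit mask handling is illustrative, not the library's internal state management.

    import numpy as np

    def dropout_sketch(activity_arr, dropout_rate):
        # Randomly nullify units with probability dropout_rate.
        mask_arr = (np.random.uniform(size=activity_arr.shape) >= dropout_rate).astype(float)
        return activity_arr * mask_arr, mask_arr

    def de_dropout_sketch(delta_arr, mask_arr):
        # Back propagation counterpart: nullify the deltas of dropped units.
        return delta_arr * mask_arr

    activity_arr = np.random.normal(size=(32, 10))
    dropped_arr, mask_arr = dropout_sketch(activity_arr, dropout_rate=0.5)
    delta_arr = de_dropout_sketch(np.random.normal(size=(32, 10)), mask_arr)
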
dropout_rate

getter

get_dropout_rate

getter

get_grad_clip_threshold

getter for the threshold of the gradient clipping.

get_inferencing_mode

getter

get_weight_limit

getter

grad_clip_threshold

getter for the threshold of the gradient clipping.
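A minimal sketch of norm-based gradient clipping in the spirit of Pascanu et al. (2013), assuming the threshold is compared against the L2 norm of a gradient array; clip_gradient_sketch is illustrative, not the library's internal use of this threshold.

    import numpy as np

    def clip_gradient_sketch(grads_arr, grad_clip_threshold):
        # Rescale the gradient so its L2 norm does not exceed the threshold.
        norm = np.linalg.norm(grads_arr)
        if norm > grad_clip_threshold:
            grads_arr = grads_arr * (grad_clip_threshold / norm)
        return grads_arr

    grads_arr = clip_gradient_sketch(np.random.normal(size=(10, 5)), grad_clip_threshold=1.0)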

inferencing_mode

getter

optimize

Return the result of this concrete optimization function.

Parameters:
  • params_dict – list of parameters.
  • grads_arr – np.ndarray of gradients.
  • learning_rate – Learning rate.
Returns: list of optimized parameters.
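
A minimal sketch of what a concrete subclass's optimize might do, using plain stochastic gradient descent for illustration; sgd_optimize_sketch and its argument names echo the parameter list above but are assumptions, not one of the library's concrete optimizers (e.g. Adam).

    import numpy as np

    def sgd_optimize_sketch(params_list, grads_list, learning_rate):
        # Subtract the scaled gradient from each parameter array and
        # return the list of updated parameters.
        return [
            params_arr - learning_rate * grads_arr
            for params_arr, grads_arr in zip(params_list, grads_list)
        ]

    params_list = [np.random.normal(size=(10, 5)), np.zeros(5)]
    grads_list = [np.random.normal(size=(10, 5)), np.random.normal(size=5)]
    params_list = sgd_optimize_sketch(params_list, grads_list, learning_rate=1e-3)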

set_dropout_rate

setter

set_grad_clip_threshold

setter for the threshold of the gradient clipping.

set_inferencing_mode

setter

set_weight_limit

setter

weight_limit

getter

Module contents