# pydbm.optimization package

## pydbm.optimization.batch_norm module

class pydbm.optimization.batch_norm.BatchNorm

Bases: object

Batch normalization as a regularization technique.

References

• Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
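
For orientation, here is a minimal NumPy sketch of the batch normalization transform from the reference above; the function name, eps argument, and batch-first shape convention are illustrative assumptions, not this class's API:

```python
import numpy as np

def batch_norm_sketch(observed_arr, gamma_arr, beta_arr, eps=1e-08):
    # Normalize each feature over the mini-batch dimension (axis 0).
    mean_arr = observed_arr.mean(axis=0)
    var_arr = observed_arr.var(axis=0)
    x_hat_arr = (observed_arr - mean_arr) / np.sqrt(var_arr + eps)
    # Scale and shift with the learnable parameters gamma and beta.
    return gamma_arr * x_hat_arr + beta_arr
```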
back_propagation

Back propagation.

Parameters: delta_arr – np.ndarray of delta.
Returns: np.ndarray of delta.
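
For reference, the parameter gradients that back propagation yields for batch normalization (Ioffe & Szegedy, 2015) are $\frac{\partial L}{\partial \gamma} = \sum_{i} \delta_i \hat{x}_i$ and $\frac{\partial L}{\partial \beta} = \sum_{i} \delta_i$, where $\hat{x}_i$ is the normalized input and $\delta_i$ the incoming delta; these presumably populate the delta_gamma_arr and delta_beta_arr properties below.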
beta_arr

getter

delta_beta_arr

getter

delta_gamma_arr

getter

forward_propagation

Forward propagation.

Parameters: observed_arr – np.ndarray of observed data points.
Returns:
np.ndarray of normalized data.
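
A hypothetical usage sketch tying the two passes together; the no-argument constructor and the array shapes are assumptions:

```python
import numpy as np
from pydbm.optimization.batch_norm import BatchNorm

batch_norm = BatchNorm()  # assuming default construction suffices

# Forward pass over a mini-batch of 20 samples with 10 features each.
observed_arr = np.random.normal(size=(20, 10))
normalized_arr = batch_norm.forward_propagation(observed_arr)

# Backward pass: feed in the delta arriving from the next layer.
delta_arr = np.random.normal(size=(20, 10))
delta_arr = batch_norm.back_propagation(delta_arr)
```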
gamma_arr

getter

get_beta_arr

getter

get_delta_beta_arr

getter

get_delta_gamma_arr

getter

get_gamma_arr

getter

get_test_mode

getter

set_beta_arr

setter

set_delta_beta_arr

setter

set_delta_gamma_arr

setter

set_gamma_arr

setter

set_test_mode

setter

test_mode

getter

## pydbm.optimization.opt_params module

class pydbm.optimization.opt_params.OptParams

Bases: object

Abstract class of optimization functions.

Note that this library de-emphasizes the effects of weight decay regularization and disregards its many variants, such as decoupling the weight decay from the gradient-based update (Loshchilov, I., & Hutter, F., 2017). From the perspective of architecture design, the concept of weight decay is highly variable, and it is often described in terms that obscure its difference from L2 regularization. From the perspective of algorithm design, weight constraints, or so-called max-norm regularization, are considered more effective than weight decay, and this technique can be loosely coupled with other regularization techniques such as dropout (Srivastava, N., et al., 2014). A sketch contrasting L2 regularization with decoupled weight decay follows the reference list below.

References

• Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
• Loshchilov, I., & Hutter, F. (2017). Fixing weight decay regularization in Adam. arXiv preprint arXiv:1711.05101.
• Pascanu, R., Mikolov, T., & Bengio, Y. (2012). Understanding the exploding gradient problem. CoRR, abs/1211.5063, 2.
• Pascanu, R., Mikolov, T., & Bengio, Y. (2013, February). On the difficulty of training recurrent neural networks. In International conference on machine learning (pp. 1310-1318).
• Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 1929-1958.
• Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.
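
To make the note above concrete, here is a minimal NumPy sketch contrasting an L2 penalty folded into the gradient with a decoupled weight decay step in the style of Loshchilov & Hutter (2017). It is illustrative only; none of these names belong to OptParams' actual API, and the two updates coincide for plain SGD but diverge under adaptive optimizers such as Adam.

```python
import numpy as np

learning_rate = 0.01
weight_decay_lambda = 1e-4
weight_arr = np.random.normal(size=(100, 50))
grad_arr = np.random.normal(size=(100, 50))

# L2 regularization: the penalty's gradient (lambda * W) is added to the
# loss gradient, so the optimizer rescales both together.
l2_arr = weight_arr - learning_rate * (grad_arr + weight_decay_lambda * weight_arr)

# Decoupled weight decay: the weights are shrunk directly, outside of
# whatever transformation the optimizer applies to the gradient.
decoupled_arr = (1.0 - learning_rate * weight_decay_lambda) * weight_arr
decoupled_arr = decoupled_arr - learning_rate * grad_arr
```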
compute_weight_decay

Compute penalty term of weight decay.

Parameters: weight_arr – np.ndarray of the weight matrix.
Returns: np.ndarray of the penalty term (as sketched after compute_weight_decay_delta below).
compute_weight_decay_delta

Compute delta of weight decay.

Parameters: weight_arr – np.ndarray of the weight matrix.
Returns: np.ndarray of delta.
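
Although the docstrings above do not spell out the formula, the standard convention for a weight decay penalty and its delta is $\frac{\lambda}{2}\sum_{i,j} w_{ij}^2$ and $\lambda W$ respectively; a sketch under that assumption, written as standalone functions rather than the class's methods:

```python
import numpy as np

def compute_weight_decay(weight_arr, weight_decay_lambda):
    # Penalty term added to the loss: (lambda / 2) * ||W||^2.
    return weight_decay_lambda * 0.5 * np.sum(np.square(weight_arr))

def compute_weight_decay_delta(weight_arr, weight_decay_lambda):
    # Gradient of the penalty with respect to the weights: lambda * W.
    return weight_decay_lambda * weight_arr
```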
constrain_weight

So-called max-norm regularization.

Regularization that repeatedly multiplies the weight matrix by 0.9 until $\sum_{j=0}^{n} w_{ji}^2 < \text{weight\_limit}$ holds.

Parameters: weight_arr – weight matrix.
Returns: The constrained weight matrix.
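
A minimal NumPy sketch of the loop described above; the 0.9 shrink factor comes from the description, while treating index $i$ as the columns of the matrix is an assumption:

```python
import numpy as np

def constrain_weight_sketch(weight_arr, weight_limit):
    # Repeatedly shrink the matrix by a factor of 0.9 until every
    # column's squared L2 norm falls below weight_limit.
    while np.max(np.sum(np.square(weight_arr), axis=0)) >= weight_limit:
        weight_arr = weight_arr * 0.9
    return weight_arr
```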
de_dropout

Dropout for back propagation.

Parameters: activity_arr – The state of delta.
Returns: The state of delta.
dropout

Dropout.

Parameters: activity_arr – The state of units.
Returns: The state of units after dropout.
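
A sketch of how dropout and de_dropout pair up, assuming (as is conventional, though not stated above) that the forward mask is stored and reused on the backward pass:

```python
import numpy as np

class DropoutSketch:
    # Illustrative only; not pydbm's actual implementation.

    def __init__(self, dropout_rate=0.5):
        self.dropout_rate = dropout_rate
        self.mask_arr = None

    def dropout(self, activity_arr):
        # Forward pass: zero out each unit with probability dropout_rate.
        self.mask_arr = np.random.binomial(
            n=1, p=1.0 - self.dropout_rate, size=activity_arr.shape
        )
        return activity_arr * self.mask_arr

    def de_dropout(self, delta_arr):
        # Backward pass: propagate delta only through surviving units.
        return delta_arr * self.mask_arr
```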
dropout_rate

getter

get_dropout_rate

getter

get_grad_clip_threshold

getter for the gradient clipping threshold.

get_inferencing_mode

getter

get_weight_decay_lambda

getter

get_weight_limit

getter

grad_clip_threshold

getter for the gradient clipping threshold.
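
The threshold presumably drives norm-based gradient clipping in the sense of Pascanu et al. (2012; 2013); a minimal sketch under that assumption:

```python
import numpy as np

def clip_gradient(grad_arr, grad_clip_threshold):
    # Rescale the gradient when its L2 norm exceeds the threshold.
    norm = np.linalg.norm(grad_arr)
    if norm > grad_clip_threshold:
        grad_arr = grad_arr * (grad_clip_threshold / norm)
    return grad_arr
```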

inferencing_mode

getter

optimize

Return the result of this concrete optimization function.

Parameters: params_dict – list of parameters.
grads_arr – np.ndarray of gradients.
learning_rate – Learning rate.
Returns: list of optimized parameters.
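
Since OptParams is abstract, optimize is overridden by concrete optimizers such as the Adam variant cited above. A minimal sketch of what a plain gradient-descent override could look like, reusing the parameter names from the signature above (illustrative, not the library's actual code):

```python
def optimize(self, params_dict, grads_arr, learning_rate):
    # One plain SGD step per (parameter, gradient) pair.
    return [
        params_arr - learning_rate * grad_arr
        for params_arr, grad_arr in zip(params_dict, grads_arr)
    ]
```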
set_dropout_rate

setter

set_grad_clip_threshold

setter for the gradient clipping threshold.

set_inferencing_mode

setter

set_weight_decay_lambda

setter

set_weight_limit

setter

weight_decay_lambda

getter

weight_limit

getter