pydbm.optimization.optparams package

Submodules

pydbm.optimization.optparams.ada_grad module

class pydbm.optimization.optparams.ada_grad.AdaGrad

Bases: pydbm.optimization.opt_params.OptParams

Optimizer based on adaptive subgradient methods (AdaGrad).

References

  • Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul), 2121-2159.
  • Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
optimize

Return the result of this optimization function.

Override.

Parameters:
  • params_dict – List of parameters.
  • grads_list – List of gradients.
  • learning_rate – Learning rate.
Returns:

List of optimized parameters.
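
To make the update concrete, the following is a minimal NumPy sketch of the AdaGrad rule from Duchi et al. (2011), mirroring the inputs described above (a list of parameter arrays, a list of gradient arrays, and a learning rate). It illustrates the algorithm only and is not pydbm's implementation; the accumulator g2_list, the helper name adagrad_step, and the eps constant are assumptions introduced for this sketch.

    import numpy as np

    def adagrad_step(params_list, grads_list, g2_list, learning_rate, eps=1e-08):
        # Accumulate the squared gradients and scale each step by the
        # inverse square root of that running sum.
        for i, (param, grad) in enumerate(zip(params_list, grads_list)):
            g2_list[i] = g2_list[i] + grad ** 2
            params_list[i] = param - learning_rate * grad / (np.sqrt(g2_list[i]) + eps)
        return params_list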

pydbm.optimization.optparams.adam module

class pydbm.optimization.optparams.adam.Adam

Bases: pydbm.optimization.opt_params.OptParams

Adaptive Moment Estimation (Adam).

References

  • Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
optimize

Return the result of this optimization function.

Override.

Parameters:
  • params_dict – List of parameters.
  • grads_list – List of gradients.
  • learning_rate – Learning rate.
Returns:

List of optimized parameters.
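
As a point of reference, here is a minimal NumPy sketch of the Adam update from Kingma & Ba (2014): exponentially decaying first and second moment estimates with bias correction. It illustrates the algorithm only and is not pydbm's implementation; the state lists m_list and v_list, the step counter t, and the default hyperparameters are assumptions for this sketch.

    import numpy as np

    def adam_step(params_list, grads_list, m_list, v_list, t, learning_rate,
                  beta_1=0.9, beta_2=0.999, eps=1e-08):
        # t is the 1-based iteration count used for bias correction.
        for i, (param, grad) in enumerate(zip(params_list, grads_list)):
            m_list[i] = beta_1 * m_list[i] + (1 - beta_1) * grad
            v_list[i] = beta_2 * v_list[i] + (1 - beta_2) * grad ** 2
            m_hat = m_list[i] / (1 - beta_1 ** t)  # bias-corrected first moment
            v_hat = v_list[i] / (1 - beta_2 ** t)  # bias-corrected second moment
            params_list[i] = param - learning_rate * m_hat / (np.sqrt(v_hat) + eps)
        return params_list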

pydbm.optimization.optparams.nadam module

class pydbm.optimization.optparams.nadam.Nadam

Bases: pydbm.optimization.opt_params.OptParams

Nesterov-accelerated Adaptive Moment Estimation (Nadam).

References

  • Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Dozat, T. (2016). Incorporating Nesterov momentum into Adam. Workshop track, ICLR 2016.
optimize

Return the result of this optimization function.

Override.

Parameters:
  • params_dict – List of parameters.
  • grads_list – List of gradients.
  • learning_rate – Learning rate.
Returns:

List of optimized parameters.
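
The sketch below shows how Nadam differs from Adam: the bias-corrected first moment is replaced by a Nesterov-style lookahead term (Dozat, 2016). It is a simplified illustration that omits the momentum schedule of the original paper and is not pydbm's implementation; all names and defaults are assumptions for this sketch.

    import numpy as np

    def nadam_step(params_list, grads_list, m_list, v_list, t, learning_rate,
                   beta_1=0.9, beta_2=0.999, eps=1e-08):
        for i, (param, grad) in enumerate(zip(params_list, grads_list)):
            m_list[i] = beta_1 * m_list[i] + (1 - beta_1) * grad
            v_list[i] = beta_2 * v_list[i] + (1 - beta_2) * grad ** 2
            m_hat = m_list[i] / (1 - beta_1 ** t)
            v_hat = v_list[i] / (1 - beta_2 ** t)
            # Nesterov lookahead: mix the corrected moment with the current gradient.
            m_bar = beta_1 * m_hat + (1 - beta_1) * grad / (1 - beta_1 ** t)
            params_list[i] = param - learning_rate * m_bar / (np.sqrt(v_hat) + eps)
        return params_list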

pydbm.optimization.optparams.nag module

class pydbm.optimization.optparams.nag.NAG

Bases: pydbm.optimization.opt_params.OptParams

Optimizer based on Nesterov's Accelerated Gradient (NAG).

References

  • Bengio, Y., Boulanger-Lewandowski, N., & Pascanu, R. (2013, May). Advances in optimizing recurrent networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 8624-8628). IEEE.
optimize

Return the result of this optimization function.

Override.

Parameters:
  • params_dict – List of parameters.
  • grads_list – List of gradients.
  • learning_rate – Learning rate.
Returns:

List of optimized parameters.
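
For orientation, the following sketch uses the classical-momentum reformulation of Nesterov's Accelerated Gradient discussed by Bengio et al. (2013), in which the velocity is updated first and the parameter step includes a lookahead correction. It illustrates one common formulation only and is not pydbm's implementation; velocity_list and the momentum default are assumptions for this sketch (parameters and gradients are assumed to be NumPy arrays).

    def nag_step(params_list, grads_list, velocity_list, learning_rate, momentum=0.9):
        for i, (param, grad) in enumerate(zip(params_list, grads_list)):
            v_prev = velocity_list[i]
            velocity_list[i] = momentum * v_prev - learning_rate * grad
            # Momentum-corrected step: -momentum * v_prev + (1 + momentum) * v_new.
            params_list[i] = param - momentum * v_prev + (1 + momentum) * velocity_list[i]
        return params_list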

pydbm.optimization.optparams.rms_prop module

class pydbm.optimization.optparams.rms_prop.RMSProp

Bases: pydbm.optimization.opt_params.OptParams

Adaptive Root Mean Square (RMSProp) gradient descent algorithm.

References

  • Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
optimize

Return the result of this optimization function.

Override.

Parameters:
  • params_dict – List of parameters.
  • grads_list – List of gradients.
  • learning_rate – Learning rate.
Returns:

List of optimized parameters.
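
To ground the description, here is a minimal NumPy sketch of the RMSProp rule: an exponential moving average of squared gradients rescales each step. It illustrates the algorithm only and is not pydbm's implementation; v_list, decay_rate, and eps are assumptions for this sketch.

    import numpy as np

    def rmsprop_step(params_list, grads_list, v_list, learning_rate,
                     decay_rate=0.99, eps=1e-08):
        for i, (param, grad) in enumerate(zip(params_list, grads_list)):
            # Exponential moving average of squared gradients.
            v_list[i] = decay_rate * v_list[i] + (1 - decay_rate) * grad ** 2
            params_list[i] = param - learning_rate * grad / (np.sqrt(v_list[i]) + eps)
        return params_list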

pydbm.optimization.optparams.sgd module

class pydbm.optimization.optparams.sgd.SGD

Bases: pydbm.optimization.opt_params.OptParams

Stochastic Gradient Descent.

References

  • Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
optimize

Return the result of this optimization function.

Override.

Parameters:
  • params_dict – List of parameters.
  • grads_list – List of gradients.
  • learning_rate – Learning rate.
Returns:

List of optimized parameters.
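
For completeness, a minimal sketch of the plain stochastic gradient descent step, with an optional momentum term shown as a common variant. It illustrates the update only and is not pydbm's implementation; the momentum argument and velocity_list are assumptions for this sketch (parameters and gradients are assumed to be NumPy arrays).

    def sgd_step(params_list, grads_list, learning_rate, momentum=0.0, velocity_list=None):
        for i, (param, grad) in enumerate(zip(params_list, grads_list)):
            if velocity_list is not None and momentum > 0.0:
                # Common momentum variant: accumulate an exponentially decaying velocity.
                velocity_list[i] = momentum * velocity_list[i] - learning_rate * grad
                params_list[i] = param + velocity_list[i]
            else:
                # Vanilla SGD: step directly against the gradient.
                params_list[i] = param - learning_rate * grad
        return params_list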

Module contents