pydbm.approximation package

Submodules

pydbm.approximation.contrastive_divergence module

class pydbm.approximation.contrastive_divergence.ContrastiveDivergence

Bases: pydbm.approximation.interface.approximate_interface.ApproximateInterface

Contrastive Divergence.

Conceptually, the positive phase is to the negative phase what waking is to sleeping.

In relation to the RBM, Contrastive Divergence (CD) is a method for approximating the gradients of the log-likelihood (Hinton, G. E. 2002).

The procedure of this method is similar to the Markov chain Monte Carlo (MCMC) method. Unlike MCMC, however, the visible variables in the visible layer are not randomly initialized: the observed data points in the training dataset are set as the first visible variables. Then, as in Gibbs sampling, drawing samples from the hidden and visible variables is repeated k times. Empirically (and surprisingly), k = 1 is considered sufficient.
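As a concrete toy illustration of these k sampling steps, the following numpy sketch estimates CD-k gradients for a Bernoulli RBM. It is a conceptual sketch only, not pydbm's implementation; the function name and array shapes are hypothetical.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd_k_grads(v0, weights_arr, visible_bias_arr, hidden_bias_arr, k=1, rng=None):
        # CD-k gradient estimates for a Bernoulli RBM (toy sketch).
        rng = rng or np.random.default_rng(0)
        # Positive phase: hidden probabilities driven by the observed data points.
        h0 = sigmoid(v0 @ weights_arr + hidden_bias_arr)
        # Negative phase: k Gibbs steps, initialized at the data (not at random).
        v = v0
        for _ in range(k):
            h_prob = sigmoid(v @ weights_arr + hidden_bias_arr)
            h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
            v = sigmoid(h_sample @ weights_arr.T + visible_bias_arr)
        hk = sigmoid(v @ weights_arr + hidden_bias_arr)
        # Data statistics minus model statistics approximate the gradient
        # of the log-likelihood.
        grad_w = v0.T @ h0 - v.T @ hk
        grad_b_v = (v0 - v).sum(axis=0)
        grad_b_h = (h0 - hk).sum(axis=0)
        return grad_w, grad_b_v, grad_b_h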

References

  • Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural computation, 14(8), 1771-1800.
approximate_inference

Inference with function approximation.

Parameters:
  • graph – Graph of neurons.
  • learning_rate – Learning rate.
  • learning_attenuate_rate – Attenuate the learning_rate by a factor of this value every attenuate_epoch.
  • attenuate_epoch – Attenuate the learning_rate by a factor of learning_attenuate_rate every attenuate_epoch.
  • observed_data_arr – Observed data points.
  • training_count – Training counts.
  • r_batch_size – Batch size. If this value is 0, inference is performed as recursive learning. If this value is greater than 0, inference is performed as mini-batch recursive learning. If this value is -1, inference is not performed as recursive learning.
Returns:

Graph of neurons.
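The learning_rate, learning_attenuate_rate, and attenuate_epoch parameters above describe a step-decay schedule. A minimal sketch of that schedule, with placeholder values:

    learning_rate = 0.05
    learning_attenuate_rate = 0.1
    attenuate_epoch = 50

    for epoch in range(200):
        if epoch > 0 and epoch % attenuate_epoch == 0:
            # Attenuate the learning_rate by a factor of
            # learning_attenuate_rate every attenuate_epoch.
            learning_rate *= learning_attenuate_rate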

approximate_learn

Learning with function approximation.

Parameters:
  • graph – Graph of neurons.
  • learning_rate – Learning rate.
  • learning_attenuate_rate – Attenuate the learning_rate by a factor of this value every attenuate_epoch.
  • attenuate_epoch – Attenuate the learning_rate by a factor of learning_attenuate_rate every attenuate_epoch.
  • observed_data_arr – Observed data points.
  • training_count – Training counts.
  • batch_size – Batch size (0: not mini-batch)
Returns:

Graph of neurons.

get_reconstruct_error_list

getter

reconstruct_error_list

getter

set_readonly

setter

pydbm.approximation.rt_rbm_cd module

class pydbm.approximation.rt_rbm_cd.RTRBMCD

Bases: pydbm.approximation.interface.approximate_interface.ApproximateInterface

Recurrent Temporal Restricted Boltzmann Machines based on Contrastive Divergence.

Conceptually, the positive phase is to the negative phase what waking is to sleeping.

The RTRBM (Sutskever, I., et al. 2009) is a probabilistic time-series model that can be viewed as a temporal stack of RBMs, where each RBM receives a contextual hidden state from the previous RBM and uses it to modulate its hidden-unit bias. The parameters below spell out these connections; a minimal sketch of the modulation follows the list.

Parameters:
  • graph.weights_arr – $W$ (Connection between $v^{(t)}$ and $h^{(t)}$)
  • graph.visible_bias_arr – $b_v$ (Bias in visible layer)
  • graph.hidden_bias_arr – $b_h$ (Bias in hidden layer)
  • graph.rnn_hidden_weights_arr – $W'$ (Connection between $h^{(t-1)}$ and $b_h^{(t)}$)
  • graph.rnn_visible_weights_arr – $W''$ (Connection between $h^{(t-1)}$ and $b_v^{(t)}$)
  • graph.hat_hidden_activity_arr – $\hat{h}^{(t)}$ (RNN hidden units)
  • graph.pre_hidden_activity_arr – $\hat{h}^{(t-1)}$
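Read literally, $W'$ and $W''$ let the previous hidden state shift the biases of the current RBM slice. A minimal numpy sketch of that modulation; the helper name and array orientations are illustrative, not pydbm's internals:

    import numpy as np

    def modulated_biases(pre_hidden_activity_arr,
                         visible_bias_arr, hidden_bias_arr,
                         rnn_visible_weights_arr, rnn_hidden_weights_arr):
        # b_h^{(t)} = b_h + W' h^{(t-1)}: the previous hidden state
        # shifts the hidden bias of the RBM at time t.
        hidden_bias_t = hidden_bias_arr + pre_hidden_activity_arr @ rnn_hidden_weights_arr
        # b_v^{(t)} = b_v + W'' h^{(t-1)}: likewise for the visible bias.
        visible_bias_t = visible_bias_arr + pre_hidden_activity_arr @ rnn_visible_weights_arr
        return visible_bias_t, hidden_bias_t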

References

  • Boulanger-Lewandowski, N., Bengio, Y., & Vincent, P. (2012). Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv preprint arXiv:1206.6392.
  • Lyu, Q., Wu, Z., Zhu, J., & Meng, H. (2015, June). Modelling High-Dimensional Sequences with LSTM-RTRBM: Application to Polyphonic Music Generation. In IJCAI (pp. 4138-4139).
  • Lyu, Q., Wu, Z., & Zhu, J. (2015, October). Polyphonic music modelling with LSTM-RTRBM. In Proceedings of the 23rd ACM international conference on Multimedia (pp. 991-994). ACM.
  • Sutskever, I., Hinton, G. E., & Taylor, G. W. (2009). The recurrent temporal restricted Boltzmann machine. In Advances in Neural Information Processing Systems (pp. 1601-1608).
approximate_inference

Inference with function approximation.

Parameters:
  • graph – Graph of neurons.
  • learning_rate – Learning rate.
  • learning_attenuate_rate – Attenuate the learning_rate by a factor of this value every attenuate_epoch.
  • attenuate_epoch – Attenuate the learning_rate by a factor of learning_attenuate_rate every attenuate_epoch.
  • observed_data_arr – Observed data points.
  • training_count – Training counts.
  • r_batch_size – Batch size. If this value is 0, inference is performed as recursive learning. If this value is greater than 0, inference is performed as mini-batch recursive learning. If this value is -1, inference is not performed as recursive learning.
  • seq_len – The length of sequences. If None, observed_data_arr.shape[1] is used.
Returns:

Graph of neurons.

approximate_learn

Learning with function approximation.

Parameters:
  • graph – Graph of neurons.
  • learning_rate – Learning rate.
  • learning_attenuate_rate – Attenuate the learning_rate by a factor of this value every attenuate_epoch.
  • attenuate_epoch – Attenuate the learning_rate by a factor of learning_attenuate_rate every attenuate_epoch.
  • observed_data_arr – Observed data points.
  • training_count – Training counts.
  • batch_size – Batch size (0: not mini-batch)
Returns:

Graph of neurons.

back_propagation

Backpropagation through time (BPTT).

batch_size = 0
batch_step = 0
computable_loss

getter

compute_loss

Compute loss.

Parameters:
  • batch_observed_arr – np.ndarray of observed data points.
  • inferenced_arr – np.ndarray of reconstructed feature points.
Returns:

Computed loss.
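The concrete loss is delegated to the object behind computable_loss. Purely as an illustration, a mean squared reconstruction error between the two arrays could look like this; MSE is an assumption here, not necessarily the configured loss:

    import numpy as np

    def mse_loss(batch_observed_arr, inferenced_arr):
        # Mean squared error between observed data points and
        # reconstructed feature points.
        return float(np.mean((batch_observed_arr - inferenced_arr) ** 2))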

get_computable_loss

getter

get_opt_params

getter

get_reconstrct_error_list

getter

graph = None
learning_rate = 0.5
memorize_activity

Memorize activity.

Parameters:
  • observed_data_arr – Observed data points in positive phase.
  • negative_visible_activity_arr – Visible activity in the negative phase.
negative_visible_activity_arr = None
opt_params

getter

r_batch_size = 0
r_batch_step = 0
reconstruct_error_list

getter

rnn_learn

Learning for RNN.

Parameters:
  • observed_data_list – Observed data points.
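In the RTRBM (Sutskever, I., et al. 2009), the recurrent hidden state is computed deterministically from the current visible input and the previous hidden state, roughly $\hat{h}^{(t)} = \sigma(W^{\top} v^{(t)} + W' \hat{h}^{(t-1)} + b_h)$. A minimal sketch under that reading; the function name and array orientations are illustrative:

    import numpy as np

    def rnn_hidden_state(v_t, hat_h_prev, weights_arr,
                         rnn_hidden_weights_arr, hidden_bias_arr):
        # \hat{h}^{(t)} = sigmoid(v^{(t)} W + \hat{h}^{(t-1)} W' + b_h)
        pre_activation = (v_t @ weights_arr
                          + hat_h_prev @ rnn_hidden_weights_arr
                          + hidden_bias_arr)
        return 1.0 / (1.0 + np.exp(-pre_activation))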
set_computable_loss

setter

set_opt_params

setter

set_readonly

setter

wake_sleep_inference

Sleeping, waking, and inference.

Parameters:
  • observed_data_arr – Feature points.
wake_sleep_learn

Waking, sleeping, and learning.

This method stands on the premise that the settings of the activation function and the weight operations are common (a toy sketch of one waking/sleeping pass follows this entry).

Binary activity is not supported.

Parameters:
  • observed_data_list – Observed data points.
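As a toy illustration of one waking/sleeping pass over a sequence, the sketch below combines the bias modulation and a CD-1 update from the earlier sketches. It updates only $W$ and omits the bias and $W'$/$W''$ gradients; all names and values are placeholders, not pydbm's implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_v, n_h, seq_len = 6, 4, 5
    v_seq = rng.random((seq_len, n_v))            # toy observed sequence
    W = rng.normal(scale=0.1, size=(n_v, n_h))    # connection between v^{(t)} and h^{(t)}
    W_h = rng.normal(scale=0.1, size=(n_h, n_h))  # W': h^{(t-1)} -> b_h^{(t)}
    W_v = rng.normal(scale=0.1, size=(n_h, n_v))  # W'': h^{(t-1)} -> b_v^{(t)}
    b_v, b_h = np.zeros(n_v), np.zeros(n_h)
    hat_h = np.zeros(n_h)                         # \hat{h}^{(t-1)}
    learning_rate = 0.5                           # matches the class default above

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for t in range(seq_len):
        b_h_t = b_h + hat_h @ W_h                 # modulated hidden bias
        b_v_t = b_v + hat_h @ W_v                 # modulated visible bias
        v0 = v_seq[t]
        h0 = sigmoid(v0 @ W + b_h_t)              # waking (positive phase)
        v1 = sigmoid(h0 @ W.T + b_v_t)            # sleeping: reconstruction (CD-1)
        h1 = sigmoid(v1 @ W + b_h_t)
        W += learning_rate * (np.outer(v0, h0) - np.outer(v1, h1))
        hat_h = h0                                # memorize activity for step t+1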

pydbm.approximation.shape_bm_cd module

class pydbm.approximation.shape_bm_cd.ShapeBMCD

Bases: pydbm.approximation.interface.approximate_interface.ApproximateInterface

Contrastive Divergence for the Shape Boltzmann Machine (Shape-BM).

Conceptually, the positive phase is to the negative phase what waking is to sleeping.

The concept of the Shape Boltzmann Machine (Eslami, S. A., et al. 2014) inspired this library.

The use cases of Shape-BM include image segmentation, object detection, inpainting, and graphics. Shape-BM is a model for binary shape images: samples from the model look realistic, and it generalizes to generate samples that differ from the training examples.

References

  • Eslami, S. A., Heess, N., Williams, C. K., & Winn, J. (2014). The shape Boltzmann machine: A strong model of object shape. International Journal of Computer Vision, 107(2), 155-176.
approximate_inference

Inference with function approximation.

Parameters:
  • graph – Graph of neurons.
  • learning_rate – Learning rate.
  • learning_attenuate_rate – Attenuate the learning_rate by a factor of this value every attenuate_epoch.
  • attenuate_epoch – Attenuate the learning_rate by a factor of learning_attenuate_rate every attenuate_epoch.
  • observed_data_arr – Observed data points.
  • training_count – Training counts.
  • r_batch_size – Batch size. If this value is 0, inference is performed as recursive learning. If this value is greater than 0, inference is performed as mini-batch recursive learning. If this value is -1, inference is not performed as recursive learning.
Returns:

Graph of neurons.

approximate_learn

Learning with function approximation.

Parameters:
  • graph – Graph of neurons.
  • learning_rate – Learning rate.
  • learning_attenuate_rate – Attenuate the learning_rate by a factor of this value every attenuate_epoch.
  • attenuate_epoch – Attenuate the learning_rate by a factor of learning_attenuate_rate every attenuate_epoch.
  • observed_data_arr – Observed data points.
  • training_count – Training counts.
  • batch_size – Batch size (0: not mini-batch)
Returns:

Graph of neurons.

get_reconstrct_error_list

getter

reconstruct_error_list

getter

set_readonly

setter

Module contents