pydbm.approximation.rtrbmcd package¶
Submodules¶
pydbm.approximation.rtrbmcd.lstm_rt_rbm_cd module¶

class pydbm.approximation.rtrbmcd.lstm_rt_rbm_cd.LSTMRTRBMCD¶
Bases: pydbm.approximation.rt_rbm_cd.RTRBMCD
LSTM-RTRBM based on Contrastive Divergence.
Conceptually, the positive phase is to the negative phase what waking is to sleeping.
The LSTM-RTRBM model integrates the ability of the LSTM to memorize and retrieve useful history information with the advantage of the RBM in modelling high-dimensional data (Lyu, Q., Wu, Z., Zhu, J., & Meng, H., 2015, June). Like the RTRBM, the LSTM-RTRBM also has recurrent hidden units.
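The waking/sleeping analogy above can be made concrete as a single CD-1 update for a plain Bernoulli RBM. This is an illustrative NumPy sketch with hypothetical names (`cd1_step`, `sigmoid`), not pydbm's internal implementation, which additionally conditions the update on the recurrent state:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_v, b_h, lr=0.05):
    """One CD-1 update for a plain Bernoulli RBM (illustrative only)."""
    # Positive phase ("waking"): hidden activations driven by the data v0.
    h0 = sigmoid(v0 @ W + b_h)
    # One Gibbs step: sample hidden states, then reconstruct the visibles.
    h0_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h0_sample @ W.T + b_v)
    # Negative phase ("sleeping"): hidden activations driven by the reconstruction.
    h1 = sigmoid(v1 @ W + b_h)
    # Gradient estimate: positive-phase statistics minus negative-phase statistics.
    W += lr * (v0.T @ h0 - v1.T @ h1) / v0.shape[0]
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (h0 - h1).mean(axis=0)
    return W, b_v, b_h, v1
```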
Parameters:  graph.weights_arr – $W$ (Connection between v^{(t)} and h^{(t)})
 graph.visible_bias_arr – $b_v$ (Bias in visible layer)
 graph.hidden_bias_arr – $b_h$ (Bias in hidden layer)
 graph.rnn_hidden_weights_arr – $W'$ (Connection between h^{(t-1)} and b_h^{(t)})
 graph.rbm_hidden_weights_arr – $W_{R}$ (Connection between h^{(t-1)} and h^{(t)})
 graph.hat_hidden_activity_arr – $\hat{h}^{(t)}$ (RNN hidden units)
 graph.hidden_activity_arr_list – $\hat{h}^{(t)} (t = 0, 1, …)$
 graph.v_hat_weights_arr – $W_2$ (Connection between v^{(t)} and \hat{h}^{(t)})
 graph.hat_weights_arr – $W_3$ (Connection between \hat{h}^{(t-1)} and \hat{h}^{(t)})
 graph.rnn_hidden_bias_arr – $b_{\hat{h}}$ (Bias of RNN hidden layer)
$$\hat{h}^{(t)} = \sigma(W_2 v^{(t)} + W_3 \hat{h}^{(t-1)} + b_{\hat{h}})$$
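The recurrence above can be evaluated directly. A minimal NumPy sketch, assuming `W2`, `W3`, and `b_hat_h` play the roles of graph.v_hat_weights_arr, graph.hat_weights_arr, and graph.rnn_hidden_bias_arr; the function name is hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_hidden_step(v_t, hat_h_prev, W2, W3, b_hat_h):
    """hat{h}^{(t)} = sigmoid(W2 v^{(t)} + W3 hat{h}^{(t-1)} + b_{hat{h}})."""
    return sigmoid(W2 @ v_t + W3 @ hat_h_prev + b_hat_h)
```

With all-zero inputs and biases the update reduces to sigmoid(0) = 0.5 for every hidden unit, which is a quick sanity check on the wiring.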
References
 Boulanger-Lewandowski, N., Bengio, Y., & Vincent, P. (2012). Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv preprint arXiv:1206.6392.
 Lyu, Q., Wu, Z., Zhu, J., & Meng, H. (2015, June). Modelling High-Dimensional Sequences with LSTM-RTRBM: Application to Polyphonic Music Generation. In IJCAI (pp. 4138-4139).
 Lyu, Q., Wu, Z., & Zhu, J. (2015, October). Polyphonic music modelling with LSTM-RTRBM. In Proceedings of the 23rd ACM International Conference on Multimedia (pp. 991-994). ACM.
 Sutskever, I., Hinton, G. E., & Taylor, G. W. (2009). The recurrent temporal restricted Boltzmann machine. In Advances in Neural Information Processing Systems (pp. 1601-1608).

back_propagation
¶ Details of the backpropagation through time algorithm.
Override.

compute_loss
¶ Compute loss.
Parameters:  batch_observed_arr – np.ndarray of observed data points.
 inferenced_arr – np.ndarray of reconstructed feature points.
Returns: loss.
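A minimal sketch of such a loss, assuming mean squared reconstruction error between the observed data points and the reconstructed feature points (an assumption for illustration; the library may use a different criterion, and this standalone function is hypothetical):

```python
import numpy as np

def compute_loss(batch_observed_arr, inferenced_arr):
    # Assumed criterion: mean squared error between observed data points
    # and reconstructed feature points.
    return float(np.mean((batch_observed_arr - inferenced_arr) ** 2))
```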

memorize_activity
¶ Memorize activity.
Override.
Parameters:  observed_data_arr – Observed data points in positive phase.
 negative_visible_activity_arr – Visible activity in negative phase.

rnn_learn
¶ Learning for RNN.
Parameters: observed_data_list – observed data points.
pydbm.approximation.rtrbmcd.rnn_rbm_cd module¶

class pydbm.approximation.rtrbmcd.rnn_rbm_cd.RNNRBMCD¶
Bases: pydbm.approximation.rt_rbm_cd.RTRBMCD
Recurrent Neural Network Restricted Boltzmann Machine (RNN-RBM) based on Contrastive Divergence.
Conceptually, the positive phase is to the negative phase what waking is to sleeping.
The RTRBM can be understood as a sequence of conditional RBMs whose parameters are the output of a deterministic RNN, with the constraint that the hidden units must describe the conditional distributions and convey temporal information. This constraint can be lifted by combining a full RNN with distinct hidden units.
The RNN-RBM (Boulanger-Lewandowski, N., et al., 2012), which is a more structural expansion of the RTRBM, also has distinct hidden units.
Parameters:  graph.weights_arr – $W$ (Connection between v^{(t)} and h^{(t)})
 graph.visible_bias_arr – $b_v$ (Bias in visible layer)
 graph.hidden_bias_arr – $b_h$ (Bias in hidden layer)
 graph.rnn_hidden_weights_arr – $W'$ (Connection between h^{(t-1)} and b_h^{(t)})
 graph.rnn_visible_weights_arr – $W''$ (Connection between h^{(t-1)} and b_v^{(t)})
 graph.hat_hidden_activity_arr – $\hat{h}^{(t)}$ (RNN hidden units)
 graph.pre_hidden_activity_arr_list – $\hat{h}^{(t)} (t = 0, 1, …)$
 graph.v_hat_weights_arr – $W_2$ (Connection between v^{(t)} and \hat{h}^{(t)})
 graph.hat_weights_arr – $W_3$ (Connection between \hat{h}^{(t-1)} and \hat{h}^{(t)})
 graph.rnn_hidden_bias_arr – $b_{\hat{h}}$ (Bias of RNN hidden layer)
$$\hat{h}^{(t)} = \sigma(W_2 v^{(t)} + W_3 \hat{h}^{(t-1)} + b_{\hat{h}})$$
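Per the parameter descriptions above, the RNN state of the previous step conditions the RBM biases of the current step. A minimal NumPy sketch, assuming `W_prime` and `W_double_prime` play the roles of graph.rnn_hidden_weights_arr ($W'$) and graph.rnn_visible_weights_arr ($W''$); the function name is hypothetical:

```python
import numpy as np

def dynamic_biases(hat_h_prev, b_v, b_h, W_prime, W_double_prime):
    """Time-dependent RBM biases conditioned on the RNN state hat{h}^{(t-1)}.

    b_h^{(t)} = b_h + W'  hat{h}^{(t-1)}
    b_v^{(t)} = b_v + W'' hat{h}^{(t-1)}
    """
    b_h_t = b_h + W_prime @ hat_h_prev
    b_v_t = b_v + W_double_prime @ hat_h_prev
    return b_v_t, b_h_t
```

When the previous RNN state is all zeros, the conditional RBM at time t simply falls back to the static biases $b_v$ and $b_h$.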
References
 Boulanger-Lewandowski, N., Bengio, Y., & Vincent, P. (2012). Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv preprint arXiv:1206.6392.
 Lyu, Q., Wu, Z., Zhu, J., & Meng, H. (2015, June). Modelling High-Dimensional Sequences with LSTM-RTRBM: Application to Polyphonic Music Generation. In IJCAI (pp. 4138-4139).
 Lyu, Q., Wu, Z., & Zhu, J. (2015, October). Polyphonic music modelling with LSTM-RTRBM. In Proceedings of the 23rd ACM International Conference on Multimedia (pp. 991-994). ACM.
 Sutskever, I., Hinton, G. E., & Taylor, G. W. (2009). The recurrent temporal restricted Boltzmann machine. In Advances in Neural Information Processing Systems (pp. 1601-1608).

back_propagation
¶ Details of the backpropagation through time algorithm.
Override.

compute_loss
¶ Compute loss.
Parameters:  batch_observed_arr – np.ndarray of observed data points.
 inferenced_arr – np.ndarray of reconstructed feature points.
Returns: loss.

memorize_activity
¶ Memorize activity.
Override.
Parameters:  observed_data_arr – Observed data points in positive phase.
 negative_visible_activity_arr – Visible activity in negative phase.

rnn_learn
¶ Learning for RNN.
Parameters: observed_data_list – observed data points.