accelbrainbase.observabledata._mxnet package

Submodules

accelbrainbase.observabledata._mxnet.adversarial_model module

class accelbrainbase.observabledata._mxnet.adversarial_model.AdversarialModel

Bases: accelbrainbase.observable_data.ObservableData

The abstract class for building Generative Adversarial Networks (GANs).

The Generative Adversarial Networks (GANs) framework (Goodfellow et al., 2014) establishes a min-max adversarial game between two neural networks: a generative model, G, and a discriminative model, D. The discriminator, D(x), is a neural network that computes the probability that an observed data point x in data space is a sample from the data distribution that we are trying to model (a positive sample), rather than a sample from our generative model (a negative sample). Concurrently, the generator uses a function G(z) that maps samples z from the prior p(z) to the data space. G(z) is trained to maximally confuse the discriminator into believing that the samples it generates come from the data distribution. The generator is trained by leveraging the gradient of D(x) w.r.t. x to modify its parameters.
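
This min-max game can be summarized by the following value function (Goodfellow et al., 2014):

    \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))]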

References

  • Fang, W., Zhang, F., Sheng, V. S., & Ding, Y. (2018). A method for improving CNN-based image recognition using DCGAN. Comput. Mater. Contin, 57, 167-178.
  • Gauthier, J. (2014). Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester, 2014(5), 2.
  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., … & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672-2680).
  • Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3431-3440).
  • Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., & Frey, B. (2015). Adversarial autoencoders. arXiv preprint arXiv:1511.05644.
  • Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
  • Mogren, O. (2016). C-RNN-GAN: Continuous recurrent neural networks with adversarial training. arXiv preprint arXiv:1611.09904.
  • Rifai, S., Vincent, P., Muller, X., Glorot, X., & Bengio, Y. (2011, June). Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on International Conference on Machine Learning (pp. 833-840). Omnipress.
  • Rifai, S., Mesnil, G., Vincent, P., Muller, X., Bengio, Y., Dauphin, Y., & Glorot, X. (2011, September). Higher order contractive auto-encoder. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 645-660). Springer, Berlin, Heidelberg.
  • Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X. (2016). Improved techniques for training GANs. In Advances in neural information processing systems (pp. 2234-2242).
  • Warde-Farley, D., & Bengio, Y. (2016). Improving generative adversarial networks with denoising feature matching.
  • Yang, L. C., Chou, S. Y., & Yang, Y. H. (2017). MidiNet: A convolutional generative adversarial network for symbolic-domain music generation. arXiv preprint arXiv:1703.10847.
  • Zhao, J., Mathieu, M., & LeCun, Y. (2016). Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126.
get_init_deferred_flag()

getter for the bool flag that indicates whether initialization in this class will be deferred.

get_initializer()

getter for mxnet.initializer.

inference(observed_arr)

Infer samples drawn by IteratableData.generate_inferenced_samples().

Parameters: observed_arr – rank-2 array-like or sparse matrix of observed data points, whose shape is (batch size, feature points).
Returns: mxnet.ndarray of inferred feature points.
init_deferred_flag

getter for the bool flag that indicates whether initialization in this class will be deferred.

initializer

getter for mxnet.initializer.

learn(iteratable_data)

Learn the observed data points to build vector representations of the input images.

Parameters: iteratable_data – is-a IteratableData.
model

is-a mxnet.gluon.block.HybridBlock.

set_init_deferred_flag(value)

setter for the bool flag that indicates whether initialization in this class will be deferred.

set_initializer(value)

setter for mxnet.initializer.

accelbrainbase.observabledata._mxnet.convolutional_neural_networks module

class accelbrainbase.observabledata._mxnet.convolutional_neural_networks.ConvolutionalNeuralNetworks(computable_loss, initializer=None, learning_rate=1e-05, learning_attenuate_rate=1.0, attenuate_epoch=50, hidden_units_list=[Conv2D(None -> 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)), Conv2D(None -> 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))], input_nn=None, input_result_height=None, input_result_width=None, input_result_channel=None, output_nn=None, hidden_dropout_rate_list=[0.5, 0.5], hidden_batch_norm_list=[None, None], optimizer_name='SGD', hidden_activation_list=['relu', 'relu'], hidden_residual_flag=False, hidden_dense_flag=False, dense_axis=1, ctx=gpu(0), hybridize_flag=True, regularizatable_data_list=[], scale=1.0, not_init_flag=False, **kwargs)

Bases: mxnet.gluon.block.HybridBlock, accelbrainbase.observable_data.ObservableData

Convolutional Neural Networks.

References

  • Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
  • Kamyshanska, H., & Memisevic, R. (2014). The potential energy of an autoencoder. IEEE transactions on pattern analysis and machine intelligence, 37(6), 1261-1273.
  • Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011, June). Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks (pp. 52-59). Springer, Berlin, Heidelberg.
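
The following is a minimal construction sketch based on the signature above. L2NormLoss and its import path are assumptions about a ComputableLoss implementation shipped in a sibling module of this library; substitute whichever ComputableLoss and IteratableData implementations your installation provides.

    import mxnet as mx
    from mxnet.gluon.nn import Conv2D
    from accelbrainbase.observabledata._mxnet.convolutional_neural_networks import ConvolutionalNeuralNetworks
    # `L2NormLoss` is an assumed `ComputableLoss` implementation; the path is an assumption.
    from accelbrainbase.computableloss._mxnet.l2_norm_loss import L2NormLoss

    cnn = ConvolutionalNeuralNetworks(
        computable_loss=L2NormLoss(),
        # Two 3x3 convolution layers, mirroring the documented defaults.
        hidden_units_list=[
            Conv2D(channels=3, kernel_size=3, strides=(1, 1), padding=(1, 1)),
            Conv2D(channels=3, kernel_size=3, strides=(1, 1), padding=(1, 1)),
        ],
        hidden_dropout_rate_list=[0.5, 0.5],
        hidden_batch_norm_list=[None, None],
        hidden_activation_list=["relu", "relu"],
        ctx=mx.cpu(),  # use mx.gpu(0) when a GPU is available
    )

    # `iteratable_data` must be an `IteratableData` implementation over rank-4
    # image arrays of shape (batch size, channel, height, width); its
    # construction is omitted here.
    # cnn.learn(iteratable_data)
    # pred_arr = cnn.inference(observed_arr)
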
acc_arr

getter for list of accuracies.

batch_size

getter for batch size.

collect_params(select=None)

Overridden collect_params of mxnet.gluon.HybridBlock.

compute_acc(prob_arr, batch_target_arr)

Compute accuracy.

Parameters:
  • prob_arr – Softmax probabilities.
  • batch_target_arr – t-hot vectors.
Returns:

Tuple data:
  • accuracy,
  • inferred label,
  • real label.

compute_loss(pred_arr, labeled_arr)

Compute loss.

Parameters:
  • pred_arr – mxnet.ndarray or mxnet.symbol.
  • labeled_arr – mxnet.ndarray or mxnet.symbol.
Returns:

loss.

extract_learned_dict()

Extract (pre-) learned parameters.

Returns: dict of the parameters.
forward_propagation(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

get_acc_list()

getter for list of accuracies.

get_batch_size()

getter for batch size.

get_init_deferred_flag()

getter for the bool flag that indicates whether initialization in this class will be deferred.

get_initializer()

getter for mxnet.initializer.

get_input_nn()

getter for mxnet.gluon.block.HybridBlock in the input layer.

get_loss_arr()

getter for list of losses.

get_output_nn()

getter for mxnet.gluon.block.HybridBlock in the output layer.

hybrid_forward(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

inference(observed_arr)

Infer the labels.

Parameters: observed_arr – rank-4 array-like or sparse matrix of observed data points, whose shape is (batch size, channel, height, width).
Returns: mxnet.ndarray of inferred feature points.
init_deferred_flag

getter for the bool flag that indicates whether initialization in this class will be deferred.

initializer

getter for mxnet.initializer.

input_nn

getter for mxnet.gluon.block.HybridBlock in the input layer.

learn(iteratable_data)

Learn the observed data points to build vector representations of the input images.

Parameters: iteratable_data – is-a IteratableData.
loss_arr

getter for list of losses.

output_nn

getter for mxnet.gluon.block.HybridBlock in the output layer.

regularize()

Regularization.

set_batch_size(value)

setter for batch size.

set_init_deferred_flag(value)

setter for the bool flag that indicates whether initialization in this class will be deferred.

set_initializer(value)

setter for mxnet.initializer.

set_input_nn(value)

setter for mxnet.gluon.block.HybridBlock in the input layer.

set_output_nn(value)

setter for mxnet.gluon.block.HybridBlock in the output layer.

set_readonly(value)

setter

accelbrainbase.observabledata._mxnet.function_approximator module

class accelbrainbase.observabledata._mxnet.function_approximator.FunctionApproximator

Bases: accelbrainbase.observable_data.ObservableData

The function approximator for Deep Q-Learning.

Convolutional neural networks (CNNs) are hierarchical models whose convolutional layers alternate with subsampling layers, reminiscent of the simple and complex cells in the primary visual cortex.

Mainly, this class demonstrates that a CNN can solve generalization problems by learning successful control policies from observed data points in complex Reinforcement Learning environments. The network is trained with a variant of the Q-learning algorithm, using stochastic gradient descent to update the weights.

However, the function approximator need not be a CNN. This interface is provided so that various models, not limited to CNNs, can be implemented as function approximators.
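
Concretely, the approximated action-value function Q(s, a; θ) is fitted by stochastic gradient descent on the squared temporal-difference error (Mnih et al., 2013):

    L(\theta) = \mathbb{E}\left[\left(r + \gamma \max_{a'} Q(s', a'; \theta) - Q(s, a; \theta)\right)^{2}\right]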

References

  • Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
  • Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011, June). Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks (pp. 52-59). Springer, Berlin, Heidelberg.
  • Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
get_init_deferred_flag()

getter for the bool flag that indicates whether initialization in this class will be deferred.

get_initializer()

getter for mxnet.initializer.

inference(observed_arr)

Infer samples drawn by IteratableData.generate_inferenced_samples().

Parameters: observed_arr – rank-2 array-like or sparse matrix of observed data points, whose shape is (batch size, feature points).
Returns: mxnet.ndarray of inferred feature points.
init_deferred_flag

getter for the bool flag that indicates whether initialization in this class will be deferred.

initializer

getter for mxnet.initializer.

learn(iteratable_data)

Learn the observed data points to build vector representations of the input images.

Parameters: iteratable_data – is-a IteratableData.
model

is-a mxnet.gluon.block.HybridBlock.

set_init_deferred_flag(value)

setter for the bool flag that indicates whether initialization in this class will be deferred.

set_initializer(value)

setter for mxnet.initializer.

accelbrainbase.observabledata._mxnet.lstm_networks module

class accelbrainbase.observabledata._mxnet.lstm_networks.LSTMNetworks(batch_size, seq_len, computable_loss, initializer=None, learning_rate=1e-05, learning_attenuate_rate=0.1, attenuate_epoch=50, hidden_n=200, output_n=1, dropout_rate=0.5, optimizer_name='SGD', input_adjusted_flag=True, observed_activation='tanh', input_gate_activation='sigmoid', forget_gate_activation='sigmoid', output_gate_activation='sigmoid', hidden_activation='tanh', output_activation='tanh', output_layer_flag=True, output_no_bias_flag=False, output_nn=None, ctx=gpu(0), hybridize_flag=True, regularizatable_data_list=[], scale=1.0, **kwargs)

Bases: mxnet.gluon.block.HybridBlock, accelbrainbase.observable_data.ObservableData

Long short-term memory (LSTM) networks.

Long Short-Term Memory (LSTM) networks, a special RNN structure, have proven stable and powerful for modeling long-range dependencies.

The key point of this structural expansion is the memory cell, which essentially acts as an accumulator of state information. Every time observed data points are given as new information and input to the LSTM's input gate, their information is accumulated in the cell if the input gate is activated. The past state of the cell can be forgotten in this process if the forget gate is on. Whether the latest cell output is propagated to the final state is further controlled by the output gate, as formalized below.
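
In the standard formulation, this gating corresponds to the following updates, where \sigma is the sigmoid function (the default activation of the three gates in this class), i_t, f_t, and o_t are the input, forget, and output gates, c_t is the memory cell, and h_t is the hidden state:

    i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)
    f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)
    o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)
    c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)
    h_t = o_t \odot \tanh(c_t)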

References

  • Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
  • Malhotra, P., Ramakrishnan, A., Anand, G., Vig, L., Agarwal, P., & Shroff, G. (2016). LSTM-based encoder-decoder for multi-sensor anomaly detection. arXiv preprint arXiv:1607.00148.
  • Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.
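
A minimal construction sketch follows, using the documented defaults. L2NormLoss and its import path are again assumptions about a ComputableLoss implementation in this library:

    import mxnet as mx
    from accelbrainbase.observabledata._mxnet.lstm_networks import LSTMNetworks
    # `L2NormLoss` is an assumed `ComputableLoss` implementation; the path is an assumption.
    from accelbrainbase.computableloss._mxnet.l2_norm_loss import L2NormLoss

    lstm = LSTMNetworks(
        batch_size=20,
        seq_len=10,                 # length of each observed time-series window
        computable_loss=L2NormLoss(),
        hidden_n=200,
        output_n=1,
        dropout_rate=0.5,
        ctx=mx.cpu(),               # use mx.gpu(0) when a GPU is available
    )

    # `iteratable_data` must be an `IteratableData` over rank-3 arrays of shape
    # (batch size, sequence length, dimension); its construction is omitted here.
    # lstm.learn(iteratable_data)
    # pred_arr = lstm.inference(observed_arr)
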
compute_loss(pred_arr, labeled_arr)

Compute loss.

Parameters:
  • pred_arr – mxnet.ndarray or mxnet.symbol.
  • labeled_arr – mxnet.ndarray or mxnet.symbol.
Returns:

loss.

extract_feature_points()

Extract the activities in the hidden layer and reset them, on the assumption that this method is called once per cycle of the time-series.

Returns: mxnet.ndarray of feature points or virtual visible observed data points.
extract_learned_dict()

Extract (pre-) learned parameters.

Returns: dict of the parameters.
forward_propagation(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

get_init_deferred_flag()

getter for the bool flag that indicates whether initialization in this class will be deferred.

get_initializer()

getter for mxnet.initializer.

get_loss_arr()

getter for losses.

hidden_forward_propagate(F, observed_arr)

Forward propagation through the LSTM gates.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • observed_arr – rank-3 tensor of observed data points.
Returns:

Predicted data points.

hybrid_forward(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

inference(observed_arr)

Infer the feature points to reconstruct the time-series.

Parameters: observed_arr – rank-3 array-like or sparse matrix of observed data points.
Returns: mxnet.ndarray of inferred feature points.
init_deferred_flag

getter for the bool flag that indicates whether initialization in this class will be deferred.

initializer

getter for mxnet.initializer.

learn(iteratable_data)

Learn the observed data points to build vector representations of the input time-series.

Parameters: iteratable_data – is-a IteratableData.
loss_arr

getter for losses.

output_forward_propagate(F, pred_arr)

Forward propagation in output layer.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • pred_arr – rank-3 tensor of predicted data points.
Returns:

rank-3 tensor of propagated data points.

regularize()

Regularization.

set_init_deferred_flag(value)

setter for the bool flag that indicates whether initialization in this class will be deferred.

set_initializer(value)

setter for mxnet.initializer.

set_readonly(value)

setter for losses.

accelbrainbase.observabledata._mxnet.neural_networks module

class accelbrainbase.observabledata._mxnet.neural_networks.NeuralNetworks(computable_loss, initializer=None, learning_rate=1e-05, learning_attenuate_rate=1.0, attenuate_epoch=50, units_list=[100, 1], dropout_rate_list=[0.0, 0.5], optimizer_name='SGD', activation_list=['tanh', 'sigmoid'], hidden_batch_norm_list=[None, None], ctx=gpu(0), hybridize_flag=True, regularizatable_data_list=[], scale=1.0, output_no_bias_flag=False, all_no_bias_flag=False, not_init_flag=False, **kwargs)

Bases: mxnet.gluon.block.HybridBlock, accelbrainbase.observable_data.ObservableData

Neural Networks.

References

  • Kamyshanska, H., & Memisevic, R. (2014). The potential energy of an autoencoder. IEEE transactions on pattern analysis and machine intelligence, 37(6), 1261-1273.
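
A minimal construction sketch, under the same assumption about the loss class and its import path:

    import mxnet as mx
    from accelbrainbase.observabledata._mxnet.neural_networks import NeuralNetworks
    # `L2NormLoss` is an assumed `ComputableLoss` implementation; the path is an assumption.
    from accelbrainbase.computableloss._mxnet.l2_norm_loss import L2NormLoss

    nn = NeuralNetworks(
        computable_loss=L2NormLoss(),
        units_list=[100, 1],                  # 100 hidden units, 1 output unit
        dropout_rate_list=[0.0, 0.5],
        activation_list=["tanh", "sigmoid"],
        hidden_batch_norm_list=[None, None],
        ctx=mx.cpu(),                         # use mx.gpu(0) when a GPU is available
    )

    # pred_arr = nn.inference(observed_arr)   # rank-2: (batch size, feature points)
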
compute_loss(pred_arr, labeled_arr)

Compute loss.

Parameters:
  • pred_arr – mxnet.ndarray or mxnet.symbol.
  • labeled_arr – mxnet.ndarray or mxnet.symbol.
Returns:

loss.

extract_learned_dict()

Extract (pre-) learned parameters.

Returns: dict of the parameters.
forward_propagation(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

get_init_deferred_flag()

getter for the bool flag that indicates whether initialization in this class will be deferred.

get_initializer()

getter for mxnet.initializer.

get_loss_arr()

getter for losses.

get_units_list()

getter for list of units in each layer.

hybrid_forward(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

inference(observed_arr)

Infer samples drawn by IteratableData.generate_inferenced_samples().

Parameters: observed_arr – rank-2 array-like or sparse matrix of observed data points, whose shape is (batch size, feature points).
Returns: mxnet.ndarray of inferred feature points.
init_deferred_flag

getter for the bool flag that indicates whether initialization in this class will be deferred.

initializer

getter for mxnet.initializer.

learn(iteratable_data)

Learn samples drawn by IteratableData.generate_learned_samples().

Parameters: iteratable_data – is-a IteratableData.
loss_arr

getter for losses.

regularize()

Regularization.

set_init_deferred_flag(value)

setter for the bool flag that indicates whether initialization in this class will be deferred.

set_initializer(value)

setter for mxnet.initializer.

set_readonly(value)

setter

units_list

getter for list of units in each layer.

accelbrainbase.observabledata._mxnet.restricted_boltzmann_machines module

class accelbrainbase.observabledata._mxnet.restricted_boltzmann_machines.RestrictedBoltzmannMachines(computable_loss, visible_activation='sigmoid', hidden_activation='sigmoid', visible_dim=1000, hidden_dim=100, initializer=None, optimizer_name='SGD', learning_rate=0.005, learning_attenuate_rate=1.0, attenuate_epoch=50, visible_dropout_rate=0.0, hidden_dropout_rate=0.0, visible_batch_norm=None, hidden_batch_norm=None, regularizatable_data_list=[], ctx=gpu(0), **kwargs)

Bases: mxnet.gluon.block.HybridBlock, accelbrainbase.observable_data.ObservableData

Restricted Boltzmann Machines (RBM).

According to graph theory, the structure of an RBM corresponds to a complete bipartite graph, a special kind of bipartite graph in which every node in the visible layer is connected to every node in the hidden layer. Based on statistical mechanics and thermodynamics (Ackley, Hinton, & Sejnowski, 1985), the state of this structure can be reflected by an energy function.
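
For visible units v, hidden units h, visible biases b, hidden biases c, and weight matrix W, this energy function takes the standard form (Ackley et al., 1985):

    E(v, h) = -\sum_i b_i v_i - \sum_j c_j h_j - \sum_{i,j} v_i w_{ij} h_j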

In relation to RBMs, Contrastive Divergence (CD) is a method for approximating the gradients of the log-likelihood (Hinton, 2002). This algorithm draws a distinction between a positive phase and a negative phase. Conceptually, the positive phase is to the negative phase what waking is to sleeping.

The procedure of this method is similar to the Markov Chain Monte Carlo (MCMC) method. However, unlike MCMC, the visible variables are not randomly initialized; instead, the observed data points in the training dataset are set as the first visible variables. Then, as in Gibbs sampling, drawing samples from the hidden and visible variables is repeated k times. Empirically (and surprisingly), k = 1 is often sufficient.

Note that this class does not support a Hybrid of imperative and symbolic programming. Only mxnet.ndarray is supported.
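
Illustratively, one CD-1 step can be written directly with mxnet.ndarray as follows. This is a didactic sketch of the algorithm described above, not this class's internal implementation:

    import mxnet.ndarray as nd

    def contrastive_divergence_step(v0, weights, b, c, learning_rate=0.005):
        """One CD-1 step: positive phase, one Gibbs step, negative phase."""
        # Positive phase: hidden probabilities given the observed data points.
        h0_prob = nd.sigmoid(nd.dot(v0, weights) + c)
        h0_sample = h0_prob > nd.random.uniform(shape=h0_prob.shape)
        # One Gibbs step (k = 1): reconstruct the visible layer, then the hidden layer.
        v1_prob = nd.sigmoid(nd.dot(h0_sample, weights.T) + b)
        h1_prob = nd.sigmoid(nd.dot(v1_prob, weights) + c)
        # Approximate log-likelihood gradients and update the parameters.
        batch_size = v0.shape[0]
        weights += learning_rate * (nd.dot(v0.T, h0_prob) - nd.dot(v1_prob.T, h1_prob)) / batch_size
        b += learning_rate * nd.mean(v0 - v1_prob, axis=0)
        c += learning_rate * nd.mean(h0_prob - h1_prob, axis=0)
        return v1_prob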

References

  • Ackley, D. H., Hinton, G. E., & Sejnowski, T. J. (1985). A learning algorithm for Boltzmann machines. Cognitive science, 9(1), 147-169.
  • Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural computation, 14(8), 1771-1800.
  • Le Roux, N., & Bengio, Y. (2008). Representational power of restricted Boltzmann machines and deep belief networks. Neural computation, 20(6), 1631-1649.
compute_loss(pred_arr, labeled_arr)

Compute loss.

Parameters:
  • pred_arr – mxnet.ndarray or mxnet.symbol.
  • labeled_arr – mxnet.ndarray or mxnet.symbol.
Returns:

loss.

extract_learned_dict()

Extract (pre-) learned parameters.

Returns: dict of the parameters.
feature_points_arr

getter for mxnet.ndarray of feature points in the hidden layer.

forward_propagation(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

get_feature_points_arr()

getter for mxnet.ndarray of feature points in the hidden layer.

get_hidden_activity_arr()

getter for mxnet.ndarray of activities in the hidden layer.

get_hidden_bias_arr()

getter for mxnet.ndarray of biases in the hidden layer.

get_loss_arr()

getter for losses.

get_loss_list()

getter for list of losses in training.

get_test_loss_arr()

getter for list of losses in testing.

get_visible_activity_arr()

getter for mxnet.ndarray of activities in the visible layer.

get_visible_bias_arr()

getter for mxnet.ndarray of biases in the visible layer.

get_weights_arr()

getter for mxnet.ndarray of weight matrices.

hidden_activity_arr

getter for mxnet.ndarray of activities in the hidden layer.

hidden_bias_arr

getter for mxnet.ndarray of biases in the hidden layer.

hybrid_forward(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

inference(observed_arr)

Infer samples drawn by IteratableData.generate_inferenced_samples().

Parameters: observed_arr – rank-2 array-like or sparse matrix of observed data points, whose shape is (batch size, feature points).
Returns: mxnet.ndarray of inferred feature points.
learn(iteratable_data)

Learn samples drawn by IteratableData.generate_learned_samples().

Parameters: iteratable_data – is-a IteratableData.
loss_arr

getter for losses.

loss_list

getter for list of losses in training.

regularize()

Regularization.

set_hidden_activity_arr(value)

setter for mxnet.ndarray of activities in the hidden layer.

set_hidden_bias_arr(value)

setter for mxnet.ndarray of biases in the hidden layer.

set_readonly(value)

setter

set_visible_activity_arr(value)

setter for mxnet.ndarray of activities in the visible layer.

set_visible_bias_arr(value)

setter for mxnet.ndarray of biases in the visible layer.

set_weights_arr(value)

setter for mxnet.ndarray of weight matrices.

test_loss_list

getter for list of losses in testing.

visible_activity_arr

getter for mxnet.ndarray of activities in the visible layer.

visible_bias_arr

getter for mxnet.ndarray of biases in the visible layer.

weights_arr

getter for mxnet.ndarray of weight matrices.

Module contents