accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks package

Submodules

accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.convolutional_auto_encoder module

class accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.convolutional_auto_encoder.ConvolutionalAutoEncoder(encoder, decoder, computable_loss, initializer=None, learning_rate=1e-05, learning_attenuate_rate=1.0, attenuate_epoch=50, hidden_units_list=[], output_nn=None, hidden_dropout_rate_list=[], optimizer_name='SGD', hidden_activation_list=[], hidden_batch_norm_list=[], ctx=gpu(0), hybridize_flag=True, regularizatable_data_list=[], scale=1.0, tied_weights_flag=True, init_deferred_flag=None, wd=None, **kwargs)

Bases: accelbrainbase.observabledata._mxnet.convolutional_neural_networks.ConvolutionalNeuralNetworks

Convolutional Auto-Encoder.

A stack of Convolutional Auto-Encoders (Masci, J., et al., 2011) forms a convolutional neural network (CNN), which is among the most successful models for supervised image classification. Each Convolutional Auto-Encoder is trained using conventional on-line gradient descent without additional regularization terms.

In this library, the Convolutional Auto-Encoder is also based on the Encoder/Decoder scheme. The encoder is to the decoder what the convolution is to the deconvolution. Deconvolutions, also called transposed convolutions, “work by swapping the forward and backward passes of a convolution.” (Dumoulin, V., & Visin, F. 2016, p. 20)
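
A minimal construction sketch follows. The keyword arguments and their defaults are taken from the signature on this page; encoder, decoder, and computable_loss are assumed to be already-built objects (an is-a ConvolutionalNeuralNetworks pair and an is-a ComputableLoss) whose construction is omitted here.

    # A hedged construction sketch; `encoder`, `decoder`, and
    # `computable_loss` are assumed to exist and are not built here.
    import mxnet as mx
    from accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.convolutional_auto_encoder import ConvolutionalAutoEncoder

    conv_auto_encoder = ConvolutionalAutoEncoder(
        encoder=encoder,                  # is-a ConvolutionalNeuralNetworks
        decoder=decoder,                  # is-a ConvolutionalNeuralNetworks
        computable_loss=computable_loss,  # is-a ComputableLoss
        learning_rate=1e-05,
        ctx=mx.gpu(0),                    # or mx.cpu() without a GPU
        hybridize_flag=True,
        tied_weights_flag=True,           # tie encoder/decoder weights
    )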

References

  • Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
  • Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011, June). Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks (pp. 52-59). Springer, Berlin, Heidelberg.
batch_size

getter for batch size.

collect_params(select=None)

Overridden version of collect_params in mxnet.gluon.HybridBlock.

computable_loss

getter for ComputableLoss.

compute_loss(pred_arr, labeled_arr)

Compute loss.

Parameters:
  • pred_arr – mxnet.ndarray or mxnet.symbol.
  • labeled_arr – mxnet.ndarray or mxnet.symbol.
Returns:

loss.
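
A short usage sketch, assuming conv_auto_encoder was built as above and that both arrays share one rank-4 shape on the model's context:

    import mxnet as mx

    # Arrays should live on the same context as the model (here mx.gpu(0)).
    pred_arr = mx.nd.random.uniform(shape=(20, 1, 28, 28), ctx=mx.gpu(0))
    labeled_arr = mx.nd.random.uniform(shape=(20, 1, 28, 28), ctx=mx.gpu(0))
    loss = conv_auto_encoder.compute_loss(pred_arr, labeled_arr)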

extract_feature_points()

Extract the activities in the hidden layer and reset them.

Returns:mxnet.ndarray (array-like or sparse matrix) of feature points or virtual visible observed data points.
extract_learned_dict()

Extract (pre-) learned parameters.

Returns:dict of the parameters.
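
A sketch combining the two extraction methods, assuming conv_auto_encoder has just run inference():

    # Inspect the hidden feature points and the learned parameter dict.
    feature_arr = conv_auto_encoder.extract_feature_points()
    params_dict = conv_auto_encoder.extract_learned_dict()
    print(feature_arr.shape)
    print(list(params_dict.keys())[:5])  # a few parameter names
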
forward_propagation(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

get_batch_size()

getter for batch size.

get_computable_loss()

getter for ComputableLoss.

get_init_deferred_flag()

getter for the bool flag that indicates whether initialization in this class will be deferred.

get_loss_arr()

getter for losses.

hybrid_forward(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

inference(observed_arr)

Infer the feature points to reconstruct the observed data points.

Parameters:observed_arr – rank-4 array-like or sparse matrix as the observed data points. The shape is: (batch size, channel, height, width)
Returns:mxnet.ndarray of inferred feature points.
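
A reconstruction sketch, assuming a trained conv_auto_encoder on mx.gpu(0) and MNIST-like rank-4 input:

    import mxnet as mx

    observed_arr = mx.nd.random.uniform(shape=(20, 1, 28, 28), ctx=mx.gpu(0))
    reconstructed_arr = conv_auto_encoder.inference(observed_arr)
    print(reconstructed_arr.shape)  # an auto-encoder reproduces the rank-4 input shape
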
init_deferred_flag

getter for the bool flag that indicates whether initialization in this class will be deferred.

learn(iteratable_data)

Learn the observed data points to build a vector representation of the input images.

Parameters:iteratable_data – is-a IteratableData.
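
A training sketch, assuming conv_auto_encoder was built as above and unlabeled_image_iterator is an already-built is-a IteratableData (the concrete iterator class is an assumption):

    # Fit the auto-encoder; losses accumulate in `loss_arr` during training.
    conv_auto_encoder.learn(iteratable_data=unlabeled_image_iterator)
    print(conv_auto_encoder.loss_arr[-1])
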
loss_arr

getter for losses.

regularize()

Regularization.

set_batch_size(value)

setter for batch size.

set_computable_loss(value)

setter for ComputableLoss.

set_init_deferred_flag(value)

setter for the bool flag that indicates whether initialization in this class will be deferred.

set_readonly(value)

setter for the read-only flag.

tie_weights()

Tie weights.

accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.drc_networks module

class accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.drc_networks.DRCNetworks(convolutional_auto_encoder, drcn_loss, initializer=None, learning_rate=1e-05, learning_attenuate_rate=1.0, attenuate_epoch=50, hidden_units_list=[], output_nn=None, hidden_dropout_rate_list=[], optimizer_name='SGD', hidden_activation_list=[], hidden_batch_norm_list=[], ctx=gpu(0), hybridize_flag=True, regularizatable_data_list=[], scale=1.0, tied_weights_flag=True, tol=3.0, est=1e-08, wd=0.0, **kwargs)

Bases: accelbrainbase.observabledata._mxnet.convolutional_neural_networks.ConvolutionalNeuralNetworks

Deep Reconstruction-Classification Networks (DRCN or DRCNetworks).

Deep Reconstruction-Classification Network (DRCN or DRCNetworks) is a convolutional network that jointly learns two tasks:

  1. supervised source label prediction.
  2. unsupervised target data reconstruction.

Ideally, a discriminative representation should model both the label and the structure of the data. Based on that intuition, Ghifary, M., et al. (2016) hypothesize that a domain-adaptive representation should satisfy two criteria:

  1. classify well the source domain labeled data.
  2. reconstruct well the target domain unlabeled data, which can be viewed as an approximation of the ideal discriminative representation.

The encoding parameters of the DRCN are shared across both tasks, while the decoding parameters are separated. The aim is that the learned label prediction function can perform well on classifying images in the target domain; the data reconstruction can thus be viewed as an auxiliary task that supports the adaptation of the label prediction.
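
A minimal construction sketch follows; convolutional_auto_encoder is assumed to be an already-built ConvolutionalAutoEncoder (see above) and drcn_loss an already-built joint reconstruction-classification loss (the concrete loss class is an assumption).

    import mxnet as mx
    from accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.drc_networks import DRCNetworks

    drcn = DRCNetworks(
        convolutional_auto_encoder=convolutional_auto_encoder,
        drcn_loss=drcn_loss,   # joint reconstruction/classification loss
        learning_rate=1e-05,
        ctx=mx.gpu(0),         # or mx.cpu() without a GPU
        hybridize_flag=True,
    )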

References

  • Ghifary, M., Kleijn, W. B., Zhang, M., Balduzzi, D., & Li, W. (2016, October). Deep reconstruction-classification networks for unsupervised domain adaptation. In European Conference on Computer Vision (pp. 597-613). Springer, Cham.
acc_arr

getter for accuracies.

collect_params(select=None)

Overridden version of collect_params in mxnet.gluon.HybridBlock.

compute_loss(decoded_arr, prob_arr, batch_observed_arr, batch_target_arr)

Compute loss.

Parameters:
  • decoded_arr – mxnet.ndarray or mxnet.symbol of decoded feature points.
  • prob_arr – mxnet.ndarray or mxnet.symbol of predicted label data.
  • batch_observed_arr – mxnet.ndarray or mxnet.symbol of observed data points.
  • batch_target_arr – mxnet.ndarray or mxnet.symbol of label data.
Returns:

loss.

forward_propagation(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

Tuple data:
  • mxnet.ndarray or mxnet.symbol of reconstructed feature points.
  • mxnet.ndarray or mxnet.symbol of the inferred label.
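
A sketch of unpacking the tuple, assuming drcn was built as above and passing the mxnet.ndarray module as F:

    import mxnet as mx

    batch_arr = mx.nd.random.uniform(shape=(20, 3, 32, 32), ctx=mx.gpu(0))
    decoded_arr, prob_arr = drcn.forward_propagation(mx.nd, batch_arr)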

get_acc_list()

getter for accuracies.

get_loss_arr()

getter for losses.

inference_auto_encoder(x)

Hybrid forward with Gluon API (Auto-Encoder only).

Parameters:x – mxnet.ndarray of observed data points.
Returns:mxnet.ndarray or mxnet.symbol of reconstructed feature points.
learn(iteratable_data)

Learn the observed data points with domain adaptation.

Parameters:iteratable_data – is-a DRCNIterator.
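
A domain-adaptation training sketch, assuming drcn_iterator is an already-built is-a DRCNIterator yielding labeled source batches and unlabeled target batches (the exact batch protocol is an assumption):

    # Joint training; losses and accuracies accumulate as learning proceeds.
    drcn.learn(iteratable_data=drcn_iterator)
    print(drcn.loss_arr[-1], drcn.acc_arr[-1])
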
loss_arr

getter for losses.

regularize()

Regularization.

set_readonly(value)

setter for the read-only flag.

accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.mobilenet_3d_v2 module

class accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.mobilenet_3d_v2.MobileNet3DV2(computable_loss, initializer=None, learning_rate=1e-05, learning_attenuate_rate=1.0, attenuate_epoch=50, ctx=gpu(0), hybridize_flag=True, activation='relu6', filter_multiplier=1.0, input_filter_n=32, input_kernel_size=(1, 3, 3), input_strides=(1, 2, 2), input_padding=(1, 1, 1), bottleneck_dict_list=[{'filter_rate': 1, 'filter_n': 16, 'block_n': 1, 'stride': (1, 1, 1)}, {'filter_rate': 6, 'filter_n': 24, 'block_n': 2, 'stride': (1, 2, 2)}, {'filter_rate': 6, 'filter_n': 32, 'block_n': 3, 'stride': (1, 2, 2)}, {'filter_rate': 6, 'filter_n': 64, 'block_n': 4, 'stride': (1, 2, 2)}, {'filter_rate': 6, 'filter_n': 96, 'block_n': 3, 'stride': (1, 1, 1)}, {'filter_rate': 6, 'filter_n': 160, 'block_n': 3, 'stride': (1, 2, 2)}, {'filter_rate': 6, 'filter_n': 320, 'block_n': 1, 'stride': (1, 1, 1)}], hidden_filter_n=1280, pool_size=(1, 7, 7), output_nn=None, optimizer_name='SGD', shortcut_flag=True, global_shortcut_flag=False, output_batch_norm_flag=True, scale=1.0, init_deferred_flag=None, **kwargs)

Bases: accelbrainbase.observabledata._mxnet.convolutional_neural_networks.ConvolutionalNeuralNetworks

3D MobileNet V2.
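
A construction sketch, assuming computable_loss is an already-built is-a ComputableLoss; the 3-tuple kernel, stride, and pooling arguments follow the (depth, height, width) layout shown in the signature defaults.

    import mxnet as mx
    from accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.mobilenet_3d_v2 import MobileNet3DV2

    mobilenet_3d = MobileNet3DV2(
        computable_loss=computable_loss,  # is-a ComputableLoss (assumed built)
        ctx=mx.gpu(0),
        input_kernel_size=(1, 3, 3),      # (depth, height, width)
        input_strides=(1, 2, 2),
        pool_size=(1, 7, 7),
    )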

References

  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
  • Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., … & Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
  • Ji, S., Xu, W., Yang, M., & Yu, K. (2012). 3D convolutional neural networks for human action recognition. IEEE transactions on pattern analysis and machine intelligence, 35(1), 221-231.
  • Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. C. (2018). Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4510-4520).
forward_propagation(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

get_initializer()

getter for mxnet.initializer.

hybrid_forward(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

inference(observed_arr)

Infer the labels.

Parameters:observed_arr – rank-5 array-like or sparse matrix as the observed data points. The shape is: (batch size, channel, depth, height, width)
Returns:mxnet.ndarray of inferred feature points.
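
An inference sketch for the 3D variant, assuming NCDHW layout for rank-5 video-like input (the layout is inferred from the 3-tuple kernel sizes above, not stated on this page):

    import mxnet as mx

    # (batch size, channel, depth, height, width); shape values are illustrative.
    observed_arr = mx.nd.random.uniform(shape=(20, 3, 16, 112, 112), ctx=mx.gpu(0))
    pred_arr = mobilenet_3d.inference(observed_arr)
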
initializer

getter for mxnet.initializer.

set_initializer(value)

setter for mxnet.initializer.

accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.mobilenet_v2 module

class accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.mobilenet_v2.MobileNetV2(computable_loss, initializer=None, learning_rate=1e-05, learning_attenuate_rate=1.0, attenuate_epoch=50, ctx=gpu(0), hybridize_flag=True, activation='relu6', filter_multiplier=1.0, input_filter_n=32, input_kernel_size=(3, 3), input_strides=(2, 2), input_padding=(1, 1), bottleneck_dict_list=[{'filter_rate': 1, 'filter_n': 16, 'block_n': 1, 'stride': 1}, {'filter_rate': 6, 'filter_n': 24, 'block_n': 2, 'stride': 2}, {'filter_rate': 6, 'filter_n': 32, 'block_n': 3, 'stride': 2}, {'filter_rate': 6, 'filter_n': 64, 'block_n': 4, 'stride': 2}, {'filter_rate': 6, 'filter_n': 96, 'block_n': 3, 'stride': 1}, {'filter_rate': 6, 'filter_n': 160, 'block_n': 3, 'stride': 2}, {'filter_rate': 6, 'filter_n': 320, 'block_n': 1, 'stride': 1}], hidden_filter_n=1280, pool_size=(7, 7), output_nn=None, optimizer_name='SGD', shortcut_flag=True, global_shortcut_flag=False, output_batch_norm_flag=True, regularizatable_data_list=[], scale=1.0, init_deferred_flag=None, **kwargs)

Bases: accelbrainbase.observabledata._mxnet.convolutional_neural_networks.ConvolutionalNeuralNetworks

MobileNet V2.
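
A construction sketch; computable_loss is assumed to be an already-built is-a ComputableLoss, and a shortened bottleneck_dict_list is passed only to illustrate the dict format (the defaults in the signature reproduce the standard inverted-residual stack).

    import mxnet as mx
    from accelbrainbase.observabledata._mxnet.convolutionalneuralnetworks.mobilenet_v2 import MobileNetV2

    mobilenet = MobileNetV2(
        computable_loss=computable_loss,  # is-a ComputableLoss (assumed built)
        ctx=mx.gpu(0),
        bottleneck_dict_list=[
            # expansion rate, output filters, repeated blocks, stride
            {'filter_rate': 1, 'filter_n': 16, 'block_n': 1, 'stride': 1},
            {'filter_rate': 6, 'filter_n': 24, 'block_n': 2, 'stride': 2},
        ],
    )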

References

  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
  • Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., … & Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
  • Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. C. (2018). Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4510-4520).
forward_propagation(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

get_initializer()

getter for mxnet.initializer.

hybrid_forward(F, x)

Hybrid forward with Gluon API.

Parameters:
  • F – mxnet.ndarray or mxnet.symbol.
  • x – mxnet.ndarray of observed data points.
Returns:

mxnet.ndarray or mxnet.symbol of inferred feature points.

inference(observed_arr)

Infer the labels.

Parameters:observed_arr – rank-4 array-like or sparse matrix as the observed data points. The shape is: (batch size, channel, height, width)
Returns:mxnet.ndarray of inferred feature points.
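
An inference sketch, assuming mobilenet was built as above and the conventional 224x224 RGB input resolution:

    import mxnet as mx

    observed_arr = mx.nd.random.uniform(shape=(20, 3, 224, 224), ctx=mx.gpu(0))
    pred_arr = mobilenet.inference(observed_arr)
    print(pred_arr.shape)
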
initializer

getter for mxnet.initializer.

set_initializer(value)

setter for mxnet.initializer.

Module contents