pydbm.cnn.convolutionalneuralnetwork package¶
Subpackages¶
- pydbm.cnn.convolutionalneuralnetwork.convolutionalautoencoder package
- Submodules
- pydbm.cnn.convolutionalneuralnetwork.convolutionalautoencoder.contractive_convolutional_auto_encoder module
- pydbm.cnn.convolutionalneuralnetwork.convolutionalautoencoder.convolutional_ladder_networks module
- pydbm.cnn.convolutionalneuralnetwork.convolutionalautoencoder.repelling_convolutional_auto_encoder module
- Module contents
Submodules¶
pydbm.cnn.convolutionalneuralnetwork.convolutional_auto_encoder module¶
class pydbm.cnn.convolutionalneuralnetwork.convolutional_auto_encoder.ConvolutionalAutoEncoder¶
Bases: pydbm.cnn.convolutional_neural_network.ConvolutionalNeuralNetwork
Convolutional Auto-Encoder which is-a ConvolutionalNeuralNetwork.
A stack of Convolutional Auto-Encoders (Masci, J., et al., 2011) forms a convolutional neural network (CNN), which is among the most successful models for supervised image classification. Each Convolutional Auto-Encoder is trained using conventional on-line gradient descent without additional regularization terms.
In this library, the Convolutional Auto-Encoder is also based on the Encoder/Decoder scheme: the encoder is to the decoder what the Convolution is to the Deconvolution. Deconvolutions, also called transposed convolutions, “work by swapping the forward and backward passes of a convolution.” (Dumoulin, V., & Visin, F., 2016, p. 20)
References
- Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
- Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011, June). Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks (pp. 52-59). Springer, Berlin, Heidelberg.
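To illustrate the “swapping the forward and backward passes” idea independently of pydbm's API: a 1-D convolution with stride 1 and no padding can be written as multiplication by a matrix C, and the corresponding transposed convolution is multiplication by C.T, mapping the output size back to the input size. A minimal NumPy sketch (kernel and signal sizes are arbitrary choices for illustration):

```python
import numpy as np

kernel = np.array([1.0, 2.0, 1.0])
n_in = 5
n_out = n_in - kernel.size + 1  # valid convolution: 3 outputs

# Build the convolution matrix C (shape: n_out x n_in); each row is
# the kernel shifted by one position.
C = np.zeros((n_out, n_in))
for i in range(n_out):
    C[i, i:i + kernel.size] = kernel

x = np.arange(1.0, n_in + 1.0)  # input signal: [1, 2, 3, 4, 5]
y = C @ x        # forward pass of the convolution (shape: (3,))
x_up = C.T @ y   # transposed convolution: swaps forward/backward passes,
                 # recovering the input shape (5,) (not the input values)

print(y)      # [ 8. 12. 16.]
print(x_up.shape)  # (5,)
```

Note that the transposed convolution only restores the input's shape, not its values; this is why the decoder's weights must still be learned.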
back_propagation¶
Back propagation in CNN.
Override.
Parameters: Delta.
Returns: Delta.
extract_feature_points_arr¶
Extract feature points.
Returns: np.ndarray of feature points in the hidden layer, i.e. the encoded data.
forward_propagation¶
Forward propagation in Convolutional Auto-Encoder.
Override.
Parameters: img_arr – np.ndarray of image file array.
Returns: Propagated np.ndarray.
optimize¶
Back propagation.
Parameters:
- learning_rate – Learning rate.
- epoch – Current epoch.
pydbm.cnn.convolutionalneuralnetwork.residual_learning module¶
class pydbm.cnn.convolutionalneuralnetwork.residual_learning.ResidualLearning¶
Bases: pydbm.cnn.convolutional_neural_network.ConvolutionalNeuralNetwork
Deep Residual Learning Framework, which hypothesizes that “it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.” (He, K. et al., 2016, p. 771)
References
- He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
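The identity-shortcut idea behind this hypothesis can be sketched independently of pydbm's API: a residual block computes F(x) + x, so driving F's weights to zero makes the block an exact identity mapping. A minimal NumPy illustration (all names here are hypothetical, not part of pydbm):

```python
import numpy as np

def residual_block(x, w1, w2):
    # F(x) = W2 @ relu(W1 @ x); the block outputs F(x) + x,
    # where "+ x" is the identity shortcut connection.
    h = np.maximum(w1 @ x, 0.0)
    return w2 @ h + x

rng = np.random.default_rng(0)
x = rng.standard_normal(4)

# If the optimal mapping is the identity, the solver only has to push
# F's weights toward zero: with zero weights F(x) == 0, and the block
# reduces exactly to the identity mapping.
w_zero = np.zeros((4, 4))
y_zero = residual_block(x, w_zero, w_zero)
print(np.allclose(y_zero, x))  # True
```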
forward_propagation¶
Forward propagation in CNN.
Override.
Parameters: img_arr – np.ndarray of image file array.
Returns: Propagated np.ndarray.
inference¶
Infer the feature points to reconstruct the time-series.
Override.
Parameters: observed_arr – Array-like or sparse matrix of observed data points.
Returns: Predicted array-like or sparse matrix.
learn¶
Learn with the Deep Residual Learning Framework.
“With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.” (He, K. et al., 2016, p. 771)
Parameters:
- observed_arr – np.ndarray of observed data points.
- target_arr – np.ndarray of labeled data. If None, the function of this CNN model is equivalent to a Convolutional Auto-Encoder.
References
- He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).