pysummarization.iteratabledata package

Submodules

pysummarization.iteratabledata.token_iterator module

class pysummarization.iteratabledata.token_iterator.TokenIterator(vectorizable_token, token_arr, epochs=1000, batch_size=25, seq_len=5, test_size=0.3, norm_mode=None, noiseable_data=None)[source]

Bases: accelbrainbase.iteratable_data.IteratableData
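The constructor parameters suggest this iterator holds a token sequence, splits it into training and test portions by `test_size`, and draws fixed-length windows in mini-batches. The following is a simplified, hypothetical sketch of that behavior in plain Python (not the library's implementation; the window-sampling strategy is an assumption for illustration):

```python
import random

def make_batches(token_arr, batch_size=25, seq_len=5, test_size=0.3, seed=0):
    """Hypothetical sketch of a sequence mini-batch sampler.

    Splits `token_arr` into training/test portions by `test_size` and
    draws one mini-batch of `seq_len`-length windows from the training
    portion. Sampling strategy is assumed, not taken from the library.
    """
    split = int(len(token_arr) * (1 - test_size))
    train_arr, test_arr = token_arr[:split], token_arr[split:]
    rng = random.Random(seed)
    batch = []
    for _ in range(batch_size):
        # Pick a random window that fits entirely in the training portion.
        start = rng.randrange(0, len(train_arr) - seq_len)
        batch.append(train_arr[start:start + seq_len])
    return batch, test_arr

tokens = list(range(100))
batch, test_arr = make_batches(tokens, batch_size=4, seq_len=5)
```

In the real class, `vectorizable_token` would map such token windows to vectors and the results would be returned as `mxnet.ndarray` values rather than Python lists.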

batch_size

getter

epochs

getter

generate_inferenced_samples()[source]

Draw and generate data. The targets will be drawn from all image files, sorted in ascending order by file name.

Returns:
    Tuple data. The shape is …
    - None.
    - None.
    - mxnet.ndarray of observed data points in test.
    - file path.

generate_learned_samples()[source]

Draw and generate data.

Returns:
    Tuple data. The shape is …
    - mxnet.ndarray of observed data points in training.
    - mxnet.ndarray of supervised data in training.
    - mxnet.ndarray of observed data points in test.
    - mxnet.ndarray of supervised data in test.

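A generator yielding such 4-tuples can be sketched as follows. This is a hypothetical, self-contained illustration, not the library's code: it assumes the supervised targets are the observed windows shifted by one token (next-token prediction), which is an assumption made here for the example.

```python
import random

def generate_learned_samples(token_arr, epochs=2, batch_size=3, seq_len=4,
                             test_size=0.3, seed=0):
    """Sketch: yield (train_obs, train_sup, test_obs, test_sup) per epoch.

    Supervised targets are the observed windows shifted by one token;
    that pairing is an assumption for illustration only.
    """
    split = int(len(token_arr) * (1 - test_size))
    rng = random.Random(seed)

    def draw(arr):
        obs, sup = [], []
        for _ in range(batch_size):
            s = rng.randrange(0, len(arr) - seq_len - 1)
            obs.append(arr[s:s + seq_len])          # observed window
            sup.append(arr[s + 1:s + seq_len + 1])  # next-token targets
        return obs, sup

    for _ in range(epochs):
        train_obs, train_sup = draw(token_arr[:split])
        test_obs, test_sup = draw(token_arr[split:])
        yield train_obs, train_sup, test_obs, test_sup
```

In the real method the four elements would be `mxnet.ndarray` values produced via `vectorizable_token`, and the loop would run for `epochs` iterations as configured on the iterator.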
get_batch_size()[source]

getter

get_epochs()[source]

getter

get_norm_mode()[source]

getter

get_scale()[source]

getter

get_seq_len()[source]

getter

get_vectorizable_token()[source]

getter

norm_mode

getter

pre_normalize(arr)[source]

Normalize before observation.

Parameters:
    arr – Tensor.

Returns:
    Tensor.

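As a rough sketch of what pre-observation normalization might do, the function below implements two common schemes on plain Python lists. The `norm_mode` labels (`"min_max"`, `"z_score"`) are hypothetical names chosen for this illustration; the values the library actually accepts may differ.

```python
def pre_normalize(arr, norm_mode=None):
    """Sketch of normalization applied before observation.

    The `norm_mode` values ("min_max", "z_score") are hypothetical
    labels for illustration; they are not taken from the library.
    """
    if norm_mode is None:
        return list(arr)  # pass-through, matching norm_mode=None default
    if norm_mode == "min_max":
        # Rescale to [0, 1]; guard against a zero-width range.
        lo, hi = min(arr), max(arr)
        span = (hi - lo) or 1.0
        return [(x - lo) / span for x in arr]
    if norm_mode == "z_score":
        # Center on the mean and scale by the standard deviation.
        mean = sum(arr) / len(arr)
        var = sum((x - mean) ** 2 for x in arr) / len(arr)
        std = var ** 0.5 or 1.0
        return [(x - mean) / std for x in arr]
    raise ValueError(f"unknown norm_mode: {norm_mode}")
```

The real method operates on tensors (`mxnet.ndarray`) rather than lists, but the shape of the computation is the same.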
scale

getter

seq_len

getter

set_batch_size(value)[source]

setter

set_epochs(value)[source]

setter

set_norm_mode(value)[source]

setter

set_readonly(value)[source]

setter

set_scale(value)[source]

setter

set_seq_len(value)[source]

setter

set_vectorizable_token(value)[source]

setter

vectorizable_token

getter

Module contents