Only the top "num_words" most frequent words will be taken into account, and only words known by the tokenizer will be converted; all other words are skipped.

texts_to_sequences_generator(tokenizer, texts)

Arguments

tokenizer: Tokenizer object.

texts: Vector/list of texts (strings).

Value

A generator which yields one integer sequence (a vector of word indices) per input text.
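To illustrate the behaviour described above, here is a minimal, self-contained sketch in Python. It is not the real Tokenizer implementation; the `word_index` dictionary and the helper function are hypothetical stand-ins, but the generator semantics (lazy yielding, unknown words skipped, "num_words" cutoff applied) mirror what the documentation states.

```python
# Minimal sketch of the generator behaviour, assuming a toy word index
# built ahead of time; this is NOT the real keras Tokenizer class.
def texts_to_sequences_generator(word_index, texts, num_words=None):
    """Yield one integer sequence per text, lazily.

    Words absent from word_index, or ranked at/below the num_words
    cutoff, are skipped -- mirroring the documented behaviour.
    """
    for text in texts:
        seq = []
        for word in text.lower().split():
            i = word_index.get(word)
            if i is not None and (num_words is None or i < num_words):
                seq.append(i)
        yield seq

# Hypothetical index mapping words to integer ranks (1 = most frequent).
word_index = {"the": 1, "cat": 2, "sat": 3}
gen = texts_to_sequences_generator(word_index, ["the cat sat", "the dog sat"])
print(next(gen))  # [1, 2, 3]
print(next(gen))  # [1, 3]  ("dog" is unknown to the index and skipped)
```

Because the result is a generator rather than a list, each sequence is produced on demand, which keeps memory usage flat when iterating over a large corpus.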

See also