Only the top "num_words" most frequent words will be taken into account, and only words known by the tokenizer will be considered.

texts_to_sequences(tokenizer, texts)

Arguments

tokenizer

Tokenizer

texts

Vector/list of texts (strings).
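A minimal sketch of the typical workflow, assuming the keras R package is attached and using `text_tokenizer()` and `fit_text_tokenizer()` to prepare the tokenizer (the corpus and `num_words` value are illustrative):

```r
library(keras)

# A small illustrative corpus
texts <- c("the cat sat on the mat",
           "the dog sat on the log")

# Build a tokenizer limited to the top 10 most frequent words,
# then fit it on the corpus
tokenizer <- text_tokenizer(num_words = 10) %>%
  fit_text_tokenizer(texts)

# Convert each text to a sequence of integer word indices;
# words outside the top num_words, or unknown to the tokenizer,
# are dropped from the output
sequences <- texts_to_sequences(tokenizer, texts)
```

Each element of `sequences` is a vector of integers, one per retained word, indexed by frequency rank in the fitted tokenizer.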

See also