Update the tokenizer's internal vocabulary based on a list of texts or a list of sequences.

fit_text_tokenizer(object, x)

Arguments

object

Tokenizer returned by text_tokenizer()

x

Vector/list of strings, or a generator of strings (for memory efficiency). Alternatively, a list of "sequences" (a sequence is a list of integer word indices).

Note

Required before using texts_to_sequences(), texts_to_matrix(), or sequences_to_matrix().

See also

text_tokenizer(), texts_to_sequences(), texts_to_matrix(), sequences_to_matrix()
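
Examples

A minimal sketch of typical usage, assuming the keras R package is attached; the sample texts and the num_words value below are illustrative choices, not required settings.

library(keras)

# create a tokenizer restricted to the 1,000 most frequent words
# (num_words = 1000 is an illustrative choice)
tokenizer <- text_tokenizer(num_words = 1000)

# fit the tokenizer on a small vector of example texts;
# the tokenizer object is updated in place
texts <- c(
  "The cat sat on the mat.",
  "The dog ate my homework."
)
tokenizer %>% fit_text_tokenizer(texts)

# after fitting, the tokenizer can be used to convert texts
# to integer word-index sequences
sequences <- texts_to_sequences(tokenizer, texts)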