
Text Generation Using LSTM

In this post, we generate text using an LSTM network. First, import the libraries and load the data:
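A minimal sketch of the setup. The article's actual dataset is not shown, so a tiny placeholder corpus stands in for it here; everything else follows the standard Keras preprocessing stack the post refers to:

```python
# Sketch of the setup. The article's dataset is not shown, so a tiny
# placeholder corpus (an assumption) stands in for it below.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

# Placeholder training data: one sentence per list entry.
corpus = [
    "deep learning models can generate text",
    "recurrent networks read one word at a time",
    "the network predicts the next word in a sentence",
]
```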
The next step is to tokenize the text. To do that, we define a Tokenizer and call fit_on_texts to index the words in the dataset:
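The tokenization step might look like this (the small corpus below is a placeholder standing in for the article's dataset, which yields 263 words):

```python
from tensorflow.keras.preprocessing.text import Tokenizer

# Placeholder corpus (assumption; the article's dataset is not shown).
corpus = [
    "deep learning models can generate text",
    "recurrent networks read one word at a time",
    "the network predicts the next word in a sentence",
]

# Fit the tokenizer; each distinct word gets an integer index starting at 1.
tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)

# +1 because index 0 is reserved for padding and never assigned to a word.
total_words = len(tokenizer.word_index) + 1
```

On the article's dataset this step reports 263 words; on the toy corpus above it is of course much smaller.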
We get 263 words in total in the training data. After tokenizing, we need to define the training inputs and labels. Before that, it is convenient to convert each sentence into a sequence of word indices, then split each sequence into many smaller subsequences: the first subsequence contains the first 2 units, the next contains the first 3 units, and so on.
By splitting each sequence into many subsequences, we can create the training and prediction sets. In each subsequence, every unit except the last is the training input, and the last unit is the label to predict.
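The splitting scheme described above can be sketched as follows (same placeholder corpus; left-padding keeps the label in the last column of every row):

```python
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Placeholder corpus (assumption; the article's dataset is not shown).
corpus = [
    "deep learning models can generate text",
    "recurrent networks read one word at a time",
    "the network predicts the next word in a sentence",
]

tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)

# Build n-gram prefixes: the first 2 tokens, first 3 tokens, ... of each line.
input_sequences = []
for line in corpus:
    token_list = tokenizer.texts_to_sequences([line])[0]
    for i in range(2, len(token_list) + 1):
        input_sequences.append(token_list[:i])

# Left-pad to a common length, then split each row: everything but the
# last token is the input, the last token is the label to predict.
max_len = max(len(s) for s in input_sequences)
padded = np.array(pad_sequences(input_sequences, maxlen=max_len, padding="pre"))
X, y = padded[:, :-1], padded[:, -1]
```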
Next, build a Bidirectional LSTM to predict the next word and train it for 500 epochs:
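A Bidirectional LSTM along these lines; the embedding and hidden sizes are illustrative assumptions (the article only specifies the 500 epochs), and the placeholder corpus again stands in for the real dataset:

```python
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

# Placeholder corpus (assumption; the article's dataset is not shown).
corpus = [
    "deep learning models can generate text",
    "recurrent networks read one word at a time",
    "the network predicts the next word in a sentence",
]

tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)
total_words = len(tokenizer.word_index) + 1

# n-gram prefixes, padded, then split into inputs X and labels y.
input_sequences = []
for line in corpus:
    token_list = tokenizer.texts_to_sequences([line])[0]
    for i in range(2, len(token_list) + 1):
        input_sequences.append(token_list[:i])
max_len = max(len(s) for s in input_sequences)
padded = np.array(pad_sequences(input_sequences, maxlen=max_len, padding="pre"))
X, y = padded[:, :-1], padded[:, -1]

model = Sequential([
    Embedding(total_words, 64),                # embedding size is an assumption
    Bidirectional(LSTM(100)),                  # hidden size is an assumption
    Dense(total_words, activation="softmax"),  # one score per vocabulary word
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam", metrics=["accuracy"])
model.fit(X, y, epochs=500, verbose=0)  # 500 epochs, as in the article
```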
And check the result:
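To check the result, one would feed a seed phrase and repeatedly predict and append the next word. The `generate` helper below is a hypothetical sketch, not code from the article, run on the same placeholder corpus:

```python
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

# Placeholder corpus (assumption; the article's dataset is not shown).
corpus = [
    "deep learning models can generate text",
    "recurrent networks read one word at a time",
    "the network predicts the next word in a sentence",
]

tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)
total_words = len(tokenizer.word_index) + 1

input_sequences = []
for line in corpus:
    token_list = tokenizer.texts_to_sequences([line])[0]
    for i in range(2, len(token_list) + 1):
        input_sequences.append(token_list[:i])
max_len = max(len(s) for s in input_sequences)
padded = np.array(pad_sequences(input_sequences, maxlen=max_len, padding="pre"))
X, y = padded[:, :-1], padded[:, -1]

model = Sequential([
    Embedding(total_words, 64),
    Bidirectional(LSTM(100)),
    Dense(total_words, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam", metrics=["accuracy"])
model.fit(X, y, epochs=500, verbose=0)

def generate(seed_text, n_words):
    """Hypothetical helper: predict the next word n_words times, greedily."""
    for _ in range(n_words):
        token_list = tokenizer.texts_to_sequences([seed_text])[0]
        padded_seed = pad_sequences([token_list], maxlen=max_len - 1,
                                    padding="pre")
        predicted = int(np.argmax(model.predict(padded_seed, verbose=0)))
        next_word = tokenizer.index_word.get(predicted)
        if next_word is None:  # index 0 is padding, not a real word
            break
        seed_text += " " + next_word
    return seed_text

print(generate("deep learning", 3))
```

On this toy corpus the model memorizes the training sentences, so the continuation tends to echo them; on the article's 263-word dataset the output is correspondingly richer.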