Commit 8a30b90
Merge pull request fchollet#11 from hiroyachiba/master
typos in 6.1
fchollet authored Nov 20, 2017
2 parents 7e819b6 + dcaf783
Showing 2 changed files with 2 additions and 2 deletions.
6.1-one-hot-encoding-of-words-or-characters.ipynb (1 addition, 1 deletion)
@@ -155,7 +155,7 @@
"samples = ['The cat sat on the mat.', 'The dog ate my homework.']\n",
"\n",
"# We create a tokenizer, configured to only take\n",
"# into account the top-1000 most common on words\n",
"# into account the top-1000 most common words\n",
"tokenizer = Tokenizer(num_words=1000)\n",
"# This builds the word index\n",
"tokenizer.fit_on_texts(samples)\n",
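For context, this hunk touches the `Tokenizer` cell of the one-hot encoding notebook. A minimal runnable sketch of that usage follows; the `texts_to_sequences` and `texts_to_matrix` calls are assumptions about the surrounding cell, since the diff shows only the lines above:

```python
from keras.preprocessing.text import Tokenizer

samples = ['The cat sat on the mat.', 'The dog ate my homework.']

# We create a tokenizer, configured to only take
# into account the top-1000 most common words
tokenizer = Tokenizer(num_words=1000)
# This builds the word index
tokenizer.fit_on_texts(samples)

# Turn each sample into a list of integer word indices (assumed follow-up step)
sequences = tokenizer.texts_to_sequences(samples)

# One-hot (binary) representation: shape (num_samples, 1000) (assumed follow-up step)
one_hot_results = tokenizer.texts_to_matrix(samples, mode='binary')

word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
```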
6.1-using-word-embeddings.ipynb (1 addition, 1 deletion)
@@ -589,7 +589,7 @@
"Additionally, we freeze the embedding layer (we set its `trainable` attribute to `False`), following the same rationale as what you are \n",
"already familiar with in the context of pre-trained convnet features: when parts of a model are pre-trained (like our `Embedding` layer), \n",
"and parts are randomly initialized (like our classifier), the pre-trained parts should not be updated during training to avoid forgetting \n",
"what they already know. The large gradient updated triggered by the randomly initialized layers would be very disruptive to the already \n",
"what they already know. The large gradient update triggered by the randomly initialized layers would be very disruptive to the already \n",
"learned features."
]
},
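The paragraph this hunk corrects describes freezing a pre-trained `Embedding` layer. A minimal sketch of that pattern follows; the layer sizes, the classifier architecture, and the stand-in `embedding_matrix` are assumptions for illustration, not taken from this diff:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

max_words = 10000      # vocabulary size (assumed)
embedding_dim = 100    # e.g. 100-d GloVe vectors (assumed)
maxlen = 100           # padded sequence length (assumed)

# Stand-in for a matrix of pre-trained word vectors (assumed)
embedding_matrix = np.zeros((max_words, embedding_dim))

model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# Load the pre-trained vectors into the Embedding layer, then freeze it
# so the large gradient updates from the randomly initialized classifier
# do not destroy what the embeddings already encode
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False

model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
```

Note that `trainable` is set before `compile`; freezing a layer after compiling has no effect until the model is compiled again.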
