
Commit: typos
Hiroya Chiba committed Sep 18, 2017
1 parent c6a3dfc commit 51d69a6
Showing 2 changed files with 6 additions and 6 deletions.
6 changes: 3 additions & 3 deletions 6.1-using-word-embeddings.ipynb
@@ -104,7 +104,7 @@
"legal document classification model, because the importance of certain semantic relationships varies from task to task.\n",
"\n",
"It is thus reasonable to __learn__ a new embedding space with every new task. Thankfully, backpropagation makes this really easy, and Keras makes it \n",
"even easier. It's just about learning the weights a layer: the `Embedding` layer."
"even easier. It's just about learning the weights of a layer: the `Embedding` layer."
]
},
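For context, a minimal sketch of instantiating such a layer in Keras (the sizes below are illustrative, not taken from this commit):

```python
from keras.layers import Embedding

# A sketch: the Embedding layer takes at least two arguments, the number
# of possible tokens and the dimensionality of the embeddings. Its weight
# matrix is learned by backpropagation like any other layer's weights.
embedding_layer = Embedding(10000, 8)  # illustrative sizes
```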
{
@@ -151,7 +151,7 @@
"downstream model can exploit. Once fully trained, your embedding space will show a lot of structure -- a kind of structure specialized for \n",
"the specific problem you were training your model for.\n",
"\n",
"Let's apply this idea to the IMDB movie review sentiment prediction task that you are already familiar with. With, let's quickly prepare \n",
"Let's apply this idea to the IMDB movie review sentiment prediction task that you are already familiar with. Let's quickly prepare \n",
"the data. We will restrict the movie reviews to the top 10,000 most common words (like we did the first time we worked with this dataset), \n",
"and cut the reviews after only 20 words. Our network will simply learn 8-dimensional embeddings for each of the 10,000 words, turn the \n",
"input integer sequences (2D integer tensor) into embedded sequences (3D float tensor), flatten the tensor to 2D, and train a single `Dense` \n",
@@ -267,7 +267,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We get to a validation accuracy of ~76%, which is pretty good considering that we are only look at the first 20 words in every review. But \n",
"We get to a validation accuracy of ~76%, which is pretty good considering that we only look at the first 20 words in every review. But \n",
"note that merely flattening the embedded sequences and training a single `Dense` layer on top leads to a model that treats each word in the \n",
"input sequence separately, without considering inter-word relationships and structure sentence (e.g. it would likely treat both _\"this movie \n",
"is shit\"_ and _\"this movie is the shit\"_ as being negative \"reviews\"). It would be much better to add recurrent layers or 1D convolutional \n",
6 changes: 3 additions & 3 deletions 6.3-advanced-usage-of-recurrent-neural-networks.ipynb
@@ -60,7 +60,7 @@
"## A temperature forecasting problem\n",
"\n",
"Until now, the only sequence data we have covered has been text data, for instance the IMDB dataset and the Reuters dataset. But sequence \n",
"data is found in many more problems than just language processing. In all of our examples in this section, will be playing with a weather \n",
"data is found in many more problems than just language processing. In all of our examples in this section, we will be playing with a weather \n",
"timeseries dataset recorded at the Weather Station at the Max-Planck-Institute for Biogeochemistry in Jena, Germany: http://www.bgc-jena.mpg.de/wetter/.\n",
"\n",
"In this dataset, fourteen different quantities (such air temperature, atmospheric pressure, humidity, wind direction, etc.) are recorded \n",
@@ -221,12 +221,12 @@
"\n",
"* `lookback = 720`, i.e. our observations will go back 5 days.\n",
"* `steps = 6`, i.e. our observations will be sampled at one data point per hour.\n",
"* `delay = 144`, i.e. our targets will be 24 hour in the future.\n",
"* `delay = 144`, i.e. our targets will be 24 hours in the future.\n",
"\n",
"To get started, we need to do two things:\n",
"\n",
"* Preprocess the data to a format a neural network can ingest. This is easy: the data is already numerical, so we don't need to do any \n",
"vectorization. However each timeseries in the data is one a different scale (e.g. temperature is typically between -20 and +30, but \n",
"vectorization. However each timeseries in the data is on a different scale (e.g. temperature is typically between -20 and +30, but \n",
"pressure, measured in mbar, is around 1000). So we will normalize each timeseries independently so that they all take small values on a \n",
"similar scale.\n",
"* Write a Python generator that takes our current array of float data and yields batches of data from the recent past, alongside with a \n",
