Fix typo #66

Merged 1 commit on Oct 3, 2017
6 changes: 3 additions & 3 deletions chapter04_convolutional-neural-networks/cnn-scratch.md
@@ -19,7 +19,7 @@

(Image copyright belongs to vdumoulin@github)

- We use `nd.Convlution` to demonstrate this.
+ We use `nd.Convolution` to demonstrate this.

```{.python .input n=47}
from mxnet import nd
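# A hedged sketch of the kind of single-channel demo this cell builds; the
# rest of the original cell is not shown here, so the exact shapes and values
# may differ.
# data: batch x channel x height x width; w: num_filter x channel x kh x kw.
data = nd.arange(9).reshape((1, 1, 3, 3))
w = nd.arange(4).reshape((1, 1, 2, 2))
b = nd.array([1])
out = nd.Convolution(data, w, b, kernel=w.shape[2:], num_filter=w.shape[0])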
@@ -47,7 +47,7 @@ print('input:', data, '\n\nweight:', w, '\n\nbias:', b, '\n\noutput:', out)

When the input data has multiple channels, each channel has its own corresponding weights; the convolution is applied to each channel separately and the results are then summed across the channels:

- $$conv(data, w, b) = \sum_i conv(data[:,i,:,:], w[0,i,:,:], b)$$
+ $$conv(data, w, b) = \sum_i conv(data[:,i,:,:], w[:,i,:,:], b)$$

```{.python .input n=49}
w = nd.arange(8).reshape((1,2,2,2))
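# A hedged continuation sketch (the remaining lines of the original cell are
# not shown here): build a two-channel input and check that the multi-channel
# convolution equals the per-channel convolutions summed over channels, as in
# the corrected formula above. The names data/b/out follow the earlier cell.
data = nd.arange(18).reshape((1, 2, 3, 3))
b = nd.zeros(1)
out = nd.Convolution(data, w, b, kernel=w.shape[2:], num_filter=w.shape[0])
# Convolve each input channel separately (zero bias), then sum the results.
per_channel = sum(
    nd.Convolution(data[:, i:i+1, :, :], w[:, i:i+1, :, :], nd.zeros(1),
                   kernel=w.shape[2:], num_filter=w.shape[0])
    for i in range(data.shape[1]))
print((out - per_channel).abs().sum())  # expect 0 with a zero bias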
@@ -215,7 +215,7 @@ for epoch in range(5):

## Conclusion

- We can see that the convolutional neural network achieves better classification accuracy than the earlier multilayer perceptron. In fact, if you have understood this chapter, you basically know the most important ideas in computer vision. LeNet was proposed as early as the 1990s. Whether you believe it or detailed, if you had understood this five years ago and started a company, you would very likely have sold it to some big company for tens of millions by now. Fortunately, or unfortunately, today's algorithms are quite a bit more advanced, and next we will look at some newer ideas.
+ We can see that the convolutional neural network achieves better classification accuracy than the earlier multilayer perceptron. In fact, if you have understood this chapter, you basically know the most important ideas in computer vision. LeNet was proposed as early as the 1990s. Whether you believe it or not, if you had understood this five years ago and started a company, you would very likely have sold it to some big company for tens of millions by now. Fortunately, or unfortunately, today's algorithms are quite a bit more advanced, and next we will look at some newer ideas.

## Exercises
