
section nums in code comments
astonzhang committed Jan 16, 2019
1 parent 9240e36 commit bf8fb2c
Showing 5 changed files with 6 additions and 9 deletions.
3 changes: 1 addition & 2 deletions chapter_convolutional-neural-networks/conv-layer.md
@@ -95,8 +95,7 @@ Y
Although we constructed a `Conv2D` class earlier, `corr2d` assigns values to individual elements (`[i, j]=`), an operation for which gradients cannot be computed automatically. Below, we instead use the `Conv2D` class provided by Gluon to implement this example.

```{.python .input n=83}
- # Construct a 2-D convolutional layer with 1 output channel (channels will be introduced
- # in the "Multiple Input and Output Channels" section) and a kernel array of shape (1, 2)
+ # Construct a 2-D convolutional layer with 1 output channel (channels will be introduced in Section 5.3) and a kernel array of shape (1, 2)
conv2d = nn.Conv2D(1, kernel_size=(1, 2))
conv2d.initialize()
```
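For context, the surrounding section goes on to train this layer so that it learns the edge-detection kernel by gradient descent, which is exactly what `corr2d`'s element-wise assignment prevented. A minimal sketch of that training loop, assuming `X` and `Y` are the input and target arrays defined earlier in the section:

```python
from mxnet import autograd

# Reshape to 4-D: (batch, channel, height, width); batch size and channels are 1
X4 = X.reshape((1, 1) + X.shape)
Y4 = Y.reshape((1, 1) + Y.shape)
for i in range(10):
    with autograd.record():
        Y_hat = conv2d(X4)
        l = ((Y_hat - Y4) ** 2).sum()  # squared-error loss
    l.backward()
    # Plain SGD step on the kernel weights; the learning rate is illustrative
    conv2d.weight.data()[:] -= 3e-2 * conv2d.weight.grad()
```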
3 changes: 1 addition & 2 deletions chapter_convolutional-neural-networks/lenet.md
@@ -78,8 +78,7 @@ ctx
Accordingly, we slightly modify the `evaluate_accuracy` function described in the ["Implementation of Softmax Regression from Scratch"](../chapter_deep-learning-basics/softmax-regression-scratch.md) section. Since the data initially lives in memory used by the CPU, when the `ctx` variable refers to a GPU and its memory, we copy the data to the GPU memory, for example `gpu(0)`, with the `as_in_context` function introduced in the ["GPU Computing"](../chapter_deep-learning-computation/use-gpu.md) section.

```{.python .input}
- # This function is saved in the d2lzh package for convenient later use. It will be improved
- # step by step: its full implementation will be described in the "Image Augmentation" section
+ # This function is saved in the d2lzh package for convenient later use. It will be improved step by step: its full implementation will be described in Section 9.1
def evaluate_accuracy(data_iter, net, ctx):
    acc_sum, n = nd.array([0], ctx=ctx), 0
    for X, y in data_iter:
```
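The hunk truncates the loop body. A hedged reconstruction of the full function, consistent with the `as_in_context` copying described above (this mirrors the d2lzh version but is rebuilt here, not copied from the diff):

```python
from mxnet import nd

def evaluate_accuracy(data_iter, net, ctx):
    acc_sum, n = nd.array([0], ctx=ctx), 0
    for X, y in data_iter:
        # Copy each batch to ctx (e.g. gpu(0)) before running the network
        X, y = X.as_in_context(ctx), y.as_in_context(ctx).astype('float32')
        acc_sum += (net(X).argmax(axis=1) == y).sum()
        n += y.size
    return acc_sum.asscalar() / n
```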
2 changes: 1 addition & 1 deletion chapter_convolutional-neural-networks/padding-and-strides.md
@@ -31,7 +31,7 @@ from mxnet.gluon import nn
# Define a function to compute the convolutional layer. It initializes the convolutional layer weights and performs the corresponding dimension expansion and reduction on the input and output
def comp_conv2d(conv2d, X):
    conv2d.initialize()
-   # (1, 1) means that the batch size and the number of channels (the "Multiple Input and Output Channels" section will introduce these) are both 1
+   # (1, 1) means that the batch size and the number of channels (to be introduced in Section 5.3) are both 1
    X = X.reshape((1, 1) + X.shape)
    Y = conv2d(X)
    return Y.reshape(Y.shape[2:])  # Exclude the first two dimensions that we are not interested in: batch and channel
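A short usage sketch, matching how the surrounding section exercises this helper: with a 3×3 kernel and one row/column of padding on each side, an 8×8 input keeps its shape.

```python
from mxnet import nd
from mxnet.gluon import nn

# 1 output channel, 3x3 kernel, padding of 1 on every side
conv2d = nn.Conv2D(1, kernel_size=3, padding=1)
X = nd.random.uniform(shape=(8, 8))
comp_conv2d(conv2d, X).shape  # expected: (8, 8)
```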
5 changes: 2 additions & 3 deletions chapter_deep-learning-basics/softmax-regression-scratch.md
@@ -108,8 +108,7 @@ accuracy(y_hat, y)
Similarly, we can evaluate the accuracy of the model `net` on the dataset `data_iter`.

```{.python .input n=13}
- # This function is saved in the d2lzh package for convenient later use. It will be improved
- # step by step: its full implementation will be described in the "Image Augmentation" section
+ # This function is saved in the d2lzh package for convenient later use. It will be improved step by step: its full implementation will be described in Section 9.1
def evaluate_accuracy(data_iter, net):
    acc_sum, n = 0.0, 0
    for X, y in data_iter:
```
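This hunk, too, cuts off inside the loop; a hedged reconstruction of the rest of the function (mirroring the d2lzh version, CPU-only here):

```python
def evaluate_accuracy(data_iter, net):
    acc_sum, n = 0.0, 0
    for X, y in data_iter:
        y = y.astype('float32')
        # Count correct predictions in this batch
        acc_sum += (net(X).argmax(axis=1) == y).sum().asscalar()
        n += y.size
    return acc_sum / n
```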
@@ -145,7 +144,7 @@ def train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size,
            if trainer is None:
                d2l.sgd(params, lr, batch_size)
            else:
-               trainer.step(batch_size)  # Will be used in the "Concise Implementation of Softmax Regression" section
+               trainer.step(batch_size)  # Will be used in Section 3.7
            y = y.astype('float32')
            train_l_sum += l.asscalar()
            train_acc_sum += (y_hat.argmax(axis=1) == y).sum().asscalar()
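For context, the `trainer` branch fires when a Gluon optimizer is passed in, as Section 3.7 does. A hedged sketch of that call pattern (`net`, `train_iter`, `test_iter`, and `loss` are assumed from the surrounding chapter; the hyperparameters are illustrative):

```python
from mxnet import gluon

trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
# With trainer given, train_ch3 calls trainer.step instead of d2l.sgd
train_ch3(net, train_iter, test_iter, loss, num_epochs=5, batch_size=256,
          trainer=trainer)
```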
2 changes: 1 addition & 1 deletion chapter_deep-learning-computation/model-construction.md
@@ -16,7 +16,7 @@ class MLP(nn.Block):
    # Declare layers with model parameters; here we declare two fully connected layers
    def __init__(self, **kwargs):
        # Call the constructor of the MLP parent class Block to perform the necessary initialization. This way, when constructing an instance, other function
-       # parameters can also be specified, such as the model parameter params to be introduced in the "Access, Initialization, and Sharing of Model Parameters" section
+       # parameters can also be specified, such as the model parameter params to be introduced in Section 4.2
        super(MLP, self).__init__(**kwargs)
        self.hidden = nn.Dense(256, activation='relu')  # Hidden layer
        self.output = nn.Dense(10)  # Output layer
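The hunk ends before the class's forward computation; in the book the class continues with a `forward` method. A self-contained sketch of the complete class and a small usage example (reconstructed, so treat the details as illustrative):

```python
from mxnet import nd
from mxnet.gluon import nn

class MLP(nn.Block):
    def __init__(self, **kwargs):
        super(MLP, self).__init__(**kwargs)
        self.hidden = nn.Dense(256, activation='relu')  # Hidden layer
        self.output = nn.Dense(10)                      # Output layer

    # Forward computation: feed x through the hidden layer, then the output layer
    def forward(self, x):
        return self.output(self.hidden(x))

net = MLP()
net.initialize()
X = nd.random.uniform(shape=(2, 20))
net(X)  # returns a (2, 10) output NDArray
```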
