Commit

qr code at the end of each section
astonzhang committed Jan 23, 2019
1 parent ad814c9 commit b50c81b
Showing 107 changed files with 325 additions and 86 deletions.
2 changes: 2 additions & 0 deletions chapter_appendix/aws.md
@@ -188,6 +188,8 @@ ssh -i "/path/to/key.pem" ubuntu@ec2-xx-xxx-xxx-xxx.y.compute.amazonaws.com -L 8
* The cloud is convenient, but it is not cheap. Look into its pricing and see how you can reduce your costs.




## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/6154)

![](../img/qr_aws.svg)
2 changes: 2 additions & 0 deletions chapter_appendix/buy-gpu.md
@@ -50,6 +50,8 @@ The performance of a GPU is mainly determined by the following three parameters.

* Browse the discussion forum for this section to see what others have shared about their machine configurations.



## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1177)

![](../img/qr_buy-gpu.svg)
4 changes: 4 additions & 0 deletions chapter_appendix/d2lzh.md
@@ -58,3 +58,7 @@
* `voc_label_indices`: [Semantic Segmentation and the Dataset](../chapter_computer-vision/semantic-segmentation-and-dataset.md)
* `voc_rand_crop`: [Semantic Segmentation and the Dataset](../chapter_computer-vision/semantic-segmentation-and-dataset.md) (a sketch follows below)
* `VOCSegDataset`: [Semantic Segmentation and the Dataset](../chapter_computer-vision/semantic-segmentation-and-dataset.md)
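
As a quick illustration of these entries, here is a minimal sketch of what `voc_rand_crop` does (a reconstruction for illustration, not necessarily the book's exact code; it assumes MXNet's `image.random_crop`/`image.fixed_crop` pair so that the image and its label map receive the identical crop):

```python
from mxnet import image

def voc_rand_crop(feature, label, height, width):
    # Randomly crop the input image, then apply the same rectangle to the label map
    # so that pixels and their class labels stay aligned.
    feature, rect = image.random_crop(feature, (width, height))
    label = image.fixed_crop(label, *rect)
    return feature, label
```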




6 changes: 4 additions & 2 deletions chapter_appendix/how-to-contribute.md
@@ -75,9 +75,7 @@ git push
* If you think some part of this book can be improved, try submitting a pull request.


## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/7570)

![](../img/qr_how-to-contribute.svg)


## References
@@ -89,3 +87,7 @@ git push
[3] Installing Git. https://git-scm.com/book/zh/v2

[4] GitHub website. https://github.com/

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/7570)

![](../img/qr_how-to-contribute.svg)
4 changes: 4 additions & 0 deletions chapter_appendix/index.md
@@ -13,3 +13,7 @@
how-to-contribute
d2lzh
```




2 changes: 2 additions & 0 deletions chapter_appendix/jupyter.md
@@ -113,6 +113,8 @@ jupyter nbextension enable execute_time/ExecuteTime
* Try editing and running the book's code locally.




## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/6965)

![](../img/qr_jupyter.svg)
2 changes: 2 additions & 0 deletions chapter_appendix/math.md
@@ -335,6 +335,8 @@ $$E(X) = \sum_{x} x P(X = x).$$
* Find the gradient of the function $f(\boldsymbol{x}) = 3x_1^2 + 5e^{x_2}$ (a worked sketch follows below).
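
As a quick check (a worked sketch of one solution, not the book's answer key), differentiate term by term with respect to each coordinate:

$$\nabla f(\boldsymbol{x}) = \left[\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}\right]^\top = \left[6x_1,\ 5e^{x_2}\right]^\top.$$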




## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/6966)

![](../img/qr_math.svg)
4 changes: 4 additions & 0 deletions chapter_appendix/notation.md
@@ -53,3 +53,7 @@
## Complexity

* $\mathcal{O}$: big O notation (asymptotic notation)




2 changes: 2 additions & 0 deletions chapter_computational-performance/async-computation.md
@@ -204,6 +204,8 @@ print('increased memory: %f MB' % (get_mem() - mem))
* In the "Use Asynchronous Computation to Improve Computing Performance" section, we noted that asynchronous computation can reduce the total time for performing 1000 computations to $t_1 + 1000 t_2 + t_3$. Why do we need to assume $1000 t_2 > 999 t_1$ there? (A brief sketch of the reasoning follows below.)
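
One way to read this assumption (a sketch, assuming as in that section that $t_1$ is the front end's time to issue one task, $t_2$ the back end's time to compute one task, and $t_3$ the final communication time): after the first task is issued, the front end needs $999\,t_1$ to hand over the remaining 999 tasks, while the back end needs $1000\,t_2$ to execute all 1000. If

$$1000\,t_2 > 999\,t_1,$$

the front end always stays ahead, the back end never sits idle, and the total time is $t_1 + 1000\,t_2 + t_3$; otherwise the back end would occasionally wait on the front end and the estimate would no longer hold.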




## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1881)

![](../img/qr_async-computation.svg)
2 changes: 2 additions & 0 deletions chapter_computational-performance/auto-parallelism.md
@@ -96,6 +96,8 @@ with d2l.Benchmark('Run and copy in parallel.'):
* When the amount of computation per operator is small enough, parallel computation on just the CPU or a single GPU may also improve computing performance. Design an experiment to verify this (a minimal sketch is given below).
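
A minimal sketch of one such experiment (assuming `d2lzh` is installed as in the book; the operand size and repetition count are arbitrary illustrative choices, not values from the text): time many small, mutually independent matrix products, once forcing them to run one at a time and once letting the backend schedule them freely.

```python
import d2lzh as d2l
from mxnet import nd

x = nd.random.uniform(shape=(100, 100))  # deliberately small operands

with d2l.Benchmark('Serialized small ops.'):
    for _ in range(2000):
        nd.dot(x, x).wait_to_read()  # wait for each product before issuing the next

with d2l.Benchmark('Backend-scheduled small ops.'):
    ys = [nd.dot(x, x) for _ in range(2000)]  # issue everything without waiting
    nd.waitall()                              # synchronize once at the end
```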




## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1883)

![](../img/qr_auto-parallelism.svg)
2 changes: 2 additions & 0 deletions chapter_computational-performance/hybridize.md
@@ -210,6 +210,8 @@ net(x)
* Revisit a model from the earlier chapters that interests you and reimplement it with the `HybridBlock` or `HybridSequential` class (a minimal sketch of the pattern follows below).
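
A minimal sketch of the pattern (a small multilayer perceptron chosen purely for illustration, not a specific model from the earlier chapters):

```python
from mxnet import nd
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(256, activation='relu'),
        nn.Dense(10))
net.initialize()
net.hybridize()                        # compile the network into a symbolic graph
net(nd.random.uniform(shape=(2, 20)))  # the first forward pass triggers compilation
```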




## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1665)

![](../img/qr_hybridize.svg)
4 changes: 4 additions & 0 deletions chapter_computational-performance/index.md
@@ -13,3 +13,7 @@
multiple-gpus
multiple-gpus-gluon
```




2 changes: 2 additions & 0 deletions chapter_computational-performance/multiple-gpus-gluon.md
@@ -121,6 +121,8 @@ train(num_gpus=2, batch_size=512, lr=0.2)
* This section uses the ResNet-18 model. Try different numbers of epochs, batch sizes, and learning rates. If conditions allow, use more GPUs for the computation.
* Sometimes different devices have different computing power, for example when a CPU and a GPU are used together, or when the GPUs are of different models. How should a mini-batch then be divided across main memory and the memory of the different GPUs? (A sketch of one weighted split is given below.)
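
One possible answer, sketched under the assumption that each device's relative speed has been measured beforehand (the helper `weighted_split_load` and the example speeds are illustrative, not from the book): split the batch along the sample axis in proportion to device throughput.

```python
import mxnet as mx
from mxnet import nd

def weighted_split_load(data, ctx_list, speeds):
    # Divide a batch across devices in proportion to their measured speeds.
    total = sum(speeds)
    sizes = [int(data.shape[0] * s / total) for s in speeds]
    sizes[-1] = data.shape[0] - sum(sizes[:-1])  # absorb rounding error
    shards, start = [], 0
    for ctx, n in zip(ctx_list, sizes):
        shards.append(data[start:start + n].as_in_context(ctx))
        start += n
    return shards

# Example: a CPU roughly 1/4 as fast as the GPU receives 1/5 of the batch.
batch = nd.random.uniform(shape=(100, 1, 28, 28))
shards = weighted_split_load(batch, [mx.cpu(), mx.gpu(0)], speeds=[1, 4])
```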



## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1885)

![](../img/qr_multiple-gpus-gluon.svg)
2 changes: 2 additions & 0 deletions chapter_computational-performance/multiple-gpus.md
@@ -198,6 +198,8 @@ train(num_gpus=2, batch_size=256, lr=0.2)
* Change the prediction part of the experiment to also use multiple GPUs for prediction.




## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1884)

![](../img/qr_multiple-gpus.svg)
2 changes: 2 additions & 0 deletions chapter_computer-vision/anchor.md
@@ -230,6 +230,8 @@ for i in output[0].asnumpy():
* Modify the variable `anchors` in the "Labeling Training Set Anchor Boxes" and "Outputting Predicted Bounding Boxes" subsections. How do the results change?




## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/7024)

![](../img/qr_anchor.svg)
2 changes: 2 additions & 0 deletions chapter_computer-vision/bounding-box.md
@@ -57,6 +57,8 @@ fig.axes.add_patch(bbox_to_rect(cat_bbox, 'red'));
* Find some images and try labeling bounding boxes around the objects in them. Compare the time it takes to label bounding boxes with the time it takes to label categories.




## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/7023)

![](../img/qr_bounding-box.svg)
6 changes: 4 additions & 2 deletions chapter_computer-vision/fcn.md
@@ -234,13 +234,15 @@ d2l.show_images(imgs[::3] + imgs[1::3] + imgs[2::3], 3, n);
* Predict the categories of all pixels in the test images.
* The fully convolutional network paper also uses the outputs of some intermediate layers of the convolutional neural network [1]. Try implementing this idea.

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/3041)

![](../img/qr_fcn.svg)


## References

[1] Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3431-3440).

[2] Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/3041)

![](../img/qr_fcn.svg)
6 changes: 4 additions & 2 deletions chapter_computer-vision/fine-tuning.md
@@ -175,10 +175,12 @@ hotdog_w = nd.split(weight.data(), 1000, axis=0)[713]
hotdog_w.shape
```

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/2272)

![](../img/qr_fine-tuning.svg)

## References

[1] GluonCV toolkit. https://gluon-cv.mxnet.io/

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/2272)

![](../img/qr_fine-tuning.svg)
2 changes: 2 additions & 0 deletions chapter_computer-vision/image-augmentation.md
@@ -245,6 +245,8 @@ train_with_data_aug(flip_aug, no_aug)
* Add different image augmentation methods when training the model on the CIFAR-10 dataset and observe the results.
* Consult the MXNet documentation: what other image augmentation methods does Gluon's `transforms` module provide?



## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1666)

![](../img/qr_image-augmentation.svg)
4 changes: 4 additions & 0 deletions chapter_computer-vision/index.md
@@ -23,3 +23,7 @@
kaggle-gluon-cifar10
kaggle-gluon-dog
```




5 changes: 3 additions & 2 deletions chapter_computer-vision/kaggle-gluon-cifar10.md
@@ -335,8 +335,9 @@ df.to_csv('submission.csv', index=False)

* Use the full CIFAR-10 dataset from the Kaggle competition. Change the batch size `batch_size` and the number of epochs `num_epochs` to 128 and 300, respectively. What accuracy and ranking can you achieve in this competition?
* What accuracy can you achieve if you do not use image augmentation?
* Scan the QR code to reach the discussion forum and share your methods and results with the community. Can you uncover other, better techniques?

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1545)


## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1545)

![](../img/qr_kaggle-gluon-cifar10.svg)
7 changes: 4 additions & 3 deletions chapter_computer-vision/kaggle-gluon-dog.md
@@ -296,12 +296,13 @@ with open('submission.csv', 'w') as f:

* Use the full Kaggle dataset and increase the batch size `batch_size` and the number of epochs `num_epochs`. What results can you achieve on Kaggle?
* Can you get better results by using a deeper pre-trained model?
* Scan the QR code to reach the discussion forum and share your methods and results with the community. Can you uncover other, better techniques?

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/2399)

![](../img/qr_kaggle-gluon-dog.svg)

## References

[1] Kaggle ImageNet Dogs competition website. https://www.kaggle.com/c/dog-breed-identification

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/2399)

![](../img/qr_kaggle-gluon-dog.svg)
2 changes: 2 additions & 0 deletions chapter_computer-vision/multiscale-object-detection.md
@@ -77,6 +77,8 @@ display_anchors(fmap_w=1, fmap_h=1, s=[0.8])
* Given an input image, suppose the feature map variable has shape $1 \times c_i \times h \times w$, where $c_i$, $h$, and $w$ are the number, height, and width of the feature maps. What methods can you think of for transforming this variable into the anchor boxes' categories and offsets? What is the shape of each output? (A sketch of one common approach follows below.)
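
One common answer, sketched with Gluon (the helper names and the $3\times 3$ convolutions mirror the single shot multibox detection approach and are illustrative, not this section's code): use convolutions that keep the spatial size and pack the per-anchor predictions into the channel dimension.

```python
from mxnet.gluon import nn

def cls_predictor(num_anchors, num_classes):
    # Output shape: (1, num_anchors * (num_classes + 1), h, w)
    return nn.Conv2D(num_anchors * (num_classes + 1), kernel_size=3, padding=1)

def bbox_predictor(num_anchors):
    # Output shape: (1, num_anchors * 4, h, w), i.e. four offsets per anchor
    return nn.Conv2D(num_anchors * 4, kernel_size=3, padding=1)
```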




## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/8859)

![](../img/qr_multiscale-object-detection.svg)
6 changes: 4 additions & 2 deletions chapter_computer-vision/neural-style.md
@@ -273,10 +273,12 @@ d2l.plt.imsave('../img/neural-style-2.png', postprocess(output).asnumpy())
* Adjust the weight hyperparameters in the loss function. Does the output then retain more content or suppress more noise?
* Replace the content and style images in the experiment. Can you create more interesting composite images?

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/3273)

![](../img/qr_neural-style.svg)

## References

[1] Gatys, L. A., Ecker, A. S., & Bethge, M. (2016). Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2414-2423).

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/3273)

![](../img/qr_neural-style.svg)
6 changes: 4 additions & 2 deletions chapter_computer-vision/object-detection-dataset.md
@@ -78,12 +78,14 @@ for ax, label in zip(axes, batch.label[0][0:10]):
* Consult the MXNet documentation: what parameters do the constructors of the `image.ImageDetIter` and `image.CreateDetAugmenter` classes take, and what are their meanings?


## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/7022)

![](../img/qr_object-detection-dataset.svg)

## References

[1] im2rec tool. https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py

[2] GluonCV toolkit. https://gluon-cv.mxnet.io/

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/7022)

![](../img/qr_object-detection-dataset.svg)
6 changes: 4 additions & 2 deletions chapter_computer-vision/rcnn.md
@@ -103,9 +103,7 @@ Fast R-CNN usually needs to generate a fairly large number of proposed regions in selective search in order to obtain

* Learn about the GluonCV toolkit's implementations of each of the models in this section [6].

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/7219)

![](../img/qr_rcnn.svg)



@@ -122,3 +120,7 @@ Fast R-CNN usually needs to generate a fairly large number of proposed regions in selective search in order to obtain
[5] He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017, October). Mask r-cnn. In Computer Vision (ICCV), 2017 IEEE International Conference on (pp. 2980-2988). IEEE.

[6] GluonCV toolkit. https://gluon-cv.mxnet.io/

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/7219)

![](../img/qr_rcnn.svg)
6 changes: 4 additions & 2 deletions chapter_computer-vision/semantic-segmentation-and-dataset.md
@@ -203,10 +203,12 @@ for X, Y in train_iter:

* Recall the ["Image Augmentation"](image-augmentation.md) section. Which of the image augmentation methods used in image classification would be difficult to apply to semantic segmentation?

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/7218)

![](../img/qr_semantic-segmentation-and-dataset.svg)

## References

[1] Pascal VOC2012 dataset. http://host.robots.ox.ac.uk/pascal/VOC/voc2012/

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/7218)

![](../img/qr_semantic-segmentation-and-dataset.svg)
6 changes: 4 additions & 2 deletions chapter_computer-vision/ssd.md
@@ -367,12 +367,14 @@ d2l.plt.legend();
* Referring to the single shot multibox detection paper, what other methods are there for evaluating the accuracy of an object detection model [1]?


## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/2511)

![](../img/qr_ssd.svg)

## References

[1] Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016, October). SSD: Single shot multibox detector. In European conference on computer vision (pp. 21-37). Springer, Cham.

[2] Lin, T. Y., Goyal, P., Girshick, R., He, K., & Dollár, P. (2018). Focal loss for dense object detection. IEEE transactions on pattern analysis and machine intelligence.

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/2511)

![](../img/qr_ssd.svg)
6 changes: 4 additions & 2 deletions chapter_convolutional-neural-networks/alexnet.md
@@ -143,10 +143,12 @@ d2l.train_ch5(net, train_iter, test_iter, batch_size, trainer, ctx, num_epochs)
* Change the batch size and observe how the accuracy and the memory or GPU memory usage change.


## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1258)

![](../img/qr_alexnet.svg)

## References

[1] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1258)

![](../img/qr_alexnet.svg)
6 changes: 4 additions & 2 deletions chapter_convolutional-neural-networks/batch-norm.md
@@ -203,10 +203,12 @@ d2l.train_ch5(net, train_iter, test_iter, batch_size, trainer, ctx,
* Check the documentation of the `BatchNorm` class to learn about more ways to use it, for example, how to use the globally averaged mean and variance during training (a brief sketch follows below).
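
A brief sketch of one way to do this (assuming the `use_global_stats` flag of Gluon's `nn.BatchNorm` behaves as documented, i.e. the layer normalizes with the accumulated global statistics even in training mode):

```python
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(120, activation='sigmoid'),
        # Normalize with the global running mean/variance rather than batch statistics.
        nn.BatchNorm(use_global_stats=True),
        nn.Dense(10))
net.initialize()
```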


## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1253)

![](../img/qr_batch-norm.svg)

## References

[1] Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1253)

![](../img/qr_batch-norm.svg)
2 changes: 2 additions & 0 deletions chapter_convolutional-neural-networks/channels.md
@@ -109,6 +109,8 @@ Y2 = corr2d_multi_in_out(X, K)
* When the convolution window is not $1\times 1$, how can the convolution computation be implemented with matrix multiplication?




## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/6405)

![](../img/qr_channels.svg)
2 changes: 2 additions & 0 deletions chapter_convolutional-neural-networks/conv-layer.md
@@ -152,6 +152,8 @@ conv2d.weight.data().reshape((1, 2))
* How can the cross-correlation operation be expressed as a matrix multiplication by transforming the input and kernel arrays? (A sketch is given after this list.)
* How could you construct a fully connected layer to perform object edge detection?
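
A minimal sketch of one way to do this (an im2col-style construction written with NumPy for clarity; the helper `corr2d_as_matmul` is illustrative, not code from the book): unroll every window of the input into a row, so that a single matrix-vector product computes all outputs.

```python
import numpy as np

def corr2d_as_matmul(X, K):
    h, w = K.shape
    out_h, out_w = X.shape[0] - h + 1, X.shape[1] - w + 1
    # Each row of `cols` holds one flattened h*w window of X (im2col).
    cols = np.empty((out_h * out_w, h * w))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = X[i:i + h, j:j + w].ravel()
    # One matrix-vector product yields every output element at once.
    return (cols @ K.ravel()).reshape(out_h, out_w)

X = np.arange(9.0).reshape(3, 3)
K = np.array([[0.0, 1.0], [2.0, 3.0]])
corr2d_as_matmul(X, K)  # same result as this section's corr2d(X, K): [[19, 25], [37, 43]]
```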



## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/6314)

![](../img/qr_conv-layer.svg)
6 changes: 4 additions & 2 deletions chapter_convolutional-neural-networks/densenet.md
@@ -132,10 +132,12 @@ d2l.train_ch5(net, train_iter, test_iter, batch_size, trainer, ctx,
* One criticism of DenseNet is that it consumes too much memory or GPU memory. Is this really the case? Try changing the input shape to $224\times 224$ to see the actual consumption.
* Implement the different DenseNet versions proposed in Table 1 of the DenseNet paper [1].

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1664)

![](../img/qr_densenet.svg)

## References

[1] Huang, G., Liu, Z., Weinberger, K. Q., & van der Maaten, L. (2017). Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (Vol. 1, No. 2).

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1664)

![](../img/qr_densenet.svg)
6 changes: 4 additions & 2 deletions chapter_convolutional-neural-networks/googlenet.md
@@ -132,9 +132,7 @@ d2l.train_ch5(net, train_iter, test_iter, batch_size, trainer, ctx,
* Compare the model parameter sizes of AlexNet, VGG, NiN, and GoogLeNet. Why can the latter two networks significantly reduce the size of the model parameters?


## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1662)

![](../img/qr_googlenet.svg)

## References

@@ -145,3 +143,7 @@ d2l.train_ch5(net, train_iter, test_iter, batch_size, trainer, ctx,
[3] Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2818-2826).

[4] Szegedy, C., Ioffe, S., Vanhoucke, V., & Alemi, A. A. (2017, February). Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 4, p. 12).

## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/1662)

![](../img/qr_googlenet.svg)