Commit

forum link (d2l-ai#1038)

Co-authored-by: Ubuntu <ubuntu@ip-172-31-12-66.us-west-2.compute.internal>
xiaotinghe and Ubuntu committed Dec 8, 2021
1 parent 9773ad8 commit bd27efe
Showing 38 changed files with 62 additions and 62 deletions.
2 changes: 1 addition & 1 deletion chapter_appendix-tools-for-deep-learning/aws.md
@@ -202,4 +202,4 @@ jupyter notebook
1. Try out different GPU servers. How fast are they?
1. Try out a multi-GPU server. How far can you scale things up?

- [Discussions](https://discuss.d2l.ai/t/423)
+ [Discussions](https://discuss.d2l.ai/t/5733)
2 changes: 1 addition & 1 deletion chapter_appendix-tools-for-deep-learning/contributing.md
@@ -157,4 +157,4 @@ git push
1. If you find anything that can be improved (e.g., a missing citation), submit a pull request.
1. It is usually better to create a pull request on a new branch. Learn how to do this with [Git branching](https://git-scm.com/book/en/v2/Git-Branching-Branches-in-a-Nutshell).

- [Discussions](https://discuss.d2l.ai/t/426)
+ [Discussions](https://discuss.d2l.ai/t/5730)
2 changes: 1 addition & 1 deletion chapter_appendix-tools-for-deep-learning/jupyter.md
@@ -109,4 +109,4 @@ jupyter nbextension enable execute_time/ExecuteTime
1. Use Jupyter Notebook to edit and run the code in this book remotely via port forwarding.
1. For two square matrices in $\mathbb{R}^{1024 \times 1024}$, measure the running time of $\mathbf{A}^\top \mathbf{B}$ versus $\mathbf{A} \mathbf{B}$. Which one is faster? (A sketch follows this diff.)

- [Discussions](https://discuss.d2l.ai/t/421)
+ [Discussions](https://discuss.d2l.ai/t/5731)
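As a hedged illustration of the timing exercise above (my own sketch, not part of the original commit; the repeat count is an arbitrary choice):

```python
import time
import torch

# Two random square matrices in R^{1024 x 1024}
A = torch.randn(1024, 1024)
B = torch.randn(1024, 1024)

def bench(fn, repeats=100):
    """Average wall-clock time of fn over several runs, after one warm-up call."""
    fn()
    start = time.time()
    for _ in range(repeats):
        fn()
    return (time.time() - start) / repeats

print(f'A @ B:   {bench(lambda: A @ B) * 1e3:.3f} ms')
print(f'A.T @ B: {bench(lambda: A.T @ B) * 1e3:.3f} ms')
```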
2 changes: 1 addition & 1 deletion chapter_appendix-tools-for-deep-learning/sagemaker.md
@@ -112,4 +112,4 @@ git pull
1. Use Amazon SageMaker to edit and run any section that requires a GPU.
1. Open a terminal to access the local directory that hosts all the notebooks of this book.

- [Discussions](https://discuss.d2l.ai/t/422)
+ [Discussions](https://discuss.d2l.ai/t/5732)
6 changes: 3 additions & 3 deletions chapter_attention-mechanisms/attention-cues.md
@@ -169,15 +169,15 @@ show_heatmaps(attention_weights, xlabel='Keys', ylabel='Queries')
1. Randomly generate a $10 \times 10$ matrix and use the `softmax` operation to ensure that each row is a valid probability distribution, then visualize the output attention weights. (A sketch follows this diff.)

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/1596)
+ [Discussions](https://discuss.d2l.ai/t/5763)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1592)
+ [Discussions](https://discuss.d2l.ai/t/5764)
:end_tab:

:begin_tab:`tensorflow`
- [Discussions](https://discuss.d2l.ai/t/1710)
+ [Discussions](https://discuss.d2l.ai/t/5765)
:end_tab:
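A minimal PyTorch sketch of the exercise above (my own assumption, not part of the original commit), reusing the `d2l.show_heatmaps` helper that appears in this section's context line:

```python
import torch
from d2l import torch as d2l

# Random scores; a row-wise softmax turns each row into a probability distribution
scores = torch.randn(10, 10)
attention_weights = torch.softmax(scores, dim=1)

# show_heatmaps expects a 4-D tensor: (rows of plots, columns of plots, queries, keys)
d2l.show_heatmaps(attention_weights.reshape(1, 1, 10, 10),
                  xlabel='Keys', ylabel='Queries')
```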


4 changes: 2 additions & 2 deletions chapter_attention-mechanisms/attention-scoring-functions.md
@@ -470,9 +470,9 @@ d2l.show_heatmaps(d2l.reshape(attention.attention_weights, (1, 1, 2, 10)),
1. When queries and keys have the same vector length, is vector summation a better scoring function than the dot product? Why or why not?

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/346)
+ [Discussions](https://discuss.d2l.ai/t/5751)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1064)
+ [Discussions](https://discuss.d2l.ai/t/5752)
:end_tab:
4 changes: 2 additions & 2 deletions chapter_attention-mechanisms/bahdanau-attention.md
@@ -378,9 +378,9 @@ d2l.show_heatmaps(attention_weights[:, :, :, :len(engs[-1].split()) + 1],
1. Modify the experiment to replace the additive attention scoring function with scaled dot-product attention. How does it affect training efficiency?

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/347)
+ [Discussions](https://discuss.d2l.ai/t/5753)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1065)
+ [Discussions](https://discuss.d2l.ai/t/5754)
:end_tab:
4 changes: 2 additions & 2 deletions chapter_attention-mechanisms/multihead-attention.md
@@ -343,9 +343,9 @@ attention(X, Y, Y, valid_lens, training=False).shape
1. Suppose we have a trained multi-head attention model and want to prune the least important attention heads to increase prediction speed. How can we design experiments to measure the importance of an attention head?

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/1634)
+ [Discussions](https://discuss.d2l.ai/t/5757)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1635)
+ [Discussions](https://discuss.d2l.ai/t/5758)
:end_tab:
4 changes: 2 additions & 2 deletions chapter_attention-mechanisms/nadaraya-waston.md
@@ -561,9 +561,9 @@ d2l.show_heatmaps(tf.expand_dims(
1. Design a new parameterized attention pooling model for the kernel regression of this section. Train the new model and visualize its attention weights.

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/1598)
+ [Discussions](https://discuss.d2l.ai/t/5759)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1599)
+ [Discussions](https://discuss.d2l.ai/t/5760)
:end_tab:
@@ -313,9 +313,9 @@ The $2\times 2$ projection matrix does not depend on any position index $i$.
1. Can you design a learnable positional encoding method?

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/1651)
+ [Discussions](https://discuss.d2l.ai/t/5761)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1652)
+ [Discussions](https://discuss.d2l.ai/t/5762)
:end_tab:
4 changes: 2 additions & 2 deletions chapter_attention-mechanisms/transformer.md
@@ -905,9 +905,9 @@ d2l.show_heatmaps(
1. Without using convolutional neural networks, how would you design a transformer-based model for image classification? Hint: see the Vision Transformer :cite:`Dosovitskiy.Beyer.Kolesnikov.ea.2021`.

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/348)
+ [Discussions](https://discuss.d2l.ai/t/5755)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1066)
+ [Discussions](https://discuss.d2l.ai/t/5756)
:end_tab:
2 changes: 1 addition & 1 deletion chapter_computational-performance/hardware.md
@@ -216,4 +216,4 @@ The bandwidth requirements of GPU memory are even higher, since GPUs have many more processing units than CPUs
1. Look up the performance numbers for the Turing T4 GPU. Why does performance only double from FP16 to INT8 and INT4?
1. How long does a network packet's round trip between San Francisco and Amsterdam take? Hint: you can assume the distance is 10,000 km. (A back-of-the-envelope sketch follows this diff.)

- [Discussions](https://discuss.d2l.ai/t/2798)
+ [Discussions](https://discuss.d2l.ai/t/5717)
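A back-of-the-envelope calculation for the round-trip exercise above (my own sketch, not part of the original commit; the signal speed of roughly two-thirds of $c$ in optical fiber is an assumption):

```python
# One-way distance from the exercise's hint
distance_km = 10_000
# Light in optical fiber travels at about 200,000 km/s (~2/3 of c in vacuum)
speed_km_per_s = 200_000

rtt_s = 2 * distance_km / speed_km_per_s
print(f'Theoretical minimum RTT: {rtt_s * 1e3:.0f} ms')  # ~100 ms before routing overhead
```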
2 changes: 1 addition & 1 deletion chapter_computational-performance/parameterserver.md
@@ -100,4 +100,4 @@ $$\mathbf{g}_{i} = \sum_{k \in \text{workers}} \sum_{j \in \text{GPUs}} \mathbf{
1. Can asynchronous communication be allowed while computation is still ongoing? How would it affect performance?
1. How can we handle losing a server during a long-running computation? Try to design a fault-tolerance mechanism that avoids having to restart the computation.

- [Discussions](https://discuss.d2l.ai/t/2807)
+ [Discussions](https://discuss.d2l.ai/t/5774)
2 changes: 1 addition & 1 deletion chapter_convolutional-neural-networks/why-conv.md
@@ -148,4 +148,4 @@ $$[\mathsf{H}]_{i,j,d} = \sum_{a = -\Delta}^{\Delta} \sum_{b = -\Delta}^{\Delta}
1. Are convolutional layers also suitable for text data? Why or why not?
1. Prove that $f * g = g * f$ in :eqref:`eq_2d-conv-discrete`. (A change-of-variables argument follows this diff.)

- [Discussions](https://discuss.d2l.ai/t/1846)
+ [Discussions](https://discuss.d2l.ai/t/5767)
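For the commutativity exercise above, a one-line change-of-variables argument (my own sketch; the discrete 2D convolution definition is the one shown in this section's context line):

$$(f * g)(i, j) = \sum_a \sum_b f(a, b)\, g(i - a, j - b) = \sum_{a'} \sum_{b'} g(a', b')\, f(i - a', j - b') = (g * f)(i, j),$$

where the middle step substitutes $a' = i - a$ and $b' = j - b$, which merely reindexes the double sum.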
4 changes: 2 additions & 2 deletions chapter_deep-learning-computation/deferred-init.md
@@ -117,11 +117,11 @@ net(X)
1. What do you need to do if the inputs have different dimensionalities? Hint: see the material on parameter tying.

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/1834)
+ [Discussions](https://discuss.d2l.ai/t/5770)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1834)
+ [Discussions](https://discuss.d2l.ai/t/5770)
:end_tab:

:begin_tab:`tensorflow`
2 changes: 1 addition & 1 deletion chapter_introduction/index.md
@@ -755,4 +755,4 @@ the Canny edge detector :cite:`Canny.1987` and the SIFT feature extractor :cite:`Lowe.2004`
1. If we view the development of artificial intelligence as a new industrial revolution, what is the relationship between algorithms and data? Is it similar to steam engines and coal? What is the fundamental difference?
1. Where else can you apply the end-to-end training approach, such as in :numref:`fig_ml_loop`, physics, engineering, and econometrics?

- [Discussions](https://discuss.d2l.ai/t/2088)
+ [Discussions](https://discuss.d2l.ai/t/1744)
2 changes: 1 addition & 1 deletion chapter_multilayer-perceptrons/backprop.md
@@ -180,4 +180,4 @@ the current value of $\mathbf{W}^{(2)}$.
1. Can you partition it across multiple GPUs?
1. What are the advantages and disadvantages compared with minibatch training?

- [Discussions](https://discuss.d2l.ai/t/1816)
+ [Discussions](https://discuss.d2l.ai/t/5769)
@@ -64,4 +64,4 @@
1. How can we leverage BERT to train language models?
1. Can we leverage BERT in machine translation?

- [Discussions](https://discuss.d2l.ai/t/396)
+ [Discussions](https://discuss.d2l.ai/t/5729)
@@ -256,9 +256,9 @@ for X, Y in train_iter:
1. How can we change the hyperparameters to reduce the vocabulary size?

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/394)
+ [Discussions](https://discuss.d2l.ai/t/5721)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1388)
+ [Discussions](https://discuss.d2l.ai/t/5722)
:end_tab:
@@ -381,9 +381,9 @@ predict_snli(net, vocab, ['he', 'is', 'good', '.'], ['he', 'is', 'bad', '.'])
1. Suppose we want to obtain the level of semantic similarity (e.g., a continuous value between 0 and 1) for any pair of sentences. How should we collect and label the dataset? Can you design a model with attention mechanisms?

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/395)
+ [Discussions](https://discuss.d2l.ai/t/5727)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1530)
+ [Discussions](https://discuss.d2l.ai/t/5728)
:end_tab:
@@ -314,9 +314,9 @@ d2l.train_ch13(net, train_iter, test_iter, loss, trainer, num_epochs,
1. How can we truncate a pair of sequences according to their length ratio? Compare this pair-truncation method with the one used in the `SNLIBERTDataset` class. What are their pros and cons?

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/397)
+ [Discussions](https://discuss.d2l.ai/t/5715)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1526)
+ [Discussions](https://discuss.d2l.ai/t/5718)
:end_tab:
@@ -174,9 +174,9 @@ def load_data_imdb(batch_size, num_steps=500):
1. Can you implement a function to load the [Amazon reviews](https://snap.stanford.edu/data/web-Amazon.html) dataset into data iterators for sentiment analysis?

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/391)
+ [Discussions](https://discuss.d2l.ai/t/5725)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1387)
+ [Discussions](https://discuss.d2l.ai/t/5726)
:end_tab:
@@ -263,9 +263,9 @@ d2l.predict_sentiment(net, vocab, 'this movie is so bad')
1. Add positional encoding to the input representations. Does it improve classification accuracy?

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/393)
+ [Discussions](https://discuss.d2l.ai/t/5719)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1425)
+ [Discussions](https://discuss.d2l.ai/t/5720)
:end_tab:
@@ -207,9 +207,9 @@ predict_sentiment(net, vocab, 'this movie is so bad')
1. Can you improve classification accuracy with spaCy tokenization? You need to install spaCy (`pip install spacy`) and the English package (`python -m spacy download en`). In the code, first import spaCy (`import spacy`), then load the spaCy English package (`spacy_en = spacy.load('en')`), and finally define the function `def tokenizer(text): return [tok.text for tok in spacy_en.tokenizer(text)]` and replace the original `tokenizer` function. Note the different forms of phrase tokens in GloVe and spaCy: the phrase token "new york" takes the form "new-york" in GloVe and "new york" after spaCy tokenization. (A sketch follows this diff.)

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/392)
+ [Discussions](https://discuss.d2l.ai/t/5723)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1424)
+ [Discussions](https://discuss.d2l.ai/t/5724)
:end_tab:
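Putting the exercise's instructions together (a sketch, not part of the original commit; note that newer spaCy releases name the English model `en_core_web_sm` rather than `en`):

```python
import spacy

# English package installed via `python -m spacy download en`
spacy_en = spacy.load('en')

def tokenizer(text):
    """Drop-in replacement for the section's original tokenizer."""
    return [tok.text for tok in spacy_en.tokenizer(text)]

print(tokenizer('new york is a phrase token'))
```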
@@ -84,4 +84,4 @@ $$\sum_{w \in \mathcal{V}} P(w \mid w_c) = 1.$$
1. Verify that :eqref:`eq_hi-softmax-sum-one` holds.
1. How can we train the continuous bag-of-words model using negative sampling and hierarchical softmax, respectively?

- [Discussions](https://discuss.d2l.ai/t/382)
+ [Discussions](https://discuss.d2l.ai/t/5741)
@@ -344,9 +344,9 @@ len(vocab)
1. If we do not filter out some infrequent tokens, how large will the vocabulary be?

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/389)
+ [Discussions](https://discuss.d2l.ai/t/5737)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1496)
+ [Discussions](https://discuss.d2l.ai/t/5738)
:end_tab:
@@ -261,9 +261,9 @@ encoded_pair.shape, encoded_pair_cls.shape, encoded_pair_crane[0][:3]
2. Set the maximum length of a BERT input sequence to 512 (the same as in the original BERT model) and use the configuration of the original BERT model, such as $\text{BERT}_{\text{LARGE}}$. Do you encounter any errors when running this section? Why?

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/390)
+ [Discussions](https://discuss.d2l.ai/t/5742)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1497)
+ [Discussions](https://discuss.d2l.ai/t/5743)
:end_tab:
4 changes: 2 additions & 2 deletions chapter_natural-language-processing-pretraining/bert.md
@@ -418,9 +418,9 @@ class BERTModel(nn.Module):
1. In the original implementation of BERT, the positionwise feed-forward network in `BERTEncoder` (via `d2l.EncoderBlock`) and the fully connected layer in `MaskLM` both use the Gaussian error linear unit (GELU) :cite:`Hendrycks.Gimpel.2016` as the activation function. Research the difference between GELU and ReLU.

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/388)
+ [Discussions](https://discuss.d2l.ai/t/5749)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1490)
+ [Discussions](https://discuss.d2l.ai/t/5750)
:end_tab:
2 changes: 1 addition & 1 deletion chapter_natural-language-processing-pretraining/glove.md
@@ -94,4 +94,4 @@ $$\mathbf{u}_j^\top \mathbf{v}_i + b_i + c_j \approx \log\, x_{ij}.$$
1. If words $w_i$ and $w_j$ co-occur in the same context window, how can we use their distance in the text sequence to redesign the method for computing the conditional probability $p_{ij}$? Hint: see Section 4.2 of the GloVe paper :cite:`Pennington.Socher.Manning.2014`.
1. For any word, are its center-word bias and context-word bias mathematically equivalent? Why?

- [Discussions](https://discuss.d2l.ai/t/385)
+ [Discussions](https://discuss.d2l.ai/t/5736)
@@ -221,9 +221,9 @@ get_analogy('do', 'did', 'go', glove_6b50d)
1. When the vocabulary is very large, how can we find similar words or complete a word analogy faster?

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/387)
+ [Discussions](https://discuss.d2l.ai/t/5745)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1336)
+ [Discussions](https://discuss.d2l.ai/t/5746)
:end_tab:
@@ -145,9 +145,9 @@ print(segment_BPE(tokens, symbols))
1. How can we extend the idea of byte pair encoding to extract phrases? (A sketch follows this diff.)

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/386)
+ [Discussions](https://discuss.d2l.ai/t/5747)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/386)
+ [Discussions](https://discuss.d2l.ai/t/5748)
:end_tab:
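One way to read the exercise above: run the byte pair encoding merge step over words instead of characters, so the most frequent adjacent word pair becomes a phrase token. A minimal sketch under that assumption (the toy corpus and the `_` phrase delimiter are made up for illustration):

```python
import collections

corpus = ['new york is big', 'i love new york', 'new york city']
tokenized = [sentence.split() for sentence in corpus]

# Count adjacent word pairs, just as BPE counts adjacent symbol pairs
pair_freqs = collections.Counter()
for sentence in tokenized:
    for left, right in zip(sentence[:-1], sentence[1:]):
        pair_freqs[(left, right)] += 1

# Merge the most frequent pair into a single phrase token
best = max(pair_freqs, key=pair_freqs.get)  # ('new', 'york') on this corpus
merged = [' '.join(s).replace(' '.join(best), '_'.join(best)).split()
          for s in tokenized]
print(best, merged)
```

Repeating the merge step grows longer phrases, mirroring how character-level BPE grows subwords.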
@@ -345,9 +345,9 @@ for batch in data_iter:
1. Which other hyperparameters in the code of this section may affect the data loading speed?

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/383)
+ [Discussions](https://discuss.d2l.ai/t/5734)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1330)
+ [Discussions](https://discuss.d2l.ai/t/5735)
:end_tab:
@@ -275,9 +275,9 @@ get_similar_tokens('chip', 3, net[0])
1. When the training corpus is huge, we often sample context words and noise words for the *center words* in the current minibatch when updating model parameters. In other words, the same center word may have different context words or noise words in different training epochs. What are the benefits of this approach? Try to implement this training method.

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/384)
+ [Discussions](https://discuss.d2l.ai/t/5739)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1335)
+ [Discussions](https://discuss.d2l.ai/t/5740)
:end_tab:
@@ -121,4 +121,4 @@ $$\frac{\partial \log\, P(w_c \mid \mathcal{W}_o)}{\partial \mathbf{v}_{o_i}} =
1. Some fixed phrases in English consist of multiple words, such as "new york". How can we train their word vectors? Hint: see Section 4 of the word2vec paper :cite:`Mikolov.Sutskever.Chen.ea.2013`.
1. Let us reflect on the word2vec design, taking the skip-gram model as an example. What is the relationship between the dot product of two word vectors in the skip-gram model and their cosine similarity? For a pair of words with similar semantics, why may the cosine similarity of their word vectors (trained by the skip-gram model) be high? (A short note follows this diff.)

- [Discussions](https://discuss.d2l.ai/t/381)
+ [Discussions](https://discuss.d2l.ai/t/5744)
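For the last exercise above, the key identity (standard linear algebra, not part of the original commit) is

$$\cos(\mathbf{u}, \mathbf{v}) = \frac{\mathbf{u}^\top \mathbf{v}}{\|\mathbf{u}\|\,\|\mathbf{v}\|},$$

so the dot product is just the cosine similarity scaled by the two vector norms. Since the skip-gram objective pushes up the dot product between vectors of words that occur in similar contexts, semantically similar words tend to end up with high cosine similarity.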
6 changes: 3 additions & 3 deletions chapter_optimization/adadelta.md
@@ -150,13 +150,13 @@ d2l.train_concise_ch11(trainer, {'learning_rate':5.0, 'rho': 0.9}, data_iter)
1. Compare the convergence behavior of Adadelta with AdaGrad and RMSProp.

:begin_tab:`mxnet`
- [Discussions](https://discuss.d2l.ai/t/357)
+ [Discussions](https://discuss.d2l.ai/t/5771)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1076)
+ [Discussions](https://discuss.d2l.ai/t/5772)
:end_tab:

:begin_tab:`tensorflow`
- [Discussions](https://discuss.d2l.ai/t/1077)
+ [Discussions](https://discuss.d2l.ai/t/5773)
:end_tab:
2 changes: 1 addition & 1 deletion chapter_recurrent-modern/beam-search.md
@@ -166,4 +166,4 @@ $\alpha$ is usually set to $0.75$.
1. In :numref:`sec_rnn_scratch`, we generated text based on user-provided prefixes by using a language model. Which kind of search strategy does this example use? Can you improve it?

- [Discussions](https://discuss.d2l.ai/t/2786)
+ [Discussions](https://discuss.d2l.ai/t/5768)
2 changes: 1 addition & 1 deletion chapter_recurrent-modern/gru.md
@@ -423,5 +423,5 @@ d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, strategy)
:end_tab:

:begin_tab:`pytorch`
- [Discussions](https://discuss.d2l.ai/t/1056)
+ [Discussions](https://discuss.d2l.ai/t/2763)
:end_tab:
2 changes: 1 addition & 1 deletion chapter_recurrent-neural-networks/rnn-concise.md
@@ -287,5 +287,5 @@ d2l.train_ch8(net, train_iter, vocab, lr, num_epochs, strategy)
:end_tab:

:begin_tab:`tensorflow`
- [Discussions](https://discuss.d2l.ai/t/2211)
+ [Discussions](https://discuss.d2l.ai/t/5766)
:end_tab:
