Commit
Update markdown show problem
HarleysZhang committed Nov 4, 2020
1 parent f7ae646 commit 7d202b5
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions 深度学习/神经网络压缩算法总结.md
@@ -114,7 +114,7 @@ def residual_unit(data, num_filter, stride, dim_match, num_bits=1):

+ Binarize the weights directly according to their sign: $x^{b}=sign(x)$. The sign function `sign(x)` is defined as follows:
$$
-sign(x) = \left \{\begin{matrix}
+sign(x) = \left\{ \begin{matrix}
-1 & x < 0 \\
0 & x = 0 \\
1 & x > 0
@@ -130,7 +130,7 @@ $$
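The sign-based binarization above can be sketched in a few lines of NumPy. This is a minimal illustration of the rule $x^{b}=sign(x)$, not the repository's actual training code; the function name `binarize_weights` is a hypothetical helper.

```python
import numpy as np

def binarize_weights(w):
    """Deterministic binarization: x_b = sign(x).

    np.sign maps 0 -> 0, which matches the three-valued
    definition above; in practice trained weights are almost
    never exactly zero, so the output is effectively {-1, +1}.
    """
    return np.sign(w)

w = np.array([-0.7, 0.0, 0.3])
print(binarize_weights(w))  # -> [-1.  0.  1.]
```

During training, binarization like this is typically applied only in the forward and backward passes, while a full-precision copy of the weights accumulates the updates.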
### 4.3, Improvements to the Binary Connect Algorithm

The binary connect algorithm above binarizes only the weights, while the network's intermediate activations remain single precision. Rastegari et al. improved on this by proposing **an algorithm that approximates the original matrix by the product of a single-precision diagonal matrix and a binary matrix**, improving the classification performance of binary networks and making up for their accuracy deficit. The algorithm decomposes the original convolution as follows:
-$$I*W\approx (I*B)\alpha$$
+$$I \times W\approx (I \times B)\alpha$$

where $I \in \mathbb{R}^{c\times w_{in}\times h_{in}}$ is the input tensor of the layer, $W \in \mathbb{R}^{c\times w\times h}$ is one of the layer's filters, and $B=sign(W) \in \{+1, -1\}^{c \times w\times h}$ is the binary weight corresponding to that filter.
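The decomposition can be sketched numerically: given a filter $W$, take $B = sign(W)$ and a single scaling factor $\alpha$. A known closed-form choice that minimizes $\|W - \alpha B\|^2$ is $\alpha = \mathrm{mean}(|W|)$. The shapes and the randomly generated filter below are illustrative assumptions, not values from the text.

```python
import numpy as np

# Hypothetical filter W of shape (c, w, h); values are random for illustration.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3, 3))

# Binary weights: B = sign(W), entries in {-1, +1}.
B = np.sign(W)

# Scaling factor alpha = mean(|W|), the minimizer of ||W - alpha*B||^2.
alpha = np.abs(W).mean()

# alpha * B is the single-precision-scaled binary approximation of W.
W_approx = alpha * B
err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(f"relative approximation error: {err:.3f}")
```

For weights drawn from a zero-mean distribution, the relative error of this one-scalar approximation is well below 1, which is why adding the scaling factor recovers much of the accuracy lost by pure sign binarization.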

