
A UserWarning about calling flatten_parameters() #120

Open

wxhiff opened this issue May 29, 2020 · 0 comments

wxhiff commented May 29, 2020

Hi,

When running !python -u main.py --epochs 500 --nlayers 3 --emsize 200 --nhid 1000 --alpha 0 --beta 0 --dropoute 0 --dropouth 0.25 --dropouti 0.1 --dropout 0.1 --wdrop 0.5 --wdecay 1.2e-6 --bptt 150 --batch_size 128 --optimizer adam --lr 2e-3 --data data/pennchar --save PTBC.pt --when 300 400 I get the following warnings:

-----------------------------------------------------------------------------------------
| end of epoch  28 | time: 317.33s | valid loss  1.01 | valid ppl     2.75 | valid bpc    1.462
-----------------------------------------------------------------------------------------
Saving model (new best validation)
/pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1269: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
/pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1269: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
.
.
.
-----------------------------------------------------------------------------------------
| end of epoch  29 | time: 3123.33s | valid loss  1.00 | valid ppl     2.75 | valid bpc    1.462
-----------------------------------------------------------------------------------------
Saving model (new best validation)
/pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1269: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
/pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1269: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().

It seems to work, but the output is filled with these UserWarnings. I am running it in Google Colaboratory with PyTorch 1.5. How can I fix this? Do I have to use PyTorch 0.4?
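For reference, here is a minimal sketch of what I understand the warning to suggest: calling flatten_parameters() on the RNN at the start of forward() so cuDNN sees one contiguous weight buffer again. The class and module names below are placeholders I made up, not this repo's actual code, and I am not sure how this interacts with the weight-drop wrapper that rewrites the hidden-to-hidden weights on every step:

```python
import torch
import torch.nn as nn

class TinyCharLM(nn.Module):
    """Placeholder model; the names do not match this repo's classes."""

    def __init__(self, ntoken=50, emsize=200, nhid=1000, nlayers=3):
        super().__init__()
        self.encoder = nn.Embedding(ntoken, emsize)
        self.rnn = nn.LSTM(emsize, nhid, nlayers)
        self.decoder = nn.Linear(nhid, ntoken)

    def forward(self, x, hidden=None):
        # Re-compact the cuDNN weight buffer into one contiguous chunk.
        # This is the call the UserWarning asks for; without it, cuDNN
        # re-copies the weights on every forward pass.
        self.rnn.flatten_parameters()
        emb = self.encoder(x)
        output, hidden = self.rnn(emb, hidden)
        return self.decoder(output), hidden
```

Alternatively, the messages can be silenced with warnings.filterwarnings("ignore", message="RNN module weights"), but that only hides the warning; the extra weight copies still happen on every call.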

Thank you!
