
RuntimeError: One of the differentiated Variables appears to not have been used in the graph #3

Open
dragen1860 opened this issue Dec 21, 2017 · 10 comments

Comments

@dragen1860

Learner nParams: 32901
Traceback (most recent call last):
  File "main.py", line 38, in <module>
    results = importlib.import_module(opt['metaLearner']).run(opt, data)
  File "/home/i/meta/FewShotLearning/model/lstm/train-lstm.py", line 121, in run
    opt['batchSize'][opt['nTrainShot']])
  File "/home/i/conda/envs/py27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/i/meta/FewShotLearning/model/lstm/metaLearner.py", line 174, in forward
    torch.autograd.grad(loss, self.lstm2.parameters())
  File "/home/i/conda/envs/py27/lib/python2.7/site-packages/torch/autograd/__init__.py", line 158, in grad
    inputs, only_inputs, allow_unused)
RuntimeError: One of the differentiated Variables appears to not have been used in the graph

I ran your code in a py27 env, but this error occurs and I don't know what is going wrong.
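
For context: torch.autograd.grad raises this error whenever one of the requested parameters has no path to the loss in the autograd graph (newer PyTorch versions say "Tensors" instead of "Variables"). A minimal sketch that reproduces the message and shows the effect of allow_unused=True; this is illustrative code, not code from this repository:

import torch
import torch.nn as nn

# Two parameters, but the loss only depends on one of them.
used = nn.Parameter(torch.randn(3))
unused = nn.Parameter(torch.randn(3))

loss = (used * 2).sum()  # `unused` never enters the graph of `loss`

# Raises the "appears to not have been used in the graph" RuntimeError,
# because `unused` is disconnected from `loss`.
try:
    torch.autograd.grad(loss, [used, unused])
except RuntimeError as e:
    print(e)

# With allow_unused=True the call succeeds and the gradient for the
# disconnected parameter is returned as None instead of raising.
grads = torch.autograd.grad(loss, [used, unused], allow_unused=True)
print(grads)  # (tensor([2., 2., 2.]), None)

So the error at metaLearner.py line 174 means that at least one parameter of self.lstm2 does not contribute to loss at that point; allow_unused=True silences the error but does not reconnect those parameters.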

@SinghGauravKumar

Hi @dragen1860 @gitabcworld, how did you solve this?

@dragen1860
Author

Hi, I didn't solve this problem. Have you found any solution?

@elviswf

elviswf commented Feb 4, 2018

I changed metaLearner.py, line 174, to the following:
torch.autograd.grad(loss, self.lstm2.parameters(), allow_unused=True, retain_graph=True)

It runs, but "Grads lstm + lstm2" prints None.
Precision goes up.
avg / total 0.21 0.22 0.19 7500

Not sure yet.
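
A side effect of allow_unused=True is that every parameter with no path to the loss gets a gradient of None, which then has to be handled before the meta-learner update. A common workaround (a sketch under that assumption, not code from this repository) is to substitute zeros for the missing gradients:

import torch

def grads_or_zeros(loss, params):
    # allow_unused=True returns None for parameters that do not
    # participate in the graph of `loss`; replace those entries with
    # zero tensors so downstream code can treat all gradients uniformly.
    params = list(params)
    grads = torch.autograd.grad(loss, params,
                                allow_unused=True, retain_graph=True)
    return [g if g is not None else torch.zeros_like(p)
            for g, p in zip(grads, params)]

If all of the lstm2 gradients come back as None, those parameters are genuinely disconnected from the loss, and zero-filling only hides that; the broken graph connection is what needs fixing.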

@dragen1860
Author

@elviswf Does it work now? What is your final precision?

@elviswf

elviswf commented Feb 4, 2018

@dragen1860 I just got the script to run; I think you can get it running too. The final precision is as shown above. Some parameters may need to be changed. I will try it next week, since I'm currently working on another project. If I find something new, I will update this comment.

@dragen1860
Author

@elviswf What is your latest progress?

@gitabcworld
Owner

Hi! I am sorry I could not work on this code for a while. Like @elviswf, I am really busy with other projects and have not been able to dedicate more time to this one, so any help will be appreciated. I will try to make the changes @elviswf proposes and see if they solve the backprop problem as soon as possible.

@Jorewang

Has anyone solved this problem (One of the differentiated Variables appears to not have been used in the graph)?

@lwzhaojun

Have you encountered this problem?
RuntimeError: CUDA out of memory. Tried to allocate 18.00 MiB (GPU 0; 8.00 GiB total capacity; 5.57 GiB already allocated; 10.05 MiB free; 607.43 MiB cached)
The original problem (One of the differentiated Variables appears to not have been used in the graph) can be solved by adding allow_unused=True.

@lwzhaojun

lwzhaojun commented Dec 17, 2019

(quoting @elviswf's workaround above: torch.autograd.grad(loss, self.lstm2.parameters(), allow_unused=True, retain_graph=True))

Hello. Have you encountered this problem?
RuntimeError: CUDA out of memory. Tried to allocate 18.00 MiB (GPU 0; 8.00 GiB total capacity; 5.57 GiB already allocated; 10.05 MiB free; 607.43 MiB cached)
Isn't the memory released? I tried reducing the batch size, but that didn't work either. Do you have any good advice? Can you help me?
Grateful, friend.
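
On the out-of-memory question, two general observations (not specific to this repository): retain_graph=True keeps the whole autograd graph alive after the grad call, and accumulating loss tensors across iterations (instead of loss.item()) does the same, so memory can keep growing even with a small batch size. A minimal sketch of an evaluation loop that avoids building a graph at all, using a hypothetical evaluate helper:

import torch

def evaluate(model, loader, device="cuda"):
    # Run evaluation without building an autograd graph; this alone
    # often frees a large share of GPU memory during testing.
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            pred = model(x).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    # Return cached blocks to the allocator after the pass.
    torch.cuda.empty_cache()
    return correct / max(total, 1)

For the training side, storing running statistics with loss.item() or loss.detach() instead of the loss tensor itself prevents every iteration's graph from being kept alive.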
