
Parameter indices which did not receive grad for rank 0: 338 339 340 341 342 343 344 345 346 347 348 349 350 #251

Open
ashwani-ver opened this issue Jun 12, 2024 · 3 comments


@ashwani-ver

Can anyone help me with this?
I am training on a custom dataset and have changed the image dataset class to suit my needs.
The error below appears when I start training the model.

[rank0]: making sure all forward function outputs participate in calculating loss.
[rank0]: If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
[rank0]: Parameter indices which did not receive grad for rank 0: 338 339 340 341 342 343 344 345 346 347 348 349 350
[rank0]: In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
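As the message suggests, TORCH_DISTRIBUTED_DEBUG makes the error list the offending parameters by name instead of only by index. A minimal sketch of setting it from inside the entry script (it can also simply be exported in the shell before launching):

    import os

    # Must be set before the process group is initialized;
    # "INFO" is a lighter-weight option than "DETAIL".
    os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"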

I have changed the __getitem__ method:
def __getitem__(self, i):
    example = dict()
    original_image = self.preprocess_image(self.labels["file_path_"][i])
    noisy_image = self.add_noise(original_image)

    # Scale pixel values from [0, 255] to [-1, 1]
    original_image = (original_image / 127.5 - 1.0).astype(np.float32)
    noisy_image = (noisy_image / 127.5 - 1.0).astype(np.float32)

    example["image"] = noisy_image
    example["target"] = original_image
    return example
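
For context, items returned as a dict like this get stacked key-by-key by the default DataLoader collate. A small self-contained sketch (the dataset name and the random stand-ins for preprocess_image / add_noise are mine, not the actual code):

    import numpy as np
    from torch.utils.data import Dataset, DataLoader

    class ToyDictDataset(Dataset):
        """Hypothetical stand-in that returns dicts shaped like the example above."""

        def __len__(self):
            return 8

        def __getitem__(self, i):
            # Fake a 32x32 RGB image in [0, 255], add noise, then scale to [-1, 1] as above.
            img = np.random.randint(0, 256, (32, 32, 3)).astype(np.float32)
            noisy = np.clip(img + np.random.normal(0, 25, img.shape), 0, 255)
            return {
                "image": (noisy / 127.5 - 1.0).astype(np.float32),
                "target": (img / 127.5 - 1.0).astype(np.float32),
            }

    loader = DataLoader(ToyDictDataset(), batch_size=4)
    batch = next(iter(loader))
    # Default collate stacks each key into a tensor of shape (4, 32, 32, 3), values in [-1, 1].
    print(batch["image"].shape, batch["target"].shape)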
@SHAREN111

I didn't modify the code, and I encountered the same issue as you when training VQVAE with a custom dataset. Have you resolved your issue? Looking forward to your response.

@ashwani-ver
Author

Yes, I resolved it. In main.py, add this import:

from pytorch_lightning.plugins import DDPPlugin

Then go to line 523 in main.py and add the code below just after line 523 and before line 526:

ddp_plugin = DDPPlugin(find_unused_parameters=True)
trainer_kwargs["plugins"] = [ddp_plugin]
trainer = Trainer.from_argparse_args(trainer_opt, **trainer_kwargs)
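
For anyone on a newer pytorch_lightning where DDPPlugin no longer exists (plugins were replaced by strategies around Lightning 1.6, and Trainer.from_argparse_args was later removed as well), a rough, untested equivalent:

    from pytorch_lightning import Trainer
    from pytorch_lightning.strategies import DDPStrategy

    # Extra kwargs to DDPStrategy are forwarded to torch's DistributedDataParallel,
    # so this is the counterpart of DDPPlugin(find_unused_parameters=True).
    trainer = Trainer(strategy=DDPStrategy(find_unused_parameters=True))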

@SHAREN111


Thank you for your generosity and kindness in helping me resolve the issue.
