(py36) C:\Users\c0116\Anaconda3\envs\py36\HDGan-master\train\train_gan>sh train_birds.sh device=0
mkdir: created directory '../../Models/HDGAN_256_birds'
WARNING:root:Setting up a new session...
WARNING:root:Setting up a new session...
WARNING:root:Setting up a new session...
WARNING:root:Setting up a new session...
WARNING:root:Setting up a new session...
WARNING:root:Setting up a new session...
WARNING:root:Setting up a new session...
WARNING:root:Setting up a new session...
WARNING:root:Setting up a new session...
WARNING:root:Setting up a new session...
....\HDGan\proj_utils\torch_utils.py:23: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
x.volatile = volatile
Traceback (most recent call last):
File "train_worker.py", line 113, in
train_gans((dataset_train, dataset_test), model_root, model_name, netG, netD, args)
File "....\HDGan\HDGan.py", line 242, in train_gans
d_plot_dict[key].plot(to_numpy(img_loss).mean())
Launch python -m visdom.server -port 43426 to monitor
File "....\HDGan\proj_utils\torch_utils.py", line 41, in to_numpy
return x.cpu().numpy()
RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.
Namespace(KL_COE=4, batch_size=10, cuda=True, d_lr=0.0002, dataset='birds', device_id=0, display_freq=200, epoch_decay=100, finest_size=256, g_lr=0.0002, gpus='0', init_256generator_from='', load_from_epoch=0, maxepoch=600, model_name='HDGAN_256', ncritic=1, noise_dim=100, num_emb=4, num_resblock=1, reuse_weights=False, save_freq=5, test_sample_num=4, verbose_per_iter=50, visdom_port=8097)
Init HDGAN Generator
side output at [64, 128, 256]
Init HDGAN Discriminator
Add adversarial loss at scale [64, 128, 256]
Parallel models in [0] GPUS
Init basic data loader train
8855 samples (batch_size = 10)
[64, 128, 256] output resolutions
4 embeddings used
Init basic data loader test
2933 samples (batch_size = 10)
[64, 128, 256] output resolutions
1 embeddings used
Start training ...
An error occurred when I tried to execute train_birds.sh; I did not change any files or the model.
Could you tell me what correction should be made?
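For reference, the traceback ends in to_numpy calling x.cpu().numpy() on a tensor that still requires grad. Below is a minimal sketch of the kind of change that usually resolves this RuntimeError, assuming to_numpy receives a PyTorch tensor (the real helper in HDGan/proj_utils/torch_utils.py may handle more cases, so this is an illustration, not the project's actual code):

```python
import torch


def to_numpy(x):
    """Convert a tensor to a NumPy array, even if it requires grad."""
    if isinstance(x, torch.Tensor):
        # detach() returns a view that no longer tracks gradients,
        # which makes the cpu().numpy() conversion legal
        return x.detach().cpu().numpy()
    return x
```

With this change, a loss tensor produced inside the training loop can be passed to the plotting code without first calling detach() at every call site.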
Environment
Windows 10
Anaconda 3.6
Python 3.6
PyTorch 0.3.1
TensorFlow 1.4.1
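The UserWarning about volatile points the same way: both messages appear when code written for PyTorch 0.3 runs on PyTorch 0.4 or newer, where the Variable/Tensor split was removed. A hedged sketch of how inference code is typically updated (run_inference is a hypothetical example, not a function from this repository):

```python
import torch

def run_inference(model, x):
    # On PyTorch >= 0.4, `x.volatile = True` has no effect; wrapping the
    # forward pass in torch.no_grad() is the replacement for disabling
    # gradient tracking during evaluation.
    with torch.no_grad():
        return model(x)
```

Alternatively, installing the PyTorch version the repository targets (0.3.x) would avoid both messages without code changes.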