Need help..error running sample prediction on KTH dataset #27
Comments
You need to change to batch_size=9; maybe it will work.
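To see why batch_size=9 resolves the error: the script refuses any batch size that does not evenly divide the dataset size (819 test examples here, per the traceback below). A minimal sketch of that check; the `divisors` helper is illustrative, not part of the repo:

```python
def divisors(n):
    """Return all positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

# The KTH test split reported by generate.py has 819 examples.
# Only a batch_size in this list divides it evenly:
print(divisors(819))   # 9 is among them, so batch_size=9 passes the check
print(819 % 8)         # a nonzero remainder here triggers the ValueError
```

Any other divisor of 819 (e.g. 3, 7, 13, or 21) would work just as well.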
Ok, thanks, will try that.
Thanks, the batch issue is resolved, but I encountered another error, a tensor shape mismatch: InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [4,4,2,64] rhs shape= [4,4,6,64]. Please help.
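For context on the shapes above: in TensorFlow a conv kernel is stored as (height, width, in_channels, out_channels), so [4,4,2,64] vs [4,4,6,64] means the restored checkpoint and the rebuilt graph disagree on the number of input channels (2 vs 6), which typically points to mismatched image channels or model hparams between training and restore time. A minimal sketch of the restore-time check that fails, using NumPy stand-ins (variable names hypothetical, not from the repo):

```python
import numpy as np

# TensorFlow conv-kernel layout: (height, width, in_channels, out_channels).
checkpoint_kernel = np.zeros((4, 4, 2, 64))  # kernel saved in the checkpoint
graph_kernel = np.zeros((4, 4, 6, 64))       # kernel the current graph expects

def can_assign(dst, src):
    """Mimic the restore-time assign check: shapes must match exactly."""
    return dst.shape == src.shape

print(can_assign(graph_kernel, checkpoint_kernel))  # False: channel axis 6 != 2
```

The fix is to make the graph's hparams (and input preprocessing) match those used to train the checkpoint, rather than to reshape either tensor.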
Check this: Same problem. Uncomment line 118. I used batch-size=8 and num-samples=816; it worked for me.
Hi, I switched to tf 1.10 and at least testing is working |
This is the only thing that worked for me, as of 2021.
Hi @alexlee-gk
I am trying to run the pretrained model on the KTH dataset. I have successfully downloaded and preprocessed the dataset by running the following command:
bash data/download_and_preprocess_dataset.sh kth
I also downloaded the pretrained model:
bash pretrained_models/download_model.sh kth ours_savp
and am now trying to run sample prediction on the KTH dataset, but I am getting the following batch-size error. The command is:
CUDA_VISIBLE_DEVICES=0 python scripts/generate.py --input_dir data/kth --dataset_hparams sequence_length=30 --checkpoint pretrained_models/kth/ours_savp --mode test --results_dir results_test_samples/kth
The error is:
File "scripts/generate.py", line 193, in
main()
File "scripts/generate.py", line 130, in main
raise ValueError('batch_size should evenly divide the dataset size %d' % num_examples_per_epoch)
ValueError: batch_size should evenly divide the dataset size 819
I am confused about what went wrong.
I am a newbie in deep learning and TensorFlow.
Please help
I also want to try your models on my own dataset and would appreciate your guidance.
Thanks in advance
Avani