zipped ImageNet processing scripts #192
Comments
+1. I found that data preparation needs more than 100 GB of memory when training Swin-Transformer, which is surprising. Some information on this issue: the log line `global_rank 6 cached 0/1281167 takes 0.00s per block` counts up toward the total number of train and val images. Should it be caching the image data or just the file list? My machine runs out of memory, so I suspect it is caching the image data (see the sketch after this comment). The details and the run command are as follows:
My memory information is as follows (swap is set equal to the memory size, but it is limited): root@a41cbab8ac5e:
Please help me, thank you very much!
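For context on why the memory grows with the dataset size, here is a minimal sketch of how in-memory caching of a zipped dataset typically works. It is an illustration only, not the repository's actual code; the class name `CachedZipReader` and the `train.zip` layout are assumptions. If the loader reads the whole archive into RAM, peak memory is roughly the size of the uncompressed JPEG bytes, which for the full ImageNet train split is on the order of 100+ GB and matches the usage reported above.

```python
# Illustrative sketch (assumed names, not the repo's implementation).
import io
import zipfile

class CachedZipReader:
    def __init__(self, zip_path, cache_bytes=True):
        if cache_bytes:
            # Read the entire archive into memory once; every image access
            # afterwards is served from RAM, so memory scales with dataset size.
            with open(zip_path, "rb") as f:
                self._buffer = io.BytesIO(f.read())
            self._zip = zipfile.ZipFile(self._buffer)
        else:
            # Only the member list is kept in memory; each read goes to disk.
            self._zip = zipfile.ZipFile(zip_path)

    def read(self, member_name):
        # Returns the raw JPEG bytes for one image, e.g. "n01440764/xxx.JPEG".
        return self._zip.read(member_name)
```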
I succeeded! I increased my memory allocation to 200 GB, but that still could not support training on 8 GPUs with batch size 56; it ran out of memory. So I set 4 GPUs with batch size 48, and it works. Maybe I should allocate more memory and try again later. [2022-05-13 23:36:32 swin_small_patch4_window7_224](main.py 229): INFO Train: [16/300][860/6672] eta 0:47:01 lr 0.000421 time 0.5162 (0.4855) loss 5.1306 (4.9953) grad_norm 2.3225 (2.8221) mem 7785MB Memory usage (4 GPUs with batch size 48; I think it has nothing to do with the batch size): root:
I wonder how to generate the zipped ImageNet labels, e.g. train_map.txt?
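A hedged sketch of how such a map file could be generated, assuming the format is one `<path inside zip>\t<class index>` entry per line and that class indices follow the sorted synset-folder order (the usual torchvision convention). The function name and folder layout below are assumptions for illustration, not confirmed by this issue.

```python
import os

def build_map_file(image_root, output_path):
    # image_root uses the standard layout: image_root/<synset>/<image>.JPEG
    synsets = sorted(os.listdir(image_root))
    class_to_idx = {s: i for i, s in enumerate(synsets)}
    with open(output_path, "w") as out:
        for synset in synsets:
            for fname in sorted(os.listdir(os.path.join(image_root, synset))):
                # Paths are written relative to the zip root.
                out.write(f"{synset}/{fname}\t{class_to_idx[synset]}\n")

# Example (hypothetical paths): build_map_file("ImageNet/train", "train_map.txt")
```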
Hi, can you provide processing scripts for zipped ImageNet?
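In the absence of an official script, here is a minimal sketch of packing the extracted ImageNet folders into zip archives, assuming the target layout is `train.zip` / `val.zip` containing `<synset>/<image>.JPEG` entries (the layout implied by the map-file sketch above). Names and paths are illustrative assumptions.

```python
import os
import zipfile

def zip_split(image_root, zip_path):
    # ZIP_STORED keeps the JPEGs uncompressed, which is faster to read at train time.
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_STORED) as zf:
        for synset in sorted(os.listdir(image_root)):
            folder = os.path.join(image_root, synset)
            for fname in sorted(os.listdir(folder)):
                zf.write(os.path.join(folder, fname), arcname=f"{synset}/{fname}")

# Example (hypothetical paths): zip_split("ImageNet/train", "train.zip")
```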