Training stops for KTO after model loads into memory #1938
Comments
Seems like a sudden death; I do think there is a memory problem. Can you please observe the memory usage while running this training?
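For reference, here is a minimal sketch of one way to watch GPU memory from inside the training process. The helper and its polling interval are illustrative assumptions, not something from this thread:

```python
# Hypothetical helper: poll per-GPU memory in a background thread while training runs.
# Requires a CUDA build of PyTorch; the 10-second interval is an arbitrary choice.
import threading
import time

import torch

def log_gpu_memory(interval_s: float = 10.0) -> None:
    while True:
        for dev in range(torch.cuda.device_count()):
            alloc = torch.cuda.memory_allocated(dev) / 2**30     # GiB currently allocated
            peak = torch.cuda.max_memory_allocated(dev) / 2**30  # GiB high-water mark
            print(f"cuda:{dev} allocated={alloc:.1f} GiB peak={peak:.1f} GiB")
        time.sleep(interval_s)

threading.Thread(target=log_gpu_memory, daemon=True).start()
```

Running `watch -n 1 nvidia-smi` in a second shell works too, and also shows whether the process was killed (e.g. by the OOM killer) rather than hung.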
I had the same config with the same dataset and model, and it worked. But I will check.
Here is my GPU usage at the crash point: after the crash, both GPUs drop to 1 MiB.
It also happens when quantized to 8-bit; there is ~77 GB of free memory.
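For context, the 8-bit path mentioned above is typically set up along these lines; the model id is a placeholder, and this is only an illustration of 8-bit loading, not the reporter's exact config:

```python
# Sketch of loading a causal LM in 8-bit via bitsandbytes; the model id is a placeholder.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",  # shard layers across the available GPUs
)
```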
Okay, so the training works as expected on Azure servers, but I hit these issues on TensorDock and Massed Compute. All the servers had 2xA100 80GB.
Describe the bug
What the bug is, and how to reproduce it, ideally with screenshots.
The process stops after loading the model into memory and processing the dataset. I also tried another dataset that worked before (15-25 days ago), but it is not working now; this same configuration also worked 15-25 days ago.
I also tried using `trl==0.9.6` but had the same issues. I also tried switching servers between different vendors and using H100s instead of A100s.
Training arguments:
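(The reporter's actual training arguments are not preserved here. For orientation only, a minimal KTO run with trl 0.9.x has roughly the shape below; the model id, dataset, and hyperparameters are placeholders, not the values from this report.)

```python
# Rough sketch of a KTO run with trl 0.9.x; model/dataset ids and hyperparameters
# are placeholders, not the configuration from this issue.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# KTO expects "prompt", "completion", and boolean "label" columns.
train_dataset = load_dataset("trl-lib/kto-mix-14k", split="train")  # placeholder dataset

args = KTOConfig(
    output_dir="kto-out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    logging_steps=10,
)
trainer = KTOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # trl 0.9.x takes `tokenizer`; newer releases renamed it to `processing_class`
)
trainer.train()
```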
Logs:
Had to use Pastebin because of the GitHub issue body limit.
Pastebin.
Your hardware and system info
List your system info here: CUDA version, OS, GPU model, torch version, etc.
GPUs: 2xA100 from Massed Compute
Additional context
Add any other context about the problem here.