debugging out-of-memory exception #75
Comments
That's indeed one of the disadvantages of using a GC rather than ref-counting: GPU memory is handled via RAII mechanisms in libtorch, so it is only released when the GC triggers and collects the data, rather than being freed as early as possible.
Thanks, yes -- I found that adding this line at the end of the for loop:
keeps the memory from blowing up. It also improves performance to the point that it's faster than PyTorch. Yay.
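The exact line isn't preserved in this thread; as an illustration of the pattern being described (a forced major collection at the end of each iteration), here is a minimal sketch with a placeholder loop body:

```ocaml
(* Sketch only: step_fn stands in for the real per-iteration tensor work.
   The explicit major collection runs the finalizers of dead tensors so
   their GPU buffers are released before the next iteration. *)
let train_loop ~steps ~step_fn =
  for i = 1 to steps do
    step_fn i;
    Caml.Gc.full_major ()
  done
```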
Hi Laurent, I'm still struggling with this issue; repeated calls to Caml.Gc.major () or Caml.Gc.full_major () don't seem to be helping. I'm guessing that the GC sees the Tensors only as their small pointers / OCaml structures, not as the many MB of GPU RAM that they consume, so it chooses not to deallocate them? For example, the change in GC state below corresponds with losing more than 10GB of GPU RAM, thereby making the app fail.
Note: the code from Dec 12, with repeated calls to Caml.Gc.full_major (), consumes more than 50x the GPU RAM it naively ought to (excluding the 916MB allocated by default). I'm considering writing the critical code in C++ and deallocating through RAII, as you mention. If there are examples of this in your source code, I would be happy to study them and report back. Thank you again for this excellent library!
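For reference, the kind of GC-state numbers mentioned above can be gathered with the standard library's Gc module; a minimal sketch follows (note these counters only cover OCaml-heap words, not the GPU memory held behind tensor handles):

```ocaml
(* Minimal sketch: report the OCaml GC's view of the heap before and
   after a forced major collection. GPU allocations made by libtorch
   are invisible to these counters. *)
let report label =
  let s = Caml.Gc.stat () in
  Printf.printf "%s: live_words=%d heap_words=%d\n" label s.live_words s.heap_words

let () =
  report "before full_major";
  Caml.Gc.full_major ();
  report "after full_major"
```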
In wrapper_generated.ml, there are (deferred) calls to C.Tensor.free:
Is it possible to call C.Tensor.free directly? It seems easier than writing it in C++.
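For illustration only (the names below are hypothetical stand-ins, not ocaml-torch's actual bindings), the usual way such deferred frees are wired up in OCaml is by attaching a finalizer to the wrapper value, which is why the underlying free only runs when the GC eventually collects it:

```ocaml
(* Hypothetical sketch of the deferred-free pattern; the real code in
   wrapper_generated.ml may differ in names and structure. *)
module Fake_c = struct
  type raw_tensor = { mutable freed : bool }  (* stand-in for the C handle *)
  let free r = r.freed <- true                (* stand-in for C.Tensor.free *)
end

type t = { handle : Fake_c.raw_tensor }

let wrap raw =
  let t = { handle = raw } in
  (* The C-side free runs only when the GC finalizes this wrapper, so the
     GPU buffer can outlive the tensor's last use in the program. *)
  Caml.Gc.finalise (fun t -> Fake_c.free t.handle) t;
  t
```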
It's indeed the case that the GC doesn't see the tensor as occupying a large amount of memory, however this should not be an issue as the call to Gc.full_major () should collect all the dangling memory regardless of whether it uses a large amount or not (the GC knowing about the memory usage is only useful to decide *when* to trigger the GC).
So if you still see memory usage increasing despite regular full major collections, there is a deeper issue somewhere; it could be a bug in ocaml-torch or your code somehow retaining references to the tensors. I would suggest reducing the example as much as possible until there is no memory leak anymore, and hopefully this should give an idea of what is going on (and if you have a very short repro, that would be useful to help debug the issue if it's within ocaml-torch).
Got it. I'll experiment more and make a minimal repository that demonstrates the leak.
Can you try this? Calling Gc.full_major () does not decrease the memory allocation. Interestingly, memory allocation climbs for the first several iterations and then saturates by the 20th.
What if this is some issue with gradient tracing (??) |
Sorry, I didn't find the time to look at your repro so far. Gradient tracing may indeed be the culprit, though the issue usually happens when there is some form of global accumulator, which I don't see in your code. Anyway, you can try running this within a Tensor.no_grad block to deactivate gradient tracing. It might also be worth looking at this PyTorch FAQ <https://pytorch.org/docs/stable/notes/faq.html#my-model-reports-cuda-runtime-error-2-out-of-memory> in case there is anything related. The memory allocation climbing only for the first iterations makes me more suspicious of the allocator doing some caching; it could be interesting to check what happens when using a cpu device rather than a gpu, as well as whether this also happens when running similar code with the Python api.
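As a rough sketch of that suggestion (the no_grad call form matches the later reply; the tensor operations inside are placeholders and the operator names are assumptions about the bindings):

```ocaml
open Torch

(* Sketch: run the allocation-heavy part of each iteration with gradient
   tracking disabled so intermediates are not retained for backprop.
   The body is a placeholder; operator names are assumed, not verified. *)
let collision_step ~img ~dbf =
  Tensor.no_grad (fun () ->
    let diff = Tensor.(dbf - img) in
    Tensor.sum Tensor.(diff * diff))
```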
Cool -- I tried isolating with a Tensor.no_grad (fun () -> ...) and it did not do anything to the memory usage.
Tried running it on the CPU, as you suggested. Memory peaks at 1.3GB, then goes down to ~950MB. This is roughly half the memory usage on the GPU. So - that's good ???
There is a Python implementation of the same operations in the repo. It does not leak, so far as I can tell.
I just tried your example and it seems to me that after adding a call to …
Yes, that makes sense -- I wonder what it's caching, though. It would be really nice to get my GPU RAM back. FWIW, I added a C++ test (thank you, GPT-4), which uses even less memory. I suppose I can FFI it? Might be useful for other ocaml-torch users?
Hello,
I've been trying to shift a hybrid OCaml-Python program to mostly OCaml. Part of this program is a simple image collision test; when implemented in OCaml, it is more than 2x slower than the Python equivalent. Digging a bit, I noticed that the OCaml implementation needs a lot of memory. The following is a minimum working example that runs into the same memory leak / out-of-memory problem:
Would love to figure out how to get this working. I suspect variables are being allocated on every loop iteration, and the GC is not getting around to removing them. I would love to make it as performant as Python (can't believe I'm saying that!) -- perhaps by streamlining "d = th.sum((dbf - img)**2, (1,2))"?
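For what it's worth, here is a rough OCaml sketch of that computation done one template at a time, sticking to calls that are easy to verify; the batched reduction over dims (1, 2) would instead go through the generated sum_dim_intlist binding, whose exact labels I haven't checked, and the arithmetic operators below are assumptions as well:

```ocaml
open Torch

(* Rough sketch of the distance computation from the Python line
   d = th.sum((dbf - img)**2, (1,2)), written per template to stay with
   easily verified calls. With a batched dbf, the per-image reduction
   would use the generated sum_dim_intlist binding over dims (1, 2). *)
let squared_distance ~template ~img =
  let diff = Tensor.(template - img) in
  Tensor.sum Tensor.(diff * diff)
```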
Any advice much appreciated.