With #2285 implemented, we'll have makeshift source references from deparsed code. But these have limitations: some objects are not parsable, including expression trees containing complex literals, and we don't know the original file organisation of the namespace objects. It would be better to create source references from an actual copy of the repo.
I think we could do this with pak::pkg_download(). It can download source tarballs and has the very nice property that downloads are cached on disk. I think this makes it reasonable to download sources when needed and build the source references from there. Then the user is able to navigate foreign code in the original context. This would only work when pak and the Internet (or the cache) are available, so the makeshift source refs are still useful to provide generalised source availability.
I now think this shouldn't use pak. Instead we should download the sources, process them, and cache the result ourselves from Rust async tasks or threads. This way we don't depend on pak and don't take up time in the runtime. We can also reparse these sources in another process for the same reason, and save the new sources to an on-disk cache (https://crates.io/crates/directories) that our runtime can retrieve with readRDS().
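As a sketch of that flow on the Rust side: everything below is hypothetical (function names, the package, the cache location, which uses the temp dir for portability rather than the platform cache dir the `directories` crate would provide), but it shows the hit/miss logic, with the expensive download-and-reparse step run off the main thread only on a miss:

```rust
use std::fs;
use std::path::PathBuf;
use std::thread;

/// Hypothetical cache location. The `directories` crate would give a
/// platform-correct path (e.g. ~/.cache/ark on Linux); this sketch uses
/// the temp dir so it runs anywhere.
fn cache_path(pkg: &str, version: &str) -> PathBuf {
    std::env::temp_dir()
        .join("ark-srcref-cache")
        .join(format!("{pkg}-{version}.rds"))
}

/// Populate the cache off the main thread on a miss; return immediately on
/// a hit. In ark the worker would download the tarball, reparse the sources
/// in a subprocess, and serialise the result so R can readRDS() it later.
fn sources_for(pkg: &str, version: &str) -> std::io::Result<PathBuf> {
    let path = cache_path(pkg, version);
    if path.exists() {
        return Ok(path); // cache hit: no download, no reparse
    }
    let target = path.clone();
    let worker = thread::spawn(move || -> std::io::Result<()> {
        // Placeholder for: download tarball, reparse, serialise result.
        fs::create_dir_all(target.parent().unwrap())?;
        fs::write(&target, b"processed sources placeholder")
    });
    // Joined here only to keep the sketch synchronous; in ark this would be
    // an async task the debugger awaits.
    worker.join().expect("worker panicked")?;
    Ok(path)
}

fn main() -> std::io::Result<()> {
    let first = sources_for("dplyr", "1.1.4")?; // miss: populates the cache
    let second = sources_for("dplyr", "1.1.4")?; // hit: returned immediately
    assert_eq!(first, second);
    println!("cached at {}", first.display());
    Ok(())
}
```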
As food for thought, I think the Positron R extension will definitely use pak (or at least offer the option and encouragement to do so). I realise that ark is meant to be usable outside the context of Positron, so the above doesn't mean ark can assume pak is there. But if pak is available, should ark use the pak cache to satisfy the debugger's need for sources? It feels like pak and renv are on a convergent evolutionary path to play well together, so that might be worth emulating.
I see what you mean. Actually, we could implement all or most of this in R and launch a subprocess to do the work; this way we're not interfering with the session and all the complexity is offloaded to pak, as originally planned.
This means nice source refs will only be available if pak is installed in the user library, but we do have reparsed sources as a fallback when it isn't, so that seems acceptable.
Also we could even install pak in a private and versioned Ark library.
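A minimal sketch of the subprocess idea: the private library would be advertised to the child process via `R_LIBS_USER` (a standard R environment variable). The library path is hypothetical, and `sh` stands in for `Rscript` here only so the snippet runs without R installed; in ark the child would be something like `Rscript -e 'pak::pkg_download(...)'`:

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // In ark this would launch `Rscript` pointed at a private, versioned
    // library. `sh` is used here purely to demonstrate the env-var
    // mechanism; "/opt/ark/r-library" is a made-up path.
    let output = Command::new("sh")
        .arg("-c")
        .arg("echo \"library: $R_LIBS_USER\"")
        .env("R_LIBS_USER", "/opt/ark/r-library")
        .output()?;
    print!("{}", String::from_utf8_lossy(&output.stdout));
    Ok(())
}
```

One nice property of passing the library through the environment rather than `.libPaths()` in code is that it applies before the child's startup files run, so pak is found even during package loading.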