Writing an attachment forces me to end my chunk #33
Comments
That’s right. Is it a problem?
Working through my implementation, in my case it isn't, since the chunk is held in memory until I need to write it, so an attachment that comes in while I am still building a chunk is written out before the chunk is. This does mean that the timestamp of the first message in the chunk can be later than the attachment timestamp. So my original comment that you have to write out the chunk when you get an attachment isn't accurate if you are keeping the chunk in memory. However, writers that stream out a chunk and then go back to fill in the record length would need to finalize the chunk before writing the attachment. Whether this is a "problem" is semantics, I guess; it is something a writer implementation needs to be mindful of.
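The tradeoff between the two approaches above can be sketched in Python. This is a toy model, not the real MCAP writer API; the record tuples and method names are invented purely to contrast the record ordering each strategy produces:

```python
class BufferedChunkWriter:
    """Buffers the in-progress chunk in memory, so an attachment can be
    written to the output immediately without ending the chunk."""

    def __init__(self):
        self.output = []  # finalized records, in file order
        self.chunk = []   # messages in the in-progress chunk

    def write_message(self, msg):
        self.chunk.append(msg)

    def write_attachment(self, att):
        # The attachment goes straight to the file; the open chunk is untouched,
        # so it will land *after* the attachment even if its first message
        # arrived earlier.
        self.output.append(("attachment", att))

    def flush_chunk(self):
        if self.chunk:
            self.output.append(("chunk", list(self.chunk)))
            self.chunk.clear()


class StreamingChunkWriter:
    """Streams chunk bytes as they arrive; it cannot interleave other records
    into the chunk, so an attachment forces the chunk (and its message
    indexes) to be finalized first."""

    def __init__(self):
        self.output = []
        self.chunk = []

    def write_message(self, msg):
        self.chunk.append(msg)

    def _end_chunk(self):
        if self.chunk:
            self.output.append(("chunk", list(self.chunk)))
            # Message index records follow the chunk they index.
            self.output.append(("message_indexes", len(self.chunk)))
            self.chunk.clear()

    def write_attachment(self, att):
        self._end_chunk()  # must finalize the open chunk first
        self.output.append(("attachment", att))


# Same call sequence, different record layouts:
buffered = BufferedChunkWriter()
buffered.write_message("m1")
buffered.write_attachment("a1")
buffered.write_message("m2")
buffered.flush_chunk()
# buffered.output: attachment first, then one chunk holding both messages.

streaming = StreamingChunkWriter()
streaming.write_message("m1")
streaming.write_attachment("a1")
streaming.write_message("m2")
streaming._end_chunk()
# streaming.output: the chunk is split in two around the attachment.
```

The buffered variant keeps the per-second chunk intact at the cost of holding it in memory; the streaming variant keeps memory flat but fragments chunks whenever an attachment arrives.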
After thinking through this more, I dislike the fact that attachments cannot be compressed in MCAP files. I'd rather not see
Another consideration: if attachments could live inside chunks, we wouldn't need a separate crc field on attachments. |
My 2c on recording.mcap.gz is that nothing prevents you from attaching a compressed file; that's how I'd do it.
Add to implementation notes thoughts on the mechanics of adding attachments while you are building/writing chunks. |
My writer is trying to produce a chunk for every second of data. If I've written a few messages and now want to write an attachment record, it seems I have to end my chunk, write out all the message indexes, write the attachment, and then start a new chunk.