Increase block count limit 32768 #178

Open

gitownit opened this issue Mar 15, 2023 · 2 comments

@gitownit

This 32768 block limit makes the block size rather large when recovery from small random errors is the main goal, e.g. for the 512-byte sectors of a slightly defective hard drive (non-AF).

Example:
A 200 MB file results in about 32768 blocks of roughly 6000 bytes each, while I would expect 1024 bytes per block. So for the same amount of recovery data, the number of damaged sectors I can repair is divided by six... and I let you imagine the loss with a 2 GB file.
It takes about 7 seconds to create this PAR set with about 10 recovery blocks, and I could easily wait ten times longer to create a more efficient PAR file (on an old Core 2 at about 3.6 GHz).
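For reference, the arithmetic behind those numbers; a minimal sketch, assuming the ~200 MB figure above and PAR2's rule that the block size must be a multiple of 4:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const uint64_t fileSize  = 200ull * 1000 * 1000; // the ~200 MB example above
    const uint64_t maxBlocks = 32768;                // PAR2 input-block ceiling

    // Smallest block size that fits the file into at most 32768 blocks,
    // rounded up to a multiple of 4 as PAR2 requires.
    uint64_t blockSize = (fileSize + maxBlocks - 1) / maxBlocks;
    blockSize = (blockSize + 3) / 4 * 4;

    // Prints 6104: about the "6000 byte" blocks observed, six times the
    // desired 1024-byte granularity.
    printf("minimum block size: %llu bytes\n",
           (unsigned long long)blockSize);
    return 0;
}
```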

So, PLEASE! I suggest increasing this tiny limit, which is no longer justified twenty years later, now that we routinely work with multi-gigabyte files.

I have taken a look at the source and spotted some checks against this 32768, but I wonder whether there is other stuff under the hood to take into account.
I think raising the block count limit to 24 bits (or even 32 bits) is the change to make.

Thanks!

@animetosho
Contributor

PAR2 is spec'd to use 16-bit GF, hence the file format is fundamentally restricted to 32768 input blocks.
To go beyond that, you need a different file format.
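As a sketch of where that number comes from (assuming the spec's rule that each input block gets a base constant 2^n of full multiplicative order in GF(2^16), i.e. gcd(n, 65535) = 1): the multiplicative group has 65535 = 3 × 5 × 17 × 257 elements, and exactly φ(65535) = 32768 exponents qualify. A quick check:

```cpp
#include <cstdio>
#include <numeric> // std::gcd (C++17)

int main() {
    // GF(2^16)* has 65535 = 3 * 5 * 17 * 257 elements. Count the exponents n
    // for which 2^n generates the whole group, i.e. gcd(n, 65535) == 1 --
    // these are the usable per-block base constants.
    int valid = 0;
    for (int n = 1; n < 65535; ++n)
        if (std::gcd(n, 65535) == 1)
            ++valid;
    printf("valid base constants: %d\n", valid); // prints 32768
    return 0;
}
```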

Note that the block size doesn't have to match the underlying sector size - blocks can be larger and still work fine.
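For example, par2cmdline's -s option sets the block size directly (the filename here is hypothetical); a size that is a multiple of 512 keeps blocks sector-aligned while staying under the cap:

```
# 6144 = 12 * 512, so a 200 MB file needs about 32600 blocks, just under 32768
par2 create -s6144 -r10 disk-image.bin
```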

There's a draft PAR3 specification which currently supports arbitrarily sized GF, which enables more than 32768 input blocks.

@gitownit
Author

Okay! So it seems we are stuck with this limit in PAR2, and nothing can be done without rewriting it from scratch.
I was not aware of this PAR3 project under construction; thanks for mentioning it, I will follow and test it.

A bigger block size also works, yes, but it wastes bytes that aren't needed just to repair a 512-byte sector, and for a given PAR2 file size it also reduces the number of recovery blocks.

But WARNING! The case I'm speaking about cannot be taken as typical; anything can happen with a defective drive, from a few tiny unreadable sectors, to a very large damaged surface, to the drive not being detected at all anymore...
I just want to give myself a small chance of recovery, without using too much space.
