
Streams larger than uint.MaxValue cause broken zip files #209

Closed
kenkendk opened this issue Mar 9, 2017 · 15 comments

Comments

@kenkendk
Contributor

kenkendk commented Mar 9, 2017

I have tracked down an issue causing failures when attempting to read zip files with SharpCompress (files created AND read by SharpCompress).

The error message is Unknown header {value}, where {value} is some random bytes from the file. This is similar to issue #33, but that issue reports it for a much smaller file (I have tested the file mentioned there, and it does not appear to cause any errors).

The problem is that the Central Directory Entry stores the Size and HeaderOffset as uint values.
SharpCompress does not check whether this limit is exceeded, so creating the archive succeeds, but reading it back later fails. In the example below this is triggered with a single 4GB file, but it can also be achieved with many smaller files, as long as the HeaderOffset value becomes larger than 2^32.
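To make the limit concrete, here is a minimal sketch in Python (used purely for illustration; the actual SharpCompress code is C#) packing the three affected central directory fields as unsigned 32-bit integers:

```python
import struct

# In a classic (non-zip64) central directory entry, compressed size,
# uncompressed size and the local header offset are each stored as an
# unsigned 32-bit little-endian integer ("<L").
def pack_sizes(compressed, uncompressed, offset):
    return struct.pack("<LLL", compressed, uncompressed, offset)

pack_sizes(100, 100, 0)          # fits: 12 bytes
try:
    pack_sizes(2**32, 2**32, 0)  # 4 GB does not fit in 32 bits
except struct.error as e:
    print("overflow:", e)
```

Anything at or above 2^32 simply cannot be represented in these fields, which is why zip64 adds a separate 64-bit record.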

There is another issue in that the number of files is limited to ushort, but this appears to have no effect other than reporting the wrong number of files, which is not directly exposed.

A workaround for reading such a file is to use the forward-only interface, which does not read the Central Directory Entry; this reads the file contents correctly unless a single file is larger than 2^32 bytes.
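The forward-only approach can be sketched as follows, in Python for illustration (SharpCompress's ZipReader does the equivalent in C#): walk the local file headers (PK\x03\x04) sequentially and never consult the central directory.

```python
import io
import struct
import zipfile

# Build a small stored (uncompressed) zip in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as z:
    z.writestr("a.txt", b"hello")
    z.writestr("b.txt", b"world!")
data = buf.getvalue()

# Forward-only scan: follow local file headers, skipping each
# entry's name, extra field and data to reach the next header.
pos, names = 0, []
while data[pos:pos + 4] == b"PK\x03\x04":
    # csize/usize sit at offset 18, name/extra lengths at 26/28
    csize, usize, nlen, xlen = struct.unpack_from("<LLHH", data, pos + 18)
    names.append(data[pos + 30:pos + 30 + nlen].decode())
    pos += 30 + nlen + xlen + csize
print(names)  # ['a.txt', 'b.txt']
```

The scan stops at the first non-local-header signature, i.e. where the central directory begins.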

I can think of two solutions:

  1. Prevent creating files with offsets larger than 2^32, simply throwing an exception if this is detected.

  2. Support zip64, which sets the size and offset fields to 0xffffffff and stores the real 64-bit values in the extended information field.

I think support for zip64 is the better choice here. Reading support for zip64 has already been added, but it requires that the correct zip64 records are written.
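For reference, the zip64 "extended information" record has a simple layout; a Python sketch of it (based on the zip specification, not on SharpCompress code, and simplified to always emit all three values, where the spec only includes the fields whose 32-bit counterpart is 0xffffffff):

```python
import struct

ZIP64_EXTRA_ID = 0x0001  # header ID of the zip64 extended information field

def zip64_extra(uncompressed, compressed, offset):
    # The real 64-bit values, in the order fixed by the spec:
    # original size, compressed size, local header offset.
    data = struct.pack("<QQQ", uncompressed, compressed, offset)
    # Extra field framing: 2-byte header ID, 2-byte data size, data.
    return struct.pack("<HH", ZIP64_EXTRA_ID, len(data)) + data

extra = zip64_extra(5 * 2**30, 5 * 2**30, 0)  # e.g. a 5 GB stored entry
```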

I will see if I can make a PR that adds zip64 support.

Example code that reproduces the issue:

using System;
using System.IO;
using System.Linq;
using SharpCompress.Archives;
using SharpCompress.Common;
using SharpCompress.Readers;
using SharpCompress.Readers.Zip;
using SharpCompress.Writers;
using SharpCompress.Writers.Zip;

namespace BreakTester
{
	class MainClass
	{
		public static void Main(string[] args)
		{
			// Set up the parameters
			var files = 1;
			var filesize = 4 * 1024 * 1024 * 1024L;
			var write_chunk_size = 1024 * 1024;

			var filename = "broken.zip";
			if (!File.Exists(filename))
				CreateBrokenZip(filename, files, filesize, write_chunk_size);

			var res = ReadBrokenZip(filename);
			if (res.Item1 != files)
				throw new Exception($"Incorrect number of items reported: {res.Item1}, should have been {files}");
			if (res.Item2 != files * filesize)
				throw new Exception($"Incorrect total size reported: {res.Item2}, should have been {files * filesize}");
		}

		public static void CreateBrokenZip(string filename, long files, long filesize, long chunksize)
		{
			var data = new byte[chunksize];

			// We use the store method to make sure our data is not compressed,
			// and thus it is easier to make large files
			var opts = new ZipWriterOptions(CompressionType.None) { };

			using (var zip = File.OpenWrite(filename))
			using (var zipWriter = (ZipWriter)WriterFactory.Open(zip, ArchiveType.Zip, opts))
			{

				for (var i = 0; i < files; i++)
					using (var str = zipWriter.WriteToStream(i.ToString(), new ZipWriterEntryOptions() { DeflateCompressionLevel = SharpCompress.Compressors.Deflate.CompressionLevel.None }))
					{
						var left = filesize;
						while (left > 0)
						{
							var b = (int)Math.Min(left, data.Length);
							str.Write(data, 0, b);
							left -= b;
						}
					}
			}
		}

		public static Tuple<long, long> ReadBrokenZip(string filename)
		{
			using (var archive = ArchiveFactory.Open(filename))
			{
				return new Tuple<long, long>(
					archive.Entries.Count(),
					archive.Entries.Select(x => x.Size).Sum()
				);
			}
		}

		public static Tuple<long, long> ReadBrokenZipForwardOnly(string filename)
		{
			long count = 0;
			long size = 0;
			using (var fs = File.OpenRead(filename))
			using (var rd = ZipReader.Open(fs, new ReaderOptions() { LookForHeader = false }))
				while (rd.MoveToNextEntry())
				{
					count++;
					size += rd.Entry.Size;
				}

			return new Tuple<long, long>(count, size);
		}
	}
}
@adamhathcock
Owner

Thanks for this.

If you could implement zip64 writing, that would be great! I imagine both of the solutions you mentioned will be required (unless switching from zip to zip64 can happen automatically).

I have to confess I haven't looked too deeply at the zip64 format at the moment.

@kenkendk
Contributor Author

kenkendk commented Mar 9, 2017

Urgh, I just found an error from this commit:
6be6ef0

The values here are changed:

internal long CompressedSize { get; set; }

But the write uses the type here:

So now it writes invalid local headers, as they have 8-byte sizes instead of 4-byte sizes...

@kenkendk
Contributor Author

kenkendk commented Mar 9, 2017

Urgh, and in the central directory too!

@adamhathcock
Copy link
Owner

Wow, that is a bad error. The writing code needs to know whether the entry is zip64 and write the correct field size.

I guess zip64 writing needs to be done ASAP.

@kenkendk
Contributor Author

kenkendk commented Mar 9, 2017

zip64 keeps the size fields, but sets them to 0xffffffff and then writes an "extra" record containing the sizes as 64-bit values.

@kenkendk
Contributor Author

kenkendk commented Mar 9, 2017

Thanks for the quick merge and update.

I will investigate some more, because there must be some mitigating factor; otherwise the filenames would be garbled in all produced zip files, which would break my unit tests, so I would have caught it there.

@adamhathcock
Owner

If you've got some tests that can be added to SharpCompress, I would appreciate it. Thanks for finding this.

Fixed by #210

@kenkendk
Contributor Author

kenkendk commented Mar 9, 2017

This issue is not fixed; #210 addressed a problem with writing invalid headers.
There is still an issue with creating files larger than 4GB.

@kenkendk
Contributor Author

kenkendk commented Mar 9, 2017

Phew, found the mitigating factor:

outputStream.Write(DataConverter.LittleEndian.GetBytes(Compressed), 0, 4); // compressed file size

This is still wrong, as it converts the value to 8 bytes and then writes only the first 4. Fortunately, little-endian encoding puts the low-order bytes first, so the value is still written correctly, as long as it fits in 32 bits.
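That mitigation is easy to verify; in Python (illustration only), the first 4 bytes of a little-endian 64-bit encoding match the 32-bit encoding exactly while the value fits, and silently become the value mod 2^32 when it does not:

```python
import struct

v = 123_456_789
# Low-order bytes come first in little endian, so truncating the
# 8-byte encoding to 4 bytes is harmless while the value fits.
assert struct.pack("<Q", v)[:4] == struct.pack("<L", v)

# Past 2**32 the same truncation silently drops the high bytes.
big = 2**32 + 5
assert struct.pack("<Q", big)[:4] == struct.pack("<L", 5)
```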

@kenkendk
Contributor Author

kenkendk commented Mar 9, 2017

Nope, wrong again; the entry actually used is:

internal uint Compressed { get; set; }

which already has the uint type. I do not know where the other serialization methods are called.

@adamhathcock
Owner

You can't zip files larger than 4 GB. The Zip64 support is currently read-only.

@kenkendk
Contributor Author

kenkendk commented Mar 9, 2017

Sure you can, try the example code in this issue.

@adamhathcock
Owner

If it works, then it's unintentional and you're finding the errors because of it.

@kenkendk
Contributor Author

kenkendk commented Mar 9, 2017

Yes, that is why I opened the issue. You can create a zip file larger than 4GB without errors, and you only see the errors when you try to read the file back.

I would expect an error when crossing the 4GB limit; otherwise I will not discover the problem until I attempt to read the file.
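For comparison, Python's standard zipfile module takes the fail-fast route when zip64 is disallowed: the writer raises instead of producing a broken archive. A small sketch of that behaviour (stdlib, not SharpCompress; force_zip64 is used here only as a cheap way to trigger the check without actually writing 4 GB of data):

```python
import io
import zipfile

buf = io.BytesIO()
err = None
with zipfile.ZipFile(buf, "w", allowZip64=False) as z:
    try:
        # Requesting 64-bit size fields conflicts with allowZip64=False,
        # so the writer refuses before any data is written.
        with z.open("big.bin", "w", force_zip64=True):
            pass
    except ValueError as e:
        err = e
print("refused:", err)
```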

@adamhathcock
Owner

Okay, sorry. I thought this was a discussion of the continuation of the writing error.

Yes, something should be done to detect files that are too large for plain zip, and/or zip64 writing should be implemented.
