
Memory layout optimized TIFF reader (WIP) #120

Draft: XanthosXanthopoulos wants to merge 10 commits into main
Conversation

XanthosXanthopoulos (Contributor)

No description provided.

@ktsitsi ktsitsi (Collaborator) left a comment

Generally LGTM, just some minor things.

@pytest.mark.parametrize("chunked,max_workers", [(False, 0), (True, 0), (True, 4)])
@pytest.mark.parametrize(
"compressor",
[tiledb.ZstdFilter(level=0)],
Collaborator:

What about other compressors?

from tiledb.cc import WebpInputFormat


@pytest.mark.parametrize("filename,num_series", [("UTM2GTIF.tiff", 1)])
Collaborator:

What about other test files?

tmp_path, filename, num_series, preserve_axes, chunked, max_workers, compressor
):
if isinstance(compressor, tiledb.WebpFilter) and filename == "UTM2GTIF.tiff":
        pytest.skip(f"WebPFilter cannot be applied to {filename}")
Collaborator:

Can you comment or document why it cannot be applied?
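One way to document the skip, sketched below: WebP (and hence TileDB's WebpFilter) encodes only 8-bit images with 3 (RGB) or 4 (RGBA) channels, while UTM2GTIF.tiff is a single-band grayscale GeoTIFF. The predicate and its name are illustrative, not the PR's actual code.

```python
# Hedged sketch: why the test skips WebP for UTM2GTIF.tiff. WebP supports
# only 8-bit RGB/RGBA data, so a single-band (grayscale) or high-bit-depth
# image cannot be compressed with a WebP filter. Name is hypothetical.

def webp_applicable(num_channels: int, itemsize: int) -> bool:
    """Return True if WebP compression can encode this image.

    WebP encodes only 8-bit images with 3 (RGB) or 4 (RGBA) channels;
    grayscale (1 channel) or >8-bit data must use another compressor.
    """
    return itemsize == 1 and num_channels in (3, 4)
```

A one-line comment to this effect next to the `pytest.skip` call would answer the reviewer's question for future readers.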

Comment on lines +599 to +612
def tile_to_tiledb_exp(
tile: Tuple[Tuple[slice, ...], NDArray[Any]]
) -> Tuple[np.ndarray, ...]:
idx, data = tile
array_tile = axes_mapper.map_tile(idx)
out_array[array_tile] = axes_mapper.map_array(data)

# return a tuple containing the min-max values of the tile
return np.amin(data, axis=min_max_indices), np.amax(
data, axis=min_max_indices
)
Collaborator:

Let's move tile_to_tiledb_exp and tile_to_tiledb outside this function to make the code cleaner and more modular.
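One way to hoist the closure to module level, per the suggestion, is to pass its captured state explicitly and bind it once with `functools.partial` at the call site. A sketch under the assumption that `axes_mapper`, `out_array`, and `min_max_indices` behave as in the excerpt above:

```python
from functools import partial

import numpy as np


def tile_to_tiledb_exp(tile, *, axes_mapper, out_array, min_max_indices):
    """Write one decoded tile into out_array and return its (min, max).

    Module-level version of the closure: the state it previously captured
    (axes_mapper, out_array, min_max_indices) is now passed explicitly,
    which makes the function testable in isolation.
    """
    idx, data = tile
    array_tile = axes_mapper.map_tile(idx)
    out_array[array_tile] = axes_mapper.map_array(data)
    # return a tuple containing the min-max values of the tile
    return np.amin(data, axis=min_max_indices), np.amax(data, axis=min_max_indices)


# At the original call site, bind the shared state once:
# writer = partial(tile_to_tiledb_exp, axes_mapper=axes_mapper,
#                  out_array=out_array, min_max_indices=min_max_indices)
```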

keyframe.decode # init TiffPage.decode function under lock

decodeargs: Dict[str, Any] = {"_fullsize": bool(False)}
if keyframe.compression in {6, 7, 34892, 33007}: # JPEG
Collaborator:

Let's create constants instead of magic numbers. I get that we mirror the internals of tifffile, but let's make them more descriptive so we can edit them more easily in the future.
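The codes 6, 7, 34892, and 33007 are the JPEG-family compression tags from the TIFF specification and tifffile's internals; one possible naming, as the reviewer suggests (the particular identifiers below are suggestions, not the PR's code):

```python
# Illustrative named constants for the JPEG-family TIFF compression codes.
# Values are from the TIFF specification / tifffile internals; the names
# are hypothetical suggestions.
COMPRESSION_OLD_JPEG = 6        # TIFF 6.0 "old-style" JPEG
COMPRESSION_JPEG = 7            # "new-style" JPEG (TIFF Technical Note 2)
COMPRESSION_ALT_JPEG = 33007    # vendor variant decoded as JPEG by tifffile
COMPRESSION_LOSSY_JPEG = 34892  # DNG lossy JPEG

JPEG_COMPRESSIONS = frozenset({
    COMPRESSION_OLD_JPEG,
    COMPRESSION_JPEG,
    COMPRESSION_ALT_JPEG,
    COMPRESSION_LOSSY_JPEG,
})

# The check then reads:
# if keyframe.compression in JPEG_COMPRESSIONS:
```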

Comment on lines +233 to +239
"color": (
get_rgba(channel["Color"])
if "Color" in channel
else next(color_generator)
),
Collaborator:

For future reference, let's update linter versions in standalone PRs. It will make code review easier.

# Inflate the chunk size to account for copies during transformations, and while decoding
size = (
self.level_dtype(level=level).itemsize
* 2
Collaborator:

Define magic number 2, or comment it.
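Per the excerpt's own comment, the factor 2 accounts for transient copies made while decoding and transforming a chunk, so it could be named along these lines (constant and function names here are hypothetical, not the PR's identifiers):

```python
# Hypothetical naming sketch: decoding plus the axis transformations each
# hold a temporary copy of the chunk, roughly doubling peak memory use.
TRANSFORM_COPY_FACTOR = 2  # one extra transient copy per chunk


def inflated_chunk_size(itemsize: int, num_elements: int) -> int:
    """Estimated peak bytes for one chunk, including transient copies."""
    return itemsize * num_elements * TRANSFORM_COPY_FACTOR
```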

@@ -158,6 +158,20 @@ def original_metadata(self) -> Dict[str, Any]:

return metadata

def iter_mem_contig_tiles(
Collaborator:

Add function docstrings. Explain what the contiguous tiles are, for future reference.
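To illustrate what such a docstring could say: "memory-contiguous" tiles are tiles whose byte ranges are adjacent in the file, so a whole run of them can be fetched with one sequential read instead of many small ones. A minimal standalone sketch of that grouping logic (not the PR's implementation; names are illustrative):

```python
def iter_contiguous_runs(offsets, bytecounts):
    """Group tiles into runs that are contiguous in the file.

    Two tiles are memory-contiguous when the first ends exactly where the
    second begins (offsets[i] + bytecounts[i] == offsets[i + 1]). Each
    yielded run (start_offset, total_bytes, num_tiles) can be satisfied
    by a single sequential read instead of num_tiles separate reads.
    """
    if not offsets:
        return
    start, total, count = offsets[0], bytecounts[0], 1
    for off, nbytes in zip(offsets[1:], bytecounts[1:]):
        if start + total == off:
            # tile starts where the current run ends: extend the run
            total += nbytes
            count += 1
        else:
            yield start, total, count
            start, total, count = off, nbytes, 1
    yield start, total, count
```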
