fix(az): correctly remove nested file structures #449

Closed · wants to merge 3 commits
HISTORY.md (1 change: 1 addition, 0 deletions)
```diff
@@ -12,6 +12,7 @@
 ## v0.18.1 (2024-02-26)
 
 - Fixed import error due to incompatible `google-cloud-storage` by not using `transfer_manager` if it is not available. ([Issue #408](https://github.com/drivendataorg/cloudpathlib/issues/408), [PR #410](https://github.com/drivendataorg/cloudpathlib/pull/410))
+- fix(az): correctly remove nested file structures ([Issue #448](https://github.com/drivendataorg/cloudpathlib/issues/448), [PR #449](https://github.com/drivendataorg/cloudpathlib/pull/449))
 
 Includes all changes from v0.18.0.
```
cloudpathlib/azure/azblobclient.py (36 changes: 22 additions, 14 deletions)
```diff
@@ -250,24 +250,32 @@ def _move_file(
 
         return dst
 
-    def _remove(self, cloud_path: AzureBlobPath, missing_ok: bool = True) -> None:
+    def _remove(self, cloud_path: AzureBlobPath, missing_ok: bool = True) -> None:  # type: ignore
+        container_client = self.service_client.get_container_client(cloud_path.container)
         file_or_dir = self._is_file_or_dir(cloud_path)
+
+        if not file_or_dir:
+            if missing_ok:
+                return
+
+            raise FileNotFoundError(f"File does not exist: {cloud_path}")
+
         if file_or_dir == "dir":
-            blobs = [
-                b.blob for b, is_dir in self._list_dir(cloud_path, recursive=True) if not is_dir
-            ]
-            container_client = self.service_client.get_container_client(cloud_path.container)
-            container_client.delete_blobs(*blobs)
-        elif file_or_dir == "file":
-            blob = self.service_client.get_blob_client(
-                container=cloud_path.container, blob=cloud_path.blob
-            )
-            blob.delete_blob()
-        else:
-            # Does not exist
-            if not missing_ok:
-                raise FileNotFoundError(f"File does not exist: {cloud_path}")
+            blobs = [(blob, is_dir) for blob, is_dir in self._list_dir(cloud_path, recursive=True)]
+
+            # need to delete files first to allow deleting the folders
+            files = [blob.blob for blob, is_dir in blobs if not is_dir]
+            container_client.delete_blobs(*files)
+
+            # folders need to be deleted from the deepest to the shallowest
+            folders = sorted(
+                (blob.blob for blob, is_dir in blobs if is_dir and blob.exists()), reverse=True
+            )
+            container_client.delete_blobs(*folders)
+
+        # delete the cloud_path itself
+        if cloud_path.exists():
+            container_client.delete_blob(cloud_path.blob)
 
     def _upload_file(
         self, local_path: Union[str, os.PathLike], cloud_path: AzureBlobPath
```
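Two ordering constraints make the rewritten directory branch work: files are deleted before folders, and folders are deleted deepest-first. A tiny standalone sketch of why reverse-lexicographic sorting yields that order (plain strings and hypothetical names, not cloudpathlib's types):

```python
# Sketch only: (name, is_dir) pairs as _list_dir might report them.
blobs = [
    ("dir", True),
    ("dir/sub", True),
    ("dir/sub/a.txt", False),
    ("dir/b.txt", False),
]

# 1) Files first, so every folder is empty by the time its turn comes.
files = [name for name, is_dir in blobs if not is_dir]

# 2) Folders deepest-first: a parent is always a proper prefix of its
#    child, so the child sorts after it; reverse=True flips that.
folders = sorted((name for name, is_dir in blobs if is_dir), reverse=True)

print(files)    # ['dir/sub/a.txt', 'dir/b.txt']
print(folders)  # ['dir/sub', 'dir']
```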
Review discussion (inline comments on the `folders = sorted(...)` addition):

**Member:**

I'm a little worried about perf since `.exists` adds a network call. This means that for large file trees (broad or deep) we potentially add a lot of calls for "fake" folders that don't actually exist on storage but that `_list_dir` returns to act like a file system.

I believe that only explicitly created folders on blob storage (or maybe ones created with certain parameters set) would still stick around without specific removal. To create these, you need to be using hierarchical namespaces/a Data Lake Gen2 Storage Account.

I think the fix may instead be to use an Azure SDK API that lists all explicit blobs (files or folders), rather than `_list_dir`, when determining what to remove. I'm not sure if `list_blobs` does that under accounts with hierarchical namespaces, so I think we'd need to test that first to see if it works for your use cases.

Do you think you could dig in and see if those settings/accounts can repro the issue for you?

**Author:**

> I'm a little worried about perf since `.exists` adds a network call. [...]

I don't think the call to `exists` here is strictly necessary, but I might be misremembering.

> I believe that only explicitly created folders on blob storage [...] would still stick around without specific removal. [...]

Hmm, on further investigation, it seems the "problematic" accounts are using hierarchical namespaces. The directories seem to be "sticky" even without being explicitly created, which is a bit surprising.

**Member:**

That's very helpful to know, thanks. I'd love to support that feature of blob storage, so I'll set one up on our test account and see how it works. We may need more fixes than this one to make cloudpathlib play nice with hierarchical-namespace-enabled storage accounts.
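For reference, one way the suggested approach could be tested: gather everything under a prefix in a single `list_blobs` call and inspect blob metadata, instead of issuing a per-folder `exists()` round trip. The helper below is a sketch, not part of the PR; the assumption that hierarchical-namespace accounts surface directories as zero-length blobs with `hdi_isfolder` metadata is exactly what the thread says still needs verifying.

```python
from azure.storage.blob import ContainerClient


def explicit_paths_under(container_client: ContainerClient, prefix: str) -> list:
    """Sketch: list every blob that actually exists under `prefix`.

    Assumption to verify (per the review thread): on hierarchical-namespace
    (ADLS Gen2) accounts, directories appear in list_blobs results as
    zero-length blobs with metadata {"hdi_isfolder": "true"}, so no
    per-path exists() calls would be needed.
    """
    names = [
        blob.name
        for blob in container_client.list_blobs(name_starts_with=prefix, include=["metadata"])
    ]
    # Reverse-lexicographic order puts children before their parents, so
    # every folder is already empty by the time it comes up for deletion.
    return sorted(names, reverse=True)


# Usage sketch (deleting one at a time to preserve the ordering):
# for name in explicit_paths_under(container_client, "some/dir/"):
#     container_client.delete_blob(name)
```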
tests/mock_clients/mock_azureblob.py (4 changes: 4 additions, 0 deletions)
```diff
@@ -148,6 +148,10 @@ def exists(self):
     def list_blobs(self, name_starts_with=None):
         return mock_item_paged(self.root, name_starts_with)
 
+    def delete_blob(self, blob):
+        (self.root / blob).unlink()
+        delete_empty_parents_up_to_root(path=self.root / blob, root=self.root)
+
     def delete_blobs(self, *blobs):
         for blob in blobs:
             (self.root / blob).unlink()
```
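The `delete_empty_parents_up_to_root` helper comes from the mock clients' shared utilities and is not shown in this diff. A minimal sketch of the behavior the new mock method relies on (an assumed implementation, not copied from the repo):

```python
from pathlib import Path


def delete_empty_parents_up_to_root(path: Path, root: Path) -> None:
    # Walk upward from the deleted file, pruning directories that are now
    # empty; stop at the mock container root or the first non-empty parent.
    for parent in path.parents:
        if parent == root:
            return
        if any(parent.iterdir()):
            return
        parent.rmdir()
```

This mirrors what Azure hierarchical-namespace accounts apparently do in reverse: on the local filesystem backing the mock, empty parent directories would otherwise linger after a blob is unlinked, just like the "sticky" directories described in the review thread.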