zeroize: use asm! to improve performance #841

Open
wants to merge 1 commit into base: master
Conversation

@cbeck88 commented Feb 28, 2023

The purpose of the change is to make calls to `x.as_mut_slice().zeroize()` considerably faster, particularly for types like `[u8; n]`. We take @sopium's proposed code from #743 without significant changes.

The reason it becomes faster is that the call to `volatile_set` before this change appears not to be easily optimizable, and (for example) leads to setting bytes one at a time, instead of letting the compiler consolidate the writes into SIMD instructions.

In the modified code, we don't use `volatile_set`; instead we loop over the slice, setting each element to `Default::default()`, and to ensure that the writes are not optimized out, we use an empty `asm!` block. (The correct asm options to use are discussed in the issue.)

Because the asm block potentially reads from the pointer and makes a syscall of some kind, the compiler cannot optimize out the zeroizing without risking a change in observable side-effects. In the improved code, we create such an optimization barrier only once, rather than after each byte that is written.

The call to `atomic_fence()` is not changed.
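
As a rough sketch of the shape of this approach (illustrative only; `zeroize_slice_sketch` is a made-up name, and the real implementation is in the diff under review):

```rust
use core::sync::atomic::{fence, Ordering};

/// Illustrative sketch of the approach described above (not the exact code in
/// this PR): zero the slice with ordinary writes, then place a single empty
/// `asm!` optimization barrier after the loop so the writes cannot be elided.
fn zeroize_slice_sketch<Z: Default>(slice: &mut [Z]) {
    // Plain writes: LLVM is free to coalesce these into SIMD stores.
    for elem in slice.iter_mut() {
        *elem = Z::default();
    }

    // Optimization barrier: the asm block takes the pointer as an input, so
    // the compiler must assume the freshly written memory may be observed.
    // (On targets where `asm!` is unstable this would need gating; see the
    // review comments below.)
    unsafe {
        core::arch::asm!(
            "/* {ptr} */",
            ptr = in(reg) slice.as_mut_ptr(),
            options(nostack, readonly, preserves_flags),
        );
    }

    // Ordering fence, analogous to the crate's existing `atomic_fence()` call.
    fence(Ordering::SeqCst);
}
```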


This change may give users a way to improve performance if they have to zeroize very large arrays or frequently have to zeroize many small objects. We tested code-gen in godbolt (in addition to the tests posted in the GitHub issue) and found that this change is typically enough for LLVM to start applying SIMD optimizations that zero many bytes at once.

@tarcieri (Member)

This should probably be feature gated to avoid a massive MSRV bump and disrupting existing users. That was very problematic the last time we bumped MSRV to add const generic support.

And if all the inline ASM is doing is providing an optimization barrier, it seems like core::hint::black_box could be used instead.
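
For context, the `core::hint::black_box` alternative being floated here would look roughly like this (a sketch only, not code from this PR; as the following comments note, `black_box`'s documentation raises doubts about relying on it for this purpose):

```rust
use core::hint::black_box;

/// Hypothetical `black_box` variant of the barrier (not adopted by this PR).
/// `black_box` only *hints* that the value is used in an opaque way; it does
/// not guarantee that the preceding writes survive optimization, which is the
/// concern raised in the next comment.
fn zeroize_slice_black_box<Z: Default>(slice: &mut [Z]) {
    for elem in slice.iter_mut() {
        *elem = Z::default();
    }
    // Pass the pointer through black_box so the compiler treats it as escaping.
    black_box(slice.as_mut_ptr());
}
```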

@cbeck88 (Author) commented Mar 1, 2023

I'm a little worried about this part of the documentation for core::hint::black_box: https://doc.rust-lang.org/beta/core/hint/fn.black_box.html#when-is-this-useful

Maybe we should just use the asm and feature gate it?

@tarcieri (Member) commented Mar 1, 2023

Oh wow, definitely an important detail about core::hint::black_box!

An ASM optimization barrier seems good then, although let me run this implementation by a few people.

It would definitely still be good to feature gate it in order to preserve MSRV.

@tarcieri (Member) commented Mar 1, 2023

Hmm, when did that warning get added?

It doesn't appear on the current stable docs. Is it new?

https://doc.rust-lang.org/stable/std/hint/fn.black_box.html

@cbeck88 (Author) commented Mar 1, 2023

Maybe I looked up the wrong docs; my URL says "beta". Maybe they removed that later.

@cbeck88 (Author) commented Mar 1, 2023

Maybe we can look up the implementation; if it's very similar to @sopium's suggested barrier then maybe it's fine.

@tarcieri (Member) commented Mar 1, 2023

This is the optimization barrier @chandlerc recommended (C++ version, similar idea): https://compiler-explorer.com/z/bh9WzvTPq

@tarcieri (Member) commented Mar 1, 2023

> Maybe I looked up the wrong docs; my URL says "beta". Maybe they removed that later.

To me that implies those docs were recently added. I guess we'll see what happens in the next release.

Comment on lines +478 to +482
core::arch::asm!(
"/* {ptr} */",
ptr = in(reg) self.as_mut_ptr(),
options(nostack, readonly, preserves_flags),
);
Member

asm! is only stable for x86/x86-64, ARM/AArch64, and RISC-V, so its usage needs to be gated for those platforms
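
One way such gating might look, sketched under the assumption that the barrier lives in a helper function and that the opt-in cargo feature is called `asm` (both are hypothetical here; the feature-gating idea is picked up again below):

```rust
// Sketch only: gate the asm barrier to architectures where `asm!` is stable,
// behind a hypothetical `asm` cargo feature; other targets would keep the
// existing volatile-write path.
#[cfg(all(
    feature = "asm", // hypothetical opt-in feature, not defined by this PR
    any(
        target_arch = "x86",
        target_arch = "x86_64",
        target_arch = "arm",
        target_arch = "aarch64",
        target_arch = "riscv32",
        target_arch = "riscv64",
    ),
))]
fn optimization_barrier<T>(ptr: *mut T) {
    // Empty asm block taking the pointer as input, as in the diff above.
    unsafe {
        core::arch::asm!(
            "/* {ptr} */",
            ptr = in(reg) ptr,
            options(nostack, readonly, preserves_flags),
        );
    }
}
```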

Contributor

- No FFI or inline assembly! **WASM friendly** (and tested)!

This guarantee needs to be updated if this change is merged.

Member

Elsewhere we feature-gate asm, so we could perhaps maintain that guarantee so long as the `asm` feature is off.

@tarcieri changed the title from "improve performance for slice zeroization (issue #743)" to "zeroize: use asm! to improve performance" on Mar 4, 2023
@elichai (Contributor) commented May 18, 2023
