Follow up to #19079, which made test names fully qualified.
This fixes tests that had now-redundant information in their names. For example, here's a fully qualified test name before the changes in this commit:
"priority_queue.test.std.PriorityQueue: shrinkAndFree"
and the same test's name after the changes in this commit:
"priority_queue.test.shrinkAndFree"
Use inline to vastly simplify the exposed API. This allows a
comptime-known endian parameter to be propagated, making extra functions
for a specific endianness completely unnecessary.
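As a rough sketch of the idea (this is not the literal std code; the helper name and the lowercase `.little`/`.big` tags assume a recent Zig):

```zig
const std = @import("std");
const native_endian = @import("builtin").cpu.arch.endian();

// Illustrative only: with `inline`, the endian argument is comptime-known
// at each call site, so the byte swap is resolved at compile time and
// separate little-/big-endian entry points become unnecessary.
pub inline fn readU32(bytes: *const [4]u8, endian: std.builtin.Endian) u32 {
    const value: u32 = @bitCast(bytes.*);
    return if (endian == native_endian) value else @byteSwap(value);
}

test "one function, both endiannesses" {
    const bytes = [_]u8{ 1, 0, 0, 0 };
    try std.testing.expectEqual(@as(u32, 1), readU32(&bytes, .little));
    try std.testing.expectEqual(@as(u32, 0x01000000), readU32(&bytes, .big));
}
```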
* 128-bit integer multiplication with overflow
* more instruction encodings used by std inline asm
* implement the `try_ptr` air instruction
* follow correct stack frame abi
* enable full panic handler
* enable stack traces
This reverts commit 0c99ba1eab, reversing
changes made to 5f92b070bf.
This caused a CI failure when it landed in the master branch due to a
128-bit `@byteSwap` in std.mem.
* Consistent decryption tail for all AEADs
* Remove outdated note
This was previously copied here from another function. There used
to be another comment on the tag verification linking to issue #1776,
but that one was not copied over. As it stands, this note seems fairly
misleading/irrelevant.
* Prettier docs
* Add note about plaintext contents to docs
* Capitalization
* Fixup missing XChaChaPoly docs
* Small documentation fix of ChaCha variants
The previous documentation was seemingly copy-pasted, leaving behind
some errors where the number of rounds was not properly updated.
* Suggest `std.crypto.utils.secureZero` on `@memset` docs
* Revert previous change
Most of this migration was performed automatically with `zig fmt`. There
were a few exceptions which I had to manually fix:
* `@alignCast` and `@addrSpaceCast` cannot be automatically rewritten
* `@truncate`'s fixup is incorrect for vectors
* Test cases are not formatted, and their error locations change
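For reference, here is a hedged before/after sketch of the kind of rewrite involved (illustrative, not taken from the diff itself):

```zig
const std = @import("std");

// Old form: the destination type was the first argument, e.g.
//     const small = @truncate(u8, wide);
// New form: the destination type comes from the result location.
test "result-located @truncate" {
    const wide: u32 = 0x1234;
    const small: u8 = @truncate(wide); // keeps the low 8 bits
    try std.testing.expectEqual(@as(u8, 0x34), small);
}
```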
Support for 64-bit counters was a hack built upon the version with
a 32-bit counter, that emulated a larger counter by splitting the
input into large blocks.
This is fragile, particularly if the initial counter is set to
a non-default value and if we have parallelism.
Instead, simply add a comptime parameter selecting a 32-bit or a
64-bit counter.
Also convert a couple of while() loops to for(), and change @panic()
to @compileError().
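A minimal sketch of what the comptime counter parameter looks like (not the actual std.crypto.chacha20 code; the names are made up):

```zig
const std = @import("std");

fn CounterState(comptime counter_bits: usize) type {
    if (counter_bits != 32 and counter_bits != 64)
        @compileError("ChaCha-style counters are either 32 or 64 bits");
    return struct {
        const Counter = std.meta.Int(.unsigned, counter_bits);
        value: Counter = 0,

        // Writes the counter into the state words and advances it by one block.
        fn nextBlock(self: *@This(), state: *[16]u32) void {
            const c: u64 = self.value;
            state[12] = @truncate(c);
            if (counter_bits == 64) state[13] = @truncate(c >> 32);
            self.value +%= 1;
        }
    };
}

test "32-bit and 64-bit counters share one code path" {
    var ctr = CounterState(64){};
    var state = [_]u32{0} ** 16;
    ctr.nextBlock(&state);
    ctr.nextBlock(&state);
    try std.testing.expectEqual(@as(u32, 1), state[12]);
}
```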
* std.crypto.chacha: support larger vectors on AVX2 and AVX512 targets
Ryzen 7 7700, ChaCha20/8 stream, long outputs:
Generic: 3268 MiB/s
AVX2 : 6023 MiB/s
AVX512 : 8086 MiB/s
Bump the rand.chacha buffer a tiny bit to take advantage of this.
More than 8 blocks doesn't seem to make any measurable difference.
ChaChaPoly also gets a small performance boost from this, though
Poly1305 remains the bottleneck.
Generic: 707 MiB/s
AVX2 : 981 MiB/s
AVX512 : 1202 MiB/s
aarch64 appears to generally benefit from 4-way vectorization.
Verified on Apple Silicon, but also on a Cortex A72.
These are great permutations, and there's nothing wrong with them
from a practical security perspective.
However, both were competing in the NIST lightweight crypto
competition.
Gimli didn't pass the 3rd selection round, and is not much used
in the wild besides Zig and libhydrogen. It will never be
standardized and is unlikely to get more traction in the future.
Xoodyak, the scheme that Xoodoo is the permutation of, was a finalist.
It has a lot of advantages and *might* be standardized even without NIST.
But it is too early to tell, and too risky to commit to it
in a standard library.
For lightweight crypto, Ascon is the one that we know NIST will
standardize and that we can safely rely on from a usage perspective.
Switch to a traditional ChaCha-based CSPRNG, with an Ascon-based one
as an option for constrained systems.
Also add an RNG benchmark along the way.
Gimli and Xoodoo served us well. Their code will be maintained,
but outside the standard library.
We already have a LICENSE file that covers the Zig Standard Library. We
no longer need to remind everyone that the license is MIT in every single
file.
Previously this was introduced to clarify the situation for a fork of
Zig that made Zig's LICENSE file harder to find, and replaced it with
their own license that required annual payments to their company.
However that fork now appears to be dead. So there is no need to
reinforce the copyright notice in every single file.
std/crypto: use finer-grained error sets in function signatures
Returning the `crypto.Error` error set for all crypto operations
was very convenient to ensure that errors were used consistently,
and to avoid having multiple error names for the same thing.
The flipside is that callers were forced to always handle all
possible errors, even those that could never be returned by a
function.
This PR makes all functions return unions of only the errors they can
actually return.
The error sets themselves are all limited to a single error.
Larger sets are useful for platform-specific APIs, but we don't have
any of these in `std/crypto`, and I couldn't find any meaningful way
to build larger sets.
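To make the shape concrete, a small illustration (the error names mirror the sets used in std/crypto, but treat the exact declarations here as an assumption):

```zig
const std = @import("std");

// Single-error sets, combined per function with `||`.
const AuthenticationError = error{AuthenticationFailed};
const IdentityElementError = error{IdentityElement};

// Each signature now advertises only the errors it can actually return,
// instead of a catch-all crypto.Error set.
fn verifyTag(expected: u8, actual: u8) AuthenticationError!void {
    if (expected != actual) return error.AuthenticationFailed;
}

fn decryptStep(point_ok: bool, tag_ok: bool) (IdentityElementError || AuthenticationError)!void {
    if (!point_ok) return error.IdentityElement;
    if (!tag_ok) return error.AuthenticationFailed;
}

test "callers only handle errors that can actually happen" {
    try verifyTag(1, 1);
    try std.testing.expectError(error.AuthenticationFailed, decryptStep(true, false));
}
```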
See https://eprint.iacr.org/2019/1492.pdf for justification.
8-round ChaCha20 provides a 2.5x speedup, and is still believed
to be safe.
Round-reduced versions are actually deployed (e.g., Android filesystem
encryption), and thanks to the magic of comptime, it doesn't take much
to support them.
This also makes the ChaCha20 code more consistent with the Salsa20 code,
removing internal functions that were not part of the public API any more.
No breaking changes; the public API remains backwards compatible.
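A minimal sketch of the "magic of comptime" part (not the literal std.crypto code; the test only checks that different round counts diverge):

```zig
const std = @import("std");

fn quarterRound(s: *[16]u32, a: usize, b: usize, c: usize, d: usize) void {
    s[a] +%= s[b];
    s[d] = std.math.rotl(u32, s[d] ^ s[a], 16);
    s[c] +%= s[d];
    s[b] = std.math.rotl(u32, s[b] ^ s[c], 12);
    s[a] +%= s[b];
    s[d] = std.math.rotl(u32, s[d] ^ s[a], 8);
    s[c] +%= s[d];
    s[b] = std.math.rotl(u32, s[b] ^ s[c], 7);
}

// The round count is a comptime parameter, so ChaCha20/8, /12 and /20 are
// just three instantiations of the same inner loop.
fn chachaPermute(state: *[16]u32, comptime rounds: usize) void {
    comptime std.debug.assert(rounds % 2 == 0);
    var i: usize = 0;
    while (i < rounds) : (i += 2) {
        // column rounds
        quarterRound(state, 0, 4, 8, 12);
        quarterRound(state, 1, 5, 9, 13);
        quarterRound(state, 2, 6, 10, 14);
        quarterRound(state, 3, 7, 11, 15);
        // diagonal rounds
        quarterRound(state, 0, 5, 10, 15);
        quarterRound(state, 1, 6, 11, 12);
        quarterRound(state, 2, 7, 8, 13);
        quarterRound(state, 3, 4, 9, 14);
    }
}

test "round count is just a comptime parameter" {
    var a = [_]u32{0} ** 16;
    a[0] = 0x61707865; // "expa", the first ChaCha constant
    var b = a;
    chachaPermute(&a, 8);
    chachaPermute(&b, 20);
    try std.testing.expect(!std.mem.eql(u32, &a, &b));
}
```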
Let's follow the road paved by the removal of 'z'/'Z': the Formatter
pattern is nice enough to let us remove the remaining four special cases
and declare u8 slices free from any special casing!
- Use `PascalCase` for all types. So, AES256GCM is now Aes256Gcm.
- Consistently use `_length` instead of mixing `_size` and `_length` for the
constants we expose.
- Use `minimum_key_length` when it represents an actual minimum length.
Otherwise, use `key_length`.
- Require output buffers (for ciphertexts, MACs, hashes) to be exactly the
right size, rather than at least that size in some functions and the exact
size in others.
- Use a `_bits` suffix instead of `_length` when a size is represented as a
number of bits to avoid confusion.
- Functions returning a constant-sized slice are now declared with that type
directly, instead of a pointer plus a runtime assertion. This is the case for
most hash functions.
- Use `camelCase` for all functions instead of `snake_case`.
No functional changes, but these are breaking API changes.
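As an illustration of the scheme (the declarations below show the pattern only; treat them as an assumption rather than the exact std.crypto listing):

```zig
// Type names are PascalCase (AES256GCM -> Aes256Gcm); exposed byte counts
// use `_length`; functions are camelCase rather than snake_case.
pub const Aes256Gcm = struct {
    pub const key_length = 32;
    pub const nonce_length = 12;
    pub const tag_length = 16;
};
```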
Brings a 30% speed boost on x86_64 even though we still process only
one block at a time for now.
Only enabled on x86_64 since the non-vectorized implementation seems
to currently perform better on some architectures (at least on aarch64).
But the non-vectorized implementation still gets a little speed boost
as well (~17%) with these changes.
- 1MiB objects on the stack don't play well with wasmtime.
Reduce these to 512KiB so that the WebAssembly benchmarks can run.
- Pass expected results to a blackBox() function (see the sketch after
this list). Without this, in release-fast mode, the compiler could
detect unused return values and would produce results that didn't make
sense for SipHash.
- Add AEAD constructions to the benchmarks.
- Inlining chacha20Core() makes it 4 times faster.
- benchmarkSignatures() -> benchmarkSignature() for consistency.
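A minimal sketch of the blackBox() idea, assuming std.mem.doNotOptimizeAway is available (the helper itself is local to the benchmark, not a std API):

```zig
const std = @import("std");

// Passing a result through blackBox() keeps it observably used, so
// ReleaseFast cannot discard the computation that produced it.
fn blackBox(value: anytype) void {
    std.mem.doNotOptimizeAway(value);
}

test "results fed to blackBox are kept alive" {
    var acc: u64 = 0;
    for (0..1000) |i| acc +%= i;
    blackBox(acc); // without this, the loop above could be optimized out
}
```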
Instead of having all primitives and constructions share the same namespace,
they are now organized by category and function family.
Types within the same category are expected to share the exact same API.
* Factor redundant code in std/crypto/chacha20
* Add support for XChaCha20, and the XChaCha20-Poly1305 construction.
XChaCha20 is a variant of ChaCha20 with a 24-byte nonce; it is widely
implemented and is on the standards track:
https://tools.ietf.org/html/draft-irtf-cfrg-xchacha-03
* Add support for encryption/decryption with the authentication tag
detached from the ciphertext
* Add wrappers with an API similar to the Gimli AEAD type, so that
we can use and benchmark AEADs with a common API.
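To show the idea of a shared per-category API, here is a hypothetical sketch: ToyAead is a stand-in (not a real cipher, and not the exact std.crypto signatures), but any type exposing the same declarations can be driven by the same generic code, which is what makes common benchmarking possible:

```zig
const std = @import("std");

// Hypothetical stand-in AEAD: XORs the message and writes a dummy tag.
// A real AEAD would authenticate the message and associated data.
const ToyAead = struct {
    pub const key_length = 16;
    pub const nonce_length = 12;
    pub const tag_length = 4;

    pub fn encrypt(c: []u8, tag: *[tag_length]u8, m: []const u8, ad: []const u8, npub: [nonce_length]u8, key: [key_length]u8) void {
        _ = ad;
        for (c, m, 0..) |*out, byte, i| out.* = byte ^ key[i % key_length] ^ npub[i % nonce_length];
        @memset(tag, 0);
    }
};

// Generic helper: works for any type exposing the same declarations.
fn sealWith(comptime Aead: type, c: []u8, tag: *[Aead.tag_length]u8, m: []const u8, npub: [Aead.nonce_length]u8, key: [Aead.key_length]u8) void {
    Aead.encrypt(c, tag, m, "", npub, key);
}

test "one generic helper drives any conforming AEAD" {
    var c: [5]u8 = undefined;
    var tag: [ToyAead.tag_length]u8 = undefined;
    sealWith(ToyAead, &c, &tag, "hello", [_]u8{0} ** 12, [_]u8{1} ** 16);
    try std.testing.expect(!std.mem.eql(u8, &c, "hello"));
}
```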
* `std.mem.Compare` is now `std.math.Order` and the enum tags
renamed to follow new style convention.
* `std.mem.compare` is renamed to `std.mem.order`.
* new function `std.math.order`
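For reference, the renamed declarations in use (assuming the present-day names):

```zig
const std = @import("std");

test "std.math.order and std.mem.order" {
    // std.math.order compares two numbers and returns a std.math.Order.
    try std.testing.expectEqual(std.math.Order.lt, std.math.order(1, 2));
    // std.mem.order (formerly std.mem.compare) compares slices lexicographically.
    try std.testing.expectEqual(std.math.Order.gt, std.mem.order(u8, "b", "a"));
}
```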