Rewrite GHASH to use 128-bit multiplication over non-reversed
integers, and aggregated reduction over up to 8 blocks.
lib/std/crypto/benchmark.zig results:
Xeon E5:
Before: 1604 MiB/s
After: 4005 MiB/s
Apple M1:
Before: 2769 MiB/s
After: 6014 MiB/s
Since GHASH is the authenticator used by AES-GCM, this also makes AES-GCM faster.
Comptime code can't execute assembly code, so we need some way to
force comptime code to use the generic path. This should be replaced
with whatever is implemented for #868, when that day comes.
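For reference, here is a minimal sketch of the kind of generic fallback involved. The name `clmulGeneric` is illustrative, not this commit's actual code: a software carryless multiplication that comptime evaluation can run, unlike the PCLMULQDQ/PMULL inline assembly paths.
```
const std = @import("std");

// Hypothetical generic fallback: carryless multiplication in software.
// Comptime code can evaluate this, but not inline assembly such as
// PCLMULQDQ (x86-64) or PMULL (AArch64).
fn clmulGeneric(x: u64, y: u64) u128 {
    var z: u128 = 0;
    for (0..64) |i| {
        if ((y >> @intCast(i)) & 1 != 0) {
            z ^= @as(u128, x) << @intCast(i);
        }
    }
    return z;
}

test "clmulGeneric" {
    // (x^2 + 1) * (x + 1) = x^3 + x^2 + x + 1 over GF(2)[x]
    try std.testing.expectEqual(@as(u128, 0b1111), clmulGeneric(0b101, 0b11));
}
```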
I am seeing that the hash result is incorrect in stage1, and that stage2
crashes, so presumably this never worked correctly. I will follow up on
that soon.
This gets us most of the way back to the performance I had when
I was using the LLVM intrinsics:
- Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz:
190.67 MB/s (w/o intrinsics) -> 1285.08 MB/s
- AMD EPYC 7763 (VM) @ 2.45 GHz:
240.09 MB/s (w/o intrinsics) -> 1360.78 MB/s
- Apple M1:
216.96 MB/s (w/o intrinsics) -> 2133.69 MB/s
Minor changes to this source can swing performance from 400 MB/s to
1400 MB/s or... 20 MB/s, depending on how it interacts with the
optimizer. I have a sneaking suspicion that, despite LLVM inheriting
GCC's extremely strict inline assembly semantics, its passes are
rather skittish around inline assembly (and its instruction cost
models almost certainly can assume nothing about it).
There's probably plenty of room to optimize these further in the
future, but for the moment this gives ~3x improvement on Intel
x86-64 processors, ~5x on AMD, and ~10x on M1 Macs.
These extensions are very new; most processors released before 2020 do
not support them.
AVX-512 is a slightly older alternative that we could use on Intel
for a much bigger performance bump, but it has been fused off on
Intel's latest hybrid architectures, and it relies on computing
independent SHA hashes in parallel. In contrast, these SHA intrinsics
provide the usual single-threaded, single-stream interface, and should
continue working on new processors.
AArch64 also has SHA-512 intrinsics that we could take advantage
of in the future.
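As a hedged sketch (modern Zig; the constant names are illustrative), gating on the relevant CPU features at comptime looks roughly like this:
```
const std = @import("std");
const builtin = @import("builtin");

// `.sha` is the x86 SHA-NI feature; `.sha2` is the AArch64 SHA-256 feature.
const has_x86_sha = builtin.cpu.arch == .x86_64 and
    std.Target.x86.featureSetHas(builtin.cpu.features, .sha);
const has_arm_sha2 = builtin.cpu.arch == .aarch64 and
    std.Target.aarch64.featureSetHas(builtin.cpu.features, .sha2);

test "feature detection" {
    // When neither feature is present, the generic implementation is used.
    _ = has_x86_sha;
    _ = has_arm_sha2;
}
```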
Similar to what was done for EdDSA, allow incremental creation
and verification of ECDSA signatures.
Doing so for ECDSA is trivial, and can be useful for TLS as well
as the future package manager.
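A sketch of the resulting streaming usage, mirroring the EdDSA API (exact names may differ across Zig versions):
```
const std = @import("std");
const Ecdsa = std.crypto.sign.ecdsa.EcdsaP256Sha256;

test "incremental ECDSA signing and verification" {
    const kp = try Ecdsa.KeyPair.create(null);

    // Feed the message in chunks instead of all at once.
    var signer = try kp.signer(null);
    signer.update("hello, ");
    signer.update("world");
    const sig = try signer.finalize();

    var verifier = try sig.verifier(kp.public_key);
    verifier.update("hello, ");
    verifier.update("world");
    try verifier.verify();
}
```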
This commit accepts unusual parameter combinations such as EcdsaP384Sha256.
Some certificates (the certificates below are in /etc/ssl/certs/ca-certificates.crt on Ubuntu 22.04) use EcdsaP384Sha256 to sign themselves.
- Subject: C=GR, L=Athens, O=Hellenic Academic and Research Institutions Cert. Authority, CN=Hellenic Academic and Research Institutions ECC RootCA 2015
- Subject: C=US, ST=Texas, L=Houston, O=SSL Corporation, CN=SSL.com EV Root Certification Authority ECC
- Subject: C=US, ST=Texas, L=Houston, O=SSL Corporation, CN=SSL.com Root Certification Authority ECC
In verify(), the hash array `h` is allocated to be larger than `scalar.encoded_length`.
The array is treated as big-endian: the hash bytes fill the back of the
array, and the remaining leading bytes are filled with zero.
In sign(), the hash array is allocated and filled in the same way as in verify().
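Illustratively (a sketch; the 48/32 sizes match EcdsaP384Sha256):
```
const std = @import("std");

test "left-pad a SHA-256 digest into a P-384 scalar encoding" {
    var digest: [32]u8 = undefined;
    std.crypto.hash.sha2.Sha256.hash("message", &digest, .{});

    // Big-endian buffer of the scalar's encoded length: the digest goes
    // in the back, the leading bytes stay zero, so the integer value of
    // the hash is preserved.
    var h = [_]u8{0} ** 48;
    @memcpy(h[h.len - digest.len ..], &digest);
    try std.testing.expect(std.mem.allEqual(u8, h[0..16], 0));
}
```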
In deterministicScalar(), the hash bytes alone are insufficient to generate `k`.
To generate `k` without narrowing its value range,
this commit uses step h of Section 3.2 ("Generation of k") of RFC 6979.
This reverts commit 7cbd586ace.
This is causing a failure to build from source:
```
./lib/std/fmt.zig:492:17: error: cannot format optional without a specifier (i.e. {?} or {any})
@compileError("cannot format optional without a specifier (i.e. {?} or {any})");
^
./src/link/MachO/Atom.zig:544:26: note: called from here
log.debug(" RELA({s}) @ {x} => %{d} in object({d})", .{
^
```
I looked at the code to fix it but none of those args are optionals.
Key blinding allows public keys to be augmented with a secret
scalar, making multiple signatures from the same signer unlinkable.
https://datatracker.ietf.org/doc/draft-dew-cfrg-signature-key-blinding/
This is required by privacy-preserving applications such as Tor
onion services and the PrivacyPass protocol.
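At its core, the idea is point-scalar multiplication of the public key by a secret blinding scalar. A hedged sketch of that idea (not this commit's API; the values are hypothetical):
```
const std = @import("std");
const Edwards25519 = std.crypto.ecc.Edwards25519;

test "key blinding idea: multiply a public key by a secret scalar" {
    // A real scheme derives the blinding scalar from a secret seed and
    // re-derives it later for unblinding and signing.
    const blind = [_]u8{0x42} ** 32;
    const pk = try Edwards25519.basePoint.mul([_]u8{0x01} ** 32);
    const blinded_pk = try pk.mul(blind);
    _ = blinded_pk; // unlinkable to pk without knowledge of `blind`
}
```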
Gimli was a game changer: a permutation large enough to be used in
sponge-like constructions, yet small enough to be implemented compactly
and run fast on a wide range of platforms.
And Gimli being part of the Zig standard library was awesome.
But since then, Gimli entered the NIST Lightweight Cryptography
Competition, competing against other candidates sharing a similar set
of properties.
Unfortunately, Gimli didn't pass the 3rd round.
There are no practical attacks against Gimli when used correctly, but
NIST's decision means that Gimli is unlikely to ever get any traction.
So, maybe the time has come to move Gimli from the standard library
to another repository.
We shouldn't do it without providing an alternative, though.
And the best candidate for this is probably Xoodoo.
Xoodoo is the core function of Xoodyak, one of the finalists of the
NIST LWC competition, and the most direct competitor to Gimli. It is
also a 384-bit permutation, so it can easily be used everywhere Gimli
was used with no parameter changes.
It is the building block of Xoodyak (for actual encryption and hashing)
as well as of Charm, which some Zig applications are already using.
Like Gimli, which heavily inspired it, it is compact and
suitable for constrained environments.
This change adds the Xoodoo permutation to std.crypto.core.
The set of public functions includes everything required to later
implement existing Xoodoo-based constructions.
In order to prepare for the Gimli deprecation, the default
CSPRNG was changed to a Xoodoo-based one that works exactly the same way.
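Callers of the default CSPRNG are unaffected by the backend swap:
```
const std = @import("std");

test "default CSPRNG interface is unchanged" {
    var buf: [32]u8 = undefined;
    std.crypto.random.bytes(&buf); // now Xoodoo-based, same interface
}
```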
A hash function cascade was a common way to avoid length-extension
attacks with traditional hash functions such as the SHA-2 family.
Add `std.crypto.hash.composition` to do exactly that using arbitrary
hash functions, and pre-define the common SHA2-based ones.
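For instance, a short sketch using the predefined SHA-256 cascade:
```
const std = @import("std");
const Sha256oSha256 = std.crypto.hash.composition.Sha256oSha256;

test "SHA-256(SHA-256(m)) is not subject to length extension" {
    var out: [Sha256oSha256.digest_length]u8 = undefined;
    Sha256oSha256.hash("message", &out, .{});
}
```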
With this, we can now sign and verify Bitcoin signatures in pure Zig.
std.crypto.ecc: add support for the secp256k1 curve
Usage of the secp256k1 elliptic curve has grown considerably,
since this is the curve used by Bitcoin and other popular blockchains
such as Ethereum.
With this, Zig supports all of today's widely deployed elliptic curves.
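A sketch of basic usage (modern Zig; the endianness parameter follows the other Weierstrass curves):
```
const std = @import("std");
const Secp256k1 = std.crypto.ecc.Secp256k1;

test "secp256k1 base point multiplication" {
    var sk: [32]u8 = undefined;
    std.crypto.random.bytes(&sk);
    // Errors only if the scalar reduces to zero.
    const pk = try Secp256k1.basePoint.mul(sk, .big);
    _ = pk.affineCoordinates();
}
```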
For 25519, it's very unlikely that applications would ever need the
serialized representation. Expose the value as an integer as in
other curves. Rename the internal representation from `field_size`
to `field_order` for consistency.
Also fix a common typo in `scalar.sub()`.
This valid Zig code produces reasonable LLVM IR; however, on the
wasm32-wasi target, when using the wasmtime runtime, the number of
locals in the `isSquare` function exceeds 50,000, causing wasmtime
to refuse to execute the binary.
The `inline` keyword in Zig is intended to be used only where it is
semantically necessary; not as an optimization hint. Otherwise, this may
produce unwanted binary bloat for the -OReleaseSmall use case.
In the future, it is possible that we may end up with both `inline`
keyword, which operates as it does in status quo, and additionally
`callconv(.inline_hint)` which has no semantic impact, but may be
observed by optimization passes.
In this commit, I also cleaned up `isSquare` by eliminating an
unnecessary mutable variable, replacing it with several local constants.
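The cleanup follows this pattern (illustrative only, not the actual `isSquare` body):
```
const std = @import("std");

// Before: one `var t` reassigned at every step.
// After: each intermediate value gets its own constant.
fn pow7(x: u64) u64 {
    const x2 = x *% x;
    const x3 = x2 *% x;
    const x6 = x3 *% x3;
    return x6 *% x;
}

test "pow7" {
    try std.testing.expectEqual(@as(u64, 128), pow7(2));
}
```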
Closes #11947.
ECDSA is the most commonly used signature scheme today, mainly for
historical and conformance reasons. It is a necessary evil for
many standard protocols such as TLS and JWT.
It is tricky to implement securely and has been the root cause of
multiple security disasters, from the Playstation 3 hack to multiple
critical issues in OpenSSL and Java.
This implementation combines lessons learned from the past with
recent recommendations.
In Zig, the NIST curves that ECDSA is almost always instantiated with
use formally verified field arithmetic, giving us peace of mind
even on edge cases. The API rejects neutral elements where it
matters, and unconditionally checks for non-canonical encodings of
scalars and group elements. This automatically eliminates common
vulnerabilities such as https://sk.tl/2LpS695v .
ECDSA's security heavily relies on the security of the random number
generator, which is a concern in some environments.
This implementation mitigates that by computing deterministic
nonces using the conservative scheme from Pornin et al., with the
optional addition of randomness as proposed in Ericsson's
"Deterministic ECDSA and EdDSA Signatures with Additional Randomness"
document. This approach mitigates both the implications of a weak RNG
and the practical impact of fault attacks.
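A sketch of signing with the optional added randomness (exact signatures may vary across Zig versions):
```
const std = @import("std");
const Ecdsa = std.crypto.sign.ecdsa.EcdsaP256Sha256;

test "deterministic nonce with added randomness" {
    const kp = try Ecdsa.KeyPair.create(null);
    var noise: [Ecdsa.noise_length]u8 = undefined;
    std.crypto.random.bytes(&noise);
    // The nonce is derived deterministically from the key and message,
    // with `noise` mixed in; passing null keeps it fully deterministic.
    const sig = try kp.sign("message", noise);
    try sig.verify("message", kp.public_key);
}
```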
Project Wycheproof is a Google project to test crypto libraries against
known attacks by triggering edge cases. It discovered vulnerabilities
in virtually all major ECDSA implementations.
The entire set of ECDSA-P256-SHA256 test vectors from Project Wycheproof
is included here. Zero defects were found in this implementation.
The public API differs from the Ed25519 one. Instead of raw byte strings
for keys and signatures, we introduce Signature, PublicKey and SecretKey
structures.
The reason is that a raw byte representation would not be optimal.
There are multiple standard representations for keys and signatures,
and decoding/encoding them may not be cheap (field elements have to be
converted from/to the Montgomery domain).
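For example (a sketch of the structured API):
```
const std = @import("std");
const Ecdsa = std.crypto.sign.ecdsa.EcdsaP256Sha256;

test "signature encodings are explicit" {
    const kp = try Ecdsa.KeyPair.create(null);
    const sig = try kp.sign("message", null);

    // Raw fixed-size (r || s) and ASN.1 DER are both standard forms;
    // converting between them is an explicit step.
    const raw: [Ecdsa.Signature.encoded_length]u8 = sig.toBytes();
    var buf: [Ecdsa.Signature.der_encoded_max_length]u8 = undefined;
    const der = sig.toDer(&buf);
    _ = raw;
    _ = try Ecdsa.Signature.fromDer(der);
}
```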
So the intent is to eventually move ed25519 to the same API; this
will not introduce any performance regression, but will give us a
consistent API that we can also reuse for RSA.
After P-256, here comes P-384, also known as secp384r1.
Like P-256, it is required for TLS, and is the current NIST recommendation for key exchange and signatures, for better or for worse.
Like P-256, all the finite field arithmetic was generated and verified to be correct by fiat-crypto.
Add the ability to generate a random, canonical curve25519 scalar,
like we do for p256.
Also leverage the existing CompressedScalar type to represent these
scalars.
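A sketch of the intended usage (assuming the helper lands in the shared 25519 scalar module):
```
const std = @import("std");
const scalar = std.crypto.ecc.Edwards25519.scalar;

test "random canonical scalar" {
    // Rejection-sampled, so the result is already reduced.
    const s: scalar.CompressedScalar = scalar.random();
    try scalar.rejectNonCanonical(s);
}
```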
* edwards25519: fix X coordinate of the base point
Reported by @OfekShochat -- Thanks!
* edwards25519: reduce public scalar when the top bit is set, not cleared
This is an optimization for the unexpected case of a scalar
larger than the field size.
Fixes #11563
* edwards25519: add a test for the implicit reduction of invalid scalars
This is the x25519 counterpart to `edwards25519.clearCofactor()`.
It is useful for checking for low-order points in protocols where it matters and where clamping cannot work, such as PAKEs.
Fixes #11353
The renderer treats comments and doc comments differently, since doc
comments are parsed into the Ast. This commit adds a check after getting
the text of the doc comment, trimming trailing whitespace before
rendering.
The `a = 0,` in the test is there to avoid a ParseError while parsing the
test.