Add module for mapping ASN1 types to Zig types. See
`asn1.Tag.fromZig` for the mapping. Add DER encoder and decoder.
See `asn1/test.zig` for example usage of every ASN1 type.
This implementation allows ASN1 tags to be overridden with `asn1_tag`
and `asn1_tags`:
```zig
const MyContainer = (enum | union | struct) {
    field: u32,

    pub const asn1_tag = asn1.Tag.init(...);
    // This specifies a tag's class, and if explicit, additional encoding
    // rules.
    pub const asn1_tags = .{
        .field = asn1.FieldTag.explicit(0, .context_specific),
    };
};
```
Despite having an enum tag type, ASN1 frequently uses OIDs as enum
values. This is supported via a `pub const oids` declaration.
```zig
const MyEnum = enum {
    a,

    pub const oids = asn1.Oid.StaticMap(MyEnum).initComptime(.{
        .a = "1.2.3.4",
    });
};
```
Furthermore, a container may choose to implement encoding and decoding
however it deems fit. This allows for derived fields since Zig has a far
more powerful type system than ASN1.
```zig
// ASN1 has no standard way of tagging unions.
const MyContainer = union(enum) {
    derived: PowerfulZigType,

    const WeakAsn1Type = ...;

    pub fn encodeDer(self: MyContainer, encoder: *der.Encoder) !void {
        try encoder.any(WeakAsn1Type{...});
    }

    pub fn decodeDer(decoder: *der.Decoder) !MyContainer {
        const weak_asn1_type = try decoder.any(WeakAsn1Type);
        return .{ .derived = PowerfulZigType{...} };
    }
};
```
An unfortunate side-effect is that decoding and encoding cannot have
complete error sets unless we limit what errors users may return.
Luckily, PKI ASN1 types are NOT recursive, so the inferred error set
should be sufficient.
Finally, other encodings are possible, but this patch only implements
a buffered DER encoder and decoder.
In an effort to keep the changeset minimal, this PR does not actually
use the DER parser for stdlib PKI, but a tested example of how it may
be used for Certificate is available
[here](https://github.com/clickingbuttons/asn1/blob/69c5709d/src/Certificate.zig).
Closes #19775.
* std.crypto.hash.sha2: generalize sha512 truncation
Replace `Sha512224`, `Sha512256`, and `Sha512T224` with
`fn Sha512Truncated(digest_bits: comptime_int)`.
This required refactoring `Sha2x64(comptime params)` to
`Sha2x64(comptime iv: [8]u64, digest_bits: comptime_int)`
for user-specified `digest_bits`.
I left #19697 alone but added a compile-time check that digest_bits is
divisible by 8.
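As a rough sketch (not the exact stdlib code), such a check can live at the
top of the type-returning function:
```zig
// Illustrative only; the real function also takes an IV and wires up the
// SHA-512 state. Only the digest_bits validation is shown here.
fn Sha512Truncated(comptime digest_bits: comptime_int) type {
    if (digest_bits % 8 != 0) @compileError("digest_bits must be divisible by 8");
    return struct {
        pub const digest_length = digest_bits / 8;
        // ... hashing state and methods would go here ...
    };
}
```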
Remove docs which restate type name. Add module docs and reference where
IVs come from.
* std.crypto.sha2: make Sha512_224 and Sha512_256 pub
* make generic type implementation detail, add comments
* fix iv
* address @jedisct1 feedback
* fix typo
* renaming
* add truncation clarifying comment and Sha259T192 tests
* std.crypto: make ff.ct_unprotected.limbsCmpLt compile
* std.crypto: add ff.ct test
* fix testCt to work on x86
* disable test on stage2-c
---------
Co-authored-by: Frank Denis <124872+jedisct1@users.noreply.github.com>
this patch renames ComptimeStringMap to StaticStringMap, makes it
accept only a single type parameter, and return a known struct type
instead of an anonymous struct. initial motivation for these changes
was to reduce the 'very long type names' issue described here
https://github.com/ziglang/zig/pull/19682.
this breaks the previous API. users will now need to write:
`const map = std.StaticStringMap(T).initComptime(kvs_list);`
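for example, a minimal usage sketch of the new call shape (the keys and
values here are arbitrary):
```zig
const std = @import("std");

// a minimal usage sketch of the new API; keys and values are arbitrary.
const color_map = std.StaticStringMap(u24).initComptime(.{
    .{ "red", 0xff0000 },
    .{ "green", 0x00ff00 },
    .{ "blue", 0x0000ff },
});

test "comptime-initialized lookup" {
    try std.testing.expectEqual(@as(?u24, 0x00ff00), color_map.get("green"));
    try std.testing.expectEqual(@as(?u24, null), color_map.get("purple"));
}
```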
* move `kvs_list` param from type param to an `initComptime()` param
* new public methods
* `keys()`, `values()` helpers
* `init(allocator)`, `deinit(allocator)` for runtime data
* `getLongestPrefix(str)`, `getLongestPrefixIndex(str)` - i'm not sure
these belong but have left them in for now in case they are deemed useful
* performance notes:
* i posted some benchmarking results here:
https://github.com/travisstaloch/comptime-string-map-revised/issues/1
* i noticed a speedup from reducing the size of the struct from 48 to 32
bytes, and thus use u32s instead of usize for all length fields
* i noticed a speedup from storing KVs as a struct of arrays
* latest benchmark shows these wall_time improvements for
debug/safe/small/fast builds: -6.6% / -10.2% / -19.1% / -8.9%. full
output in link above.
* define std.crypto.sha2.Sha512224
* rename blunder
* add sha512-224 and sha512-256 tests
* fix Sha2x64 for variations that aren't a multiple of 64 bits
signature/s:
Algorithm        Before    After
---------------+---------+-------
ecdsa-p256         3707     4396
ecdsa-p384         1067     1332
ecdsa-secp256k1    4490     5147
Add ECDSA to the benchmark by the way.
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-185.pdf
This adds useful standard SHA3-based constructions from the
NIST SP 800-185 document:
- cSHAKE: similar to the SHAKE extensible hash function, but
with the addition of a context parameter.
- KMAC: SHAKE-based authentication / keyed XOF
- TupleHash: unambiguous hashing of tuples
These are required by recent protocols and specifications.
They also offer properties that none of the currently available
constructions in the stdlib offer, especially the ability to safely
hash tuples.
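To illustrate why unambiguous tuple hashing matters (this is a demonstration
with the existing SHA3-256, not the new TupleHash API): plain concatenation
loses element boundaries, so two different tuples can produce the same digest.
```zig
const std = @import("std");
const Sha3_256 = std.crypto.hash.sha3.Sha3_256;

test "concatenation is ambiguous for tuples" {
    var h1 = Sha3_256.init(.{});
    h1.update("ab");
    h1.update("c");
    var d1: [Sha3_256.digest_length]u8 = undefined;
    h1.final(&d1);

    var h2 = Sha3_256.init(.{});
    h2.update("a");
    h2.update("bc");
    var d2: [Sha3_256.digest_length]u8 = undefined;
    h2.final(&d2);

    // ("ab", "c") and ("a", "bc") hash identically; TupleHash avoids this by
    // encoding each element's length.
    try std.testing.expectEqualSlices(u8, &d1, &d2);
}
```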
Other keyed hash functions/XOFs will fall back to using HMAC, which
is suboptimal from a performance perspective, but fine from a
security perspective.
These systems write the number of *bits* of their inputs as a u64.
However, if `@sizeOf(usize) == 4`, computing that bit count from a byte
length held in a usize can overflow for an input message or associated
data whose size is > 512 MiB.
On 64-bit systems, it is safe to assume that no machine has more than
2 EiB of memory.
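A minimal sketch of the pattern that avoids this, assuming the byte length is
held in a `usize`: widen to `u64` before multiplying by 8.
```zig
const std = @import("std");

test "bit count is computed in u64, not usize" {
    // Hypothetical byte length just above 512 MiB.
    const len: usize = (1 << 29) + 1;
    // Widening before the multiply keeps the arithmetic in u64, so it cannot
    // wrap even when usize is 32 bits (where `len * 8` would overflow).
    const bit_count = @as(u64, len) * 8;
    try std.testing.expect(bit_count > std.math.maxInt(u32));
}
```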
Fixes compilation errors in functions that are syntactic sugar
for operating on serialized scalars.
Also make it explicit that square roots in fields whose size is
not congruent to 3 modulo 4 are not an error; they are just
not implemented yet.
Reported by @vitalonodo - Thanks!
ML-KEM is the Kyber post-quantum secure key encapsulation mechanism,
currently being standardized by NIST.
Too bad, they decided to rename it; the "Kyber" name was so much
better!
This implements the current draft (NIST FIPS-203), which is already
being deployed even though the specification is not finalized.
Follow-up to #19079, which made test names fully qualified.
This fixes tests that had now-redundant information in their test names. For example, here's a fully qualified test name before the changes in this commit:
"priority_queue.test.std.PriorityQueue: shrinkAndFree"
and the same test's name after the changes in this commit:
"priority_queue.test.shrinkAndFree"
Per the last paragraph of RFC 8446, Section 5.2, the length of the inner content of an encrypted record must not exceed 2^14 + 1, while that of the whole encrypted record must not exceed 2^14 + 256.
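In code form, the two limits are (illustrative names, not the stdlib's):
```zig
// Illustrative constants only; the names are not the stdlib's.
const max_inner_content_len = (1 << 14) + 1; // decrypted inner content of a record
const max_encrypted_record_len = (1 << 14) + 256; // the whole encrypted record
```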
The TLS client was using a function that wasn't declared on its stream
interface. The issue wasn't apparent because the net stream
implemented that function.
I changed it to keep the interface's promise of what's required to be
compatible with the TLS client functionality.
The function returns the vector length, not the byte size of the vector or the bit size of individual elements. This distinction is very important and some usages of this function in the stdlib operated under these incorrect assumptions.
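A small sketch of the distinction, assuming the function in question is `std.simd.suggestVectorLength` (the commit does not name it): the return value counts elements, so the byte size of the corresponding vector is the length times the element size.
```zig
const std = @import("std");

test "suggested vector length is an element count" {
    // If the target suggests no vector length, there is nothing to check.
    if (std.simd.suggestVectorLength(u8)) |len| {
        const V = @Vector(len, u8);
        // The vector's byte size is len * @sizeOf(u8), not len bits.
        try std.testing.expectEqual(@as(usize, len * @sizeOf(u8)), @sizeOf(V));
    }
}
```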
On some architectures, including AMD Zen CPUs, dividing a secret
by a constant denominator may not be a constant-time operation.
And most Kyber implementations, including ours, could leak the
Hamming weight of the shared secret because of this. See:
https://kyberslash.cr.yp.to
Multiplications aren't guaranteed to be constant-time either, but
at least on the CPUs we currently support, they are.
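A generic sketch of the usual workaround (not the stdlib's actual Kyber code): replace the division by the constant q = 3329 with a multiplication by a precomputed reciprocal followed by a shift, which lowers to constant-time instructions.
```zig
const std = @import("std");

const q = 3329; // the Kyber modulus, used here only as an example constant
const m: u64 = (1 << 32) / q + 1; // fixed-point reciprocal of q

/// floor(x / q) for 16-bit x, computed without a division instruction.
fn divByQ(x: u16) u32 {
    return @intCast((@as(u64, x) * m) >> 32);
}

test "reciprocal multiplication matches division for all 16-bit inputs" {
    var x: u32 = 0;
    while (x <= std.math.maxInt(u16)) : (x += 1) {
        try std.testing.expectEqual(x / q, divByQ(@as(u16, @intCast(x))));
    }
}
```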
* Take advantage of multi-object for loops (a generic illustration follows this list).
* Remove use of BoundedArray since it had no meaningful impact on safety
or readability.
* Simplify some complex expressions, such as using `!` to invert a
boolean value.
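As a generic illustration of the first point (not the refactored crypto code itself), a multi-object for loop walks several operands in lockstep:
```zig
const std = @import("std");

test "multi-object for loop" {
    const src = [_]u8{ 1, 2, 3 };
    var dst = [_]u8{ 0, 0, 0 };
    // Both operands must have the same length; they are iterated together.
    for (&dst, src) |*d, s| d.* = s * 2;
    try std.testing.expectEqualSlices(u8, &[_]u8{ 2, 4, 6 }, &dst);
}
```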
The low-level `Curve25519.fromEdwards25519()` function assumed
that the X/Y coordinates were not scaled (Z=1).
But this is not guaranteed to be the case.
In most real-world applications, the coordinates are freshly decoded,
either directly or via the `X25519.fromEd25519()` function, so this
is not an issue.
However, since we offer the ability to do that conversion after
arbitrary computations, the assertion was not correct.
Use inline to vastly simplify the exposed API. This allows a
comptime-known endian parameter to be propagated, making extra functions
for a specific endianness completely unnecessary.
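Assuming this refers to byte-order helpers like `std.mem.readInt` (the commit does not name the functions), a comptime-known endian argument at each call site removes the need for separate big- and little-endian variants:
```zig
const std = @import("std");

test "one readInt covers both byte orders" {
    const bytes = [4]u8{ 0x12, 0x34, 0x56, 0x78 };
    // The endian argument is comptime-known at each call site, so no
    // endian-specific variants of the function are needed.
    try std.testing.expectEqual(@as(u32, 0x12345678), std.mem.readInt(u32, &bytes, .big));
    try std.testing.expectEqual(@as(u32, 0x78563412), std.mem.readInt(u32, &bytes, .little));
}
```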