Our key pair creation API was ugly and inconsistent between ECDSA
keys and other keys.
The same `generate()` function can now be used to generate key pairs,
and that function cannot fail.
For deterministic keys, a `generateDeterministic()` function is
available for all key types.
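As a minimal sketch of the new shape of the API, assuming the Ed25519 key type and its `seed_length` constant (the same pattern applies to the other key types):
```zig
const std = @import("std");
const Ed25519 = std.crypto.sign.Ed25519;

test "key pair creation with the unified API" {
    // Random key pair; as described above, this cannot fail.
    const kp = Ed25519.KeyPair.generate();
    _ = kp;

    // Deterministic key pair from a fixed seed; this variant can still fail
    // for some key types, hence the `try`.
    const seed = [_]u8{0x42} ** Ed25519.KeyPair.seed_length;
    const kp2 = try Ed25519.KeyPair.generateDeterministic(seed);
    _ = kp2;
}
```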
Also fix comments and compilation of the benchmark along the way.
Fixes #21002
By default, programs built in debug mode that open an https connection
will append secrets to the file specified in the SSLKEYLOGFILE
environment variable to allow protocol debugging by external programs.
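For example (hypothetical file name and target host), a debug build run like this would append the session secrets to `keys.log`, which external tools such as Wireshark can then load:
```
$ SSLKEYLOGFILE=keys.log zig run main.zig -- example.com
```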
This was preventing TLSv1.2 from working in some cases, because servers
are allowed to send multiple handshake messages in the first handshake
record, whereas this initial loop assumed that it contained only a
server hello.
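A minimal sketch (not the std.crypto.tls code) of the kind of iteration the loop needs: walk every handshake message in the record using the 4-byte handshake header (1-byte type, 3-byte big-endian length) instead of stopping after the first one:
```zig
const std = @import("std");

// Count the handshake messages contained in one plaintext handshake record.
fn countHandshakeMessages(record: []const u8) usize {
    var count: usize = 0;
    var i: usize = 0;
    while (i + 4 <= record.len) {
        const len = std.mem.readInt(u24, record[i + 1 ..][0..3], .big);
        i += 4 + len;
        count += 1;
    }
    return count;
}

test "a single record may carry more than one handshake message" {
    // A ServerHello-like stub (type 2, 1-byte body) followed by an empty
    // message of type 11 (certificate), packed into the same record.
    const record = [_]u8{ 2, 0, 0, 1, 0xaa, 11, 0, 0, 0 };
    try std.testing.expectEqual(@as(usize, 2), countHandshakeMessages(&record));
}
```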
This is mostly NFC cleanup, as I was bisecting the client hello to find
the problematic part, and the only bug fix ended up being changing
```zig
key_share.x25519_kp.public_key ++
key_share.ml_kem768_kp.public_key.toBytes()
```
to
```zig
key_share.ml_kem768_kp.public_key.toBytes() ++
key_share.x25519_kp.public_key
```
and the same swap in `KeyShare.exchange` as per some random blog that
says "a hybrid keyshare, constructed by concatenating the public KEM key
with the public X25519 key". I also note that based on the same blog
post, there was a draft version of this method that indeed had these
values swapped, and that used to be supported by this code, but it was
not properly fixed up when this code was updated from the draft spec.
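A hedged sketch of the resulting layout, assuming the ML-KEM-768 type is exposed as `std.crypto.kem.ml_kem.MLKem768` and that both key types offer the `generate()` constructor described at the top of these notes:
```zig
const std = @import("std");

test "hybrid key_share layout: ML-KEM-768 encapsulation key first, then X25519" {
    const X25519 = std.crypto.dh.X25519;
    const MLKem768 = std.crypto.kem.ml_kem.MLKem768;

    const x25519_kp = X25519.KeyPair.generate();
    const ml_kem768_kp = MLKem768.KeyPair.generate();

    // The hybrid share concatenates the ML-KEM-768 public (encapsulation)
    // key with the X25519 public key, in that order.
    const kem_bytes = ml_kem768_kp.public_key.toBytes();
    const share = kem_bytes ++ x25519_kp.public_key;

    try std.testing.expectEqualSlices(u8, &kem_bytes, share[0..kem_bytes.len]);
    try std.testing.expectEqualSlices(u8, &x25519_kp.public_key, share[kem_bytes.len..]);
}
```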
Closes #21747
Note that the removed `error.TlsIllegalParameter` case is still caught
below when it is compared to a fixed-length string, but after checking
the proper protocol version requirement first.
On decryption, the TLS client should remove zero-byte padding after the
content type field. This padding is rarely used; the only site (from the
list of top domains) that I found using it is `tutanota.com`.
From [RFC](https://datatracker.ietf.org/doc/html/rfc8446#section-5.4):
> All encrypted TLS records can be padded.
> Padding is a string of zero-valued bytes appended to the ContentType
> field before encryption.
> the receiving implementation scans the field from the end toward the
> beginning until it finds a non-zero octet. This non-zero octet is the
> content type of the message.
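A minimal sketch (not the std.crypto.tls code) of that de-padding step:
```zig
const std = @import("std");

// Scan the decrypted inner plaintext from the end, drop the zero padding,
// and split off the ContentType byte that precedes it (RFC 8446, 5.4).
fn splitInnerPlaintext(inner: []const u8) !struct { content_type: u8, data: []const u8 } {
    var end = inner.len;
    while (end > 0 and inner[end - 1] == 0) end -= 1;
    // A record whose inner plaintext is all zeros has no content type and is invalid.
    if (end == 0) return error.TlsUnexpectedMessage;
    return .{ .content_type = inner[end - 1], .data = inner[0 .. end - 1] };
}

test "zero-byte padding after the content type is stripped" {
    // "hello" + content type 0x17 (application_data) + three bytes of padding.
    const inner = [_]u8{ 'h', 'e', 'l', 'l', 'o', 0x17, 0, 0, 0 };
    const res = try splitInnerPlaintext(&inner);
    try std.testing.expectEqual(@as(u8, 0x17), res.content_type);
    try std.testing.expectEqualStrings("hello", res.data);
}
```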
Currently we can't connect to that site:
```
$ zig run main.zig -- tutanota.com
error: TlsInitializationFailed
/usr/local/zig/zig-linux-x86_64-0.14.0-dev.208+854e86c56/lib/std/crypto/tls/Client.zig:476:45: 0x121fbed in init__anon_10331 (http_get_std)
if (inner_ct != .handshake) return error.TlsUnexpectedMessage;
^
/usr/local/zig/zig-linux-x86_64-0.14.0-dev.208+854e86c56/lib/std/http/Client.zig:1357:99: 0x1161f0b in connectTcp (http_get_std)
conn.data.tls_client.* = std.crypto.tls.Client.init(stream, client.ca_bundle, host) catch return error.TlsInitializationFailed;
^
/usr/local/zig/zig-linux-x86_64-0.14.0-dev.208+854e86c56/lib/std/http/Client.zig:1492:14: 0x11271e1 in connect (http_get_std)
} orelse return client.connectTcp(host, port, protocol);
^
/usr/local/zig/zig-linux-x86_64-0.14.0-dev.208+854e86c56/lib/std/http/Client.zig:1640:9: 0x111a24e in open (http_get_std)
try client.connect(valid_uri.host.?.raw, uriPort(valid_uri, protocol), protocol);
^
/home/ianic/Code/tls.zig/example/http_get_std.zig:28:19: 0x1118f8c in main (http_get_std)
var req = try client.open(.GET, uri, .{ .server_header_buffer = &server_header_buffer });
^
```
using this example:
```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const allocator = gpa.allocator();

    const args = try std.process.argsAlloc(allocator);
    defer std.process.argsFree(allocator, args);

    if (args.len > 1) {
        const domain = args[1];

        var client: std.http.Client = .{ .allocator = allocator };
        defer client.deinit();

        // Add the https:// prefix if needed. The buffer lives in the outer
        // scope so the slice returned from the block stays valid.
        var url_buf: [128]u8 = undefined;
        const url = brk: {
            const scheme = "https://";
            if (domain.len >= scheme.len and std.mem.eql(u8, domain[0..scheme.len], scheme))
                break :brk domain;
            break :brk try std.fmt.bufPrint(&url_buf, "https://{s}", .{domain});
        };
        const uri = try std.Uri.parse(url);

        var server_header_buffer: [16 * 1024]u8 = undefined;
        var req = try client.open(.GET, uri, .{ .server_header_buffer = &server_header_buffer });
        defer req.deinit();

        try req.send();
        try req.wait();
    }
}
```
When calculating how much ciphertext from the stream can fit into the
user and internal buffers, we should also take into account the
ciphertext that is already in the internal buffer.
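A hypothetical sketch of the accounting rule, with invented names for illustration:
```zig
const std = @import("std");

// Invented helper: the number of ciphertext bytes we may still ask the
// stream for is the combined capacity of the user and internal buffers
// minus the ciphertext that is already buffered.
fn maxCiphertextToRead(total_capacity: usize, buffered_ciphertext: usize) usize {
    return total_capacity -| buffered_ciphertext; // saturating subtraction avoids underflow
}

test "already-buffered ciphertext reduces the next read" {
    try std.testing.expectEqual(@as(usize, 3000), maxCiphertextToRead(5000, 2000));
    try std.testing.expectEqual(@as(usize, 0), maxCiphertextToRead(1000, 2000));
}
```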
Fixes #15226
Tested with
[this](https://github.com/ziglang/zig/issues/15226#issuecomment-2218809140),
using the client with different read buffers until I, hopefully,
understood what is happening.
Not relevant to this fix, but this
[part](95d9292a7a/lib/std/crypto/tls/Client.zig (L988-L991))
is still a mystery to me: why don't we use free_size in the buf_cap
calculation? It seems like a remnant of the previous implementation
without iovec.
Per the last paragraph of RFC 8446, Section 5.2, the length of the inner content of an encrypted record must not exceed 2^14 + 1 bytes, while that of the whole encrypted record must not exceed 2^14 + 256 bytes.
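Spelled out as constants (names are illustrative, not the ones in std.crypto.tls):
```zig
const std = @import("std");

// Limits from RFC 8446, Section 5.2.
const max_inner_plaintext_len = (1 << 14) + 1; // content bytes plus the ContentType byte
const max_ciphertext_record_len = (1 << 14) + 256; // inner plaintext plus AEAD expansion

test "TLS 1.3 record length limits" {
    try std.testing.expect(max_inner_plaintext_len == 16385);
    try std.testing.expect(max_ciphertext_record_len == 16640);
}
```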
The TLS client was using a function that wasn't declared on its
interface. The issue wasn't apparent because the net stream
implemented that function.
I changed it to keep the interface's promise of what is required to be
compatible with the TLS client functionality.
Use `inline` to vastly simplify the exposed API. This allows a
comptime-known endian parameter to be propagated, making extra functions
for a specific endianness completely unnecessary.
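A sketch of the idea (not the actual std functions): an `inline` wrapper lets a comptime-known `endian` argument propagate, so no per-endianness variants are needed:
```zig
const std = @import("std");

// Because the function is `inline`, a comptime-known `endian` argument stays
// comptime-known inside the body, so one function covers both byte orders.
inline fn readU32(bytes: *const [4]u8, endian: std.builtin.Endian) u32 {
    return std.mem.readInt(u32, bytes, endian);
}

test "one function handles both endiannesses" {
    const bytes = [4]u8{ 1, 0, 0, 0 };
    try std.testing.expectEqual(@as(u32, 1), readU32(&bytes, .little));
    try std.testing.expectEqual(@as(u32, 0x01000000), readU32(&bytes, .big));
}
```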
Most of this migration was performed automatically with `zig fmt`. There
were a few exceptions which I had to manually fix:
* `@alignCast` and `@addrSpaceCast` cannot be automatically rewritten
* `@truncate`'s fixup is incorrect for vectors
* Test cases are not formatted, and their error locations change
Individual max buffer sizes are well known, now that arithmetic doesn't
require allocations any more.
Also bump `main_cert_pub_key_buf`, so that e.g. `nodejs.org` public
keys can fit.
A minimal set of simple, safe functions for Montgomery arithmetic,
designed for cryptographic primitives.
Also update the current RSA cert validation to use it, getting rid
of the FixedBuffer hack and the previous limitations.
Also make the check of the RSA public key a little stricter along the
way.
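To illustrate the idea behind Montgomery arithmetic (a self-contained sketch with a toy 64-bit modulus, not the new std functions):
```zig
const std = @import("std");

// REDC with R = 2^64 for a small odd modulus n. Assumes n < 2^63 so the
// u128 addition below cannot overflow.
fn montgomeryReduce(t: u128, n: u64, n_prime: u64) u64 {
    // n_prime satisfies n * n_prime == -1 (mod 2^64).
    const m = @as(u64, @truncate(t)) *% n_prime;
    const reduced = (t + @as(u128, m) * n) >> 64;
    const r: u64 = if (reduced >= n) @intCast(reduced - n) else @intCast(reduced);
    return r;
}

fn negInverseMod2_64(n: u64) u64 {
    // Newton iteration for n^-1 mod 2^64 (n must be odd), then negate.
    var inv: u64 = n; // correct to 3 bits, since n * n == 1 (mod 8) for odd n
    for (0..6) |_| inv *%= 2 -% n *% inv; // each step doubles the correct bits
    return 0 -% inv;
}

test "Montgomery multiplication matches direct modular multiplication" {
    const n: u64 = 101;
    const n_prime = negInverseMod2_64(n);
    const a: u64 = 7;
    const b: u64 = 40;

    // Enter the Montgomery domain: x -> x * 2^64 mod n.
    const a_mont: u64 = @intCast((@as(u128, a) << 64) % n);
    const b_mont: u64 = @intCast((@as(u128, b) << 64) % n);

    // REDC(a_mont * b_mont) == a * b * 2^64 mod n; a second REDC leaves the domain.
    const ab_mont = montgomeryReduce(@as(u128, a_mont) * b_mont, n, n_prime);
    const ab = montgomeryReduce(ab_mont, n, n_prime);
    try std.testing.expectEqual((a * b) % n, ab);
}
```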
* When there is buffered cleartext, return it without calling the
underlying read function. This prevents buffer overflow due to space
used up by cleartext.
* Avoid clearing the buffer when the buffered cleartext could not be
completely given to the result read buffer, and there is some
buffered ciphertext left.
* Instead of rounding up the number of bytes to ask for to the nearest
TLS record size, round down, with a minimum of 1. This avoids taking the
code path that requires extra memory copies (see the sketch after this
list).
* Avoid calling `@memcpy` with overlapping arguments.
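A hypothetical sketch of the rounding rule from the list above, reading "a minimum of 1" as one whole record:
```zig
const std = @import("std");

// Invented helper for illustration: ask the stream for a whole number of
// TLS records, rounded down from the wanted byte count, but never less than
// one record, so the copy-heavy overflow path is not taken.
fn bytesToAsk(wanted_bytes: usize, record_size: usize) usize {
    const records = @max(1, wanted_bytes / record_size);
    return records * record_size;
}

test "round down to whole records, with a minimum of one record" {
    try std.testing.expectEqual(@as(usize, 16384), bytesToAsk(100, 16384));
    try std.testing.expectEqual(@as(usize, 16384), bytesToAsk(20000, 16384));
    try std.testing.expectEqual(@as(usize, 32768), bytesToAsk(40000, 16384));
}
```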
Closes #15590
Now they use slices or array pointers with any element type instead of
requiring byte pointers.
This is a breaking enhancement to the language.
The safety check for overlapping pointers will be implemented in a
future commit.
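A small sketch of the kind of call this enables, assuming the builtins in question are `@memcpy`/`@memset`:
```zig
const std = @import("std");

test "@memcpy with a non-byte element type" {
    // Both operands are typed (*[4]u32 here); no byte-pointer casts needed.
    var dst: [4]u32 = undefined;
    const src = [4]u32{ 1, 2, 3, 4 };
    @memcpy(&dst, &src);
    try std.testing.expectEqualSlices(u32, &src, &dst);
}
```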
Closes #14040
On CPUs without AES support, ChaCha is always faster and safer than
software AES.
Add `crypto.core.aes.has_hardware_support` to represent whether
AES acceleration is available or not, and in `tls.Client`, favor
AES-based ciphers only if hardware support is available.
This matches what BoringSSL is doing.
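An illustrative sketch of the preference rule (not the actual cipher-suite list in `tls.Client`), using the `has_hardware_support` flag introduced above:
```zig
const std = @import("std");

// Favor an AES-GCM suite only when the CPU accelerates AES; otherwise pick
// a ChaCha20-Poly1305 suite, which is faster and safer in software.
const Suite = enum { aes_128_gcm_sha256, chacha20_poly1305_sha256 };

fn preferredSuite() Suite {
    return if (std.crypto.core.aes.has_hardware_support)
        .aes_128_gcm_sha256
    else
        .chacha20_poly1305_sha256;
}

test "favor AES only when the CPU accelerates it" {
    const expected: Suite = if (std.crypto.core.aes.has_hardware_support)
        .aes_128_gcm_sha256
    else
        .chacha20_poly1305_sha256;
    try std.testing.expectEqual(expected, preferredSuite());
}
```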