For C code, the macros SIGRTMIN and SIGRTMAX provide these values. In
practice, what looks like a constant is actually provided by a libc call,
so the Zig implementations are explicitly function calls.
glibc (and Musl) export a run-time minimum "real-time" signal number,
based on how many signals are reserved for internal implementation details
(generally threading). In practice, on Linux, sigrtmin() is 35 on glibc
with the older LinuxThreads and 34 with the newer NPTL-based
implementation. Musl always returns 35. The maximum "real-time" signal
number is NSIG - 1 (64 on most Linux kernels, but 128 on MIPS).
When not linking a C Library, Zig can report the full range of "rt"
signals (none are reserved by Zig).
Fixes #21189
I don't know why the MIPS signal numbers are different, or why Zig already
had them special-cased, but the existing values were wrong.
We have the technology to test these constants. We should use it.
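For example, a test along these lines could pin down the expected range (a
sketch; it assumes the new functions are exposed as `std.posix.sigrtmin()`
and `std.posix.sigrtmax()`):

```zig
const std = @import("std");
const posix = std.posix;

test "real-time signal range" {
    // These are function calls, not constants, because libc may reserve
    // some real-time signals for its own threading implementation.
    const lo = posix.sigrtmin();
    const hi = posix.sigrtmax();
    try std.testing.expect(lo >= 32); // 34/35 when linking glibc/musl
    try std.testing.expect(hi > lo);
    try std.testing.expect(hi <= 128); // NSIG - 1; 128 only on MIPS
}
```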
All the existing code that manipulates `ucontext_t` expects there to be a
glibc-compatible sigmask (1024-bit). The `ucontext_t` struct needs to be
cleaned up so the glibc-dependent format is only used when linking against
glibc or musl, but that is a more involved change.
In practice, no Zig code looks at the sigset field contents, so it just
needs to be the right size.
By returning an initialized sigset (instead of taking the set as an output
parameter), these functions can be used to directly initialize the `mask`
field of a `Sigaction` instance.
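For instance (a sketch; the exact `std.posix` names and the `sigaction`
signature are assumptions about the reworked API):

```zig
const std = @import("std");
const posix = std.posix;

fn ignoreSigterm() void {
    const act: posix.Sigaction = .{
        .handler = .{ .handler = posix.SIG.IGN },
        // sigemptyset() returns the set, so it can initialize `mask`
        // directly inside the struct literal.
        .mask = posix.sigemptyset(),
        .flags = 0,
    };
    posix.sigaction(posix.SIG.TERM, &act, null);
}
```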
The kernel ABI sigset_t is smaller than the glibc one. Define the
right-sized sigset_t and fix up the sigaction() wrapper to leverage it.
The Sigaction wrapper here is not an ABI boundary, so relax it (drop the
`extern` qualifier and the `restorer` field); the existing `k_sigaction`
is the ABI sigaction struct.
Linux defines `sigset_t` with a c_ulong, so it can be 32-bit or 64-bit,
depending on the platform. This can make a difference on big-endian
systems.
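A small illustration of why the word width matters on big-endian targets:
the same "bit 0 set" value has a different byte layout depending on whether
it is stored as one 64-bit word or as two 32-bit words.

```zig
const std = @import("std");

test "word size changes the big-endian byte layout" {
    var as_u64: [8]u8 = undefined;
    var as_u32: [8]u8 = undefined;
    // Bit 0 set in a single 64-bit big-endian word...
    std.mem.writeInt(u64, &as_u64, 1, .big);
    // ...vs bit 0 set in the first of two 32-bit big-endian words.
    std.mem.writeInt(u32, as_u32[0..4], 1, .big);
    std.mem.writeInt(u32, as_u32[4..8], 0, .big);
    try std.testing.expect(!std.mem.eql(u8, &as_u64, &as_u32));
}
```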
Patch up `ucontext_t` so that this change doesn't impact its layout.
AFAICT, it's currently the glibc layout.
The code was using u32 and usize interchangeably, which doesn't work on
64-bit systems. This:
`pub const sigset_t = [1024 / 32]u32;`
is not consistent with this:
`const shift = @as(u5, @intCast(s & (usize_bits - 1)));`
However, normal signal numbers are less than 31, so the bad math doesn't matter much. Also, despite room for 1024 signals in the set, only setting signals between 1 and NSIG (usually 65, but sometimes 128) is defined. The existing tests only exercised signal numbers in the first 31 bits, so they didn't trip over this.
The C library `sigaddset` will fail with `EINVAL` if given an out-of-bounds signal number. I made the Zig code just silently ignore any out-of-bounds signal numbers.
Moved all the `sigset`-related declarations next to each other in the source, too.
The `filled_sigset` seems non-standard to me. I think it is meant to be used like `empty_sigset`, but it only contains 31 set signals, which seems wrong (it should be 64 or 128, aka `NSIG`). It's also unused. The oddly named but similar `all_mask` is used (by posix.zig) but sets all 1024 bits (which I understood to be undefined behavior, but it seems to work just fine). For comparison, musl's `sigfillset` fills in 65 or 128 bits.
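For illustration, this is the kind of consistent u32-word math described
above, with out-of-range signals ignored and a fill that covers only the
defined signals (a sketch, not the std code; `NSIG = 65` is assumed):

```zig
const NSIG = 65;
const sigset_t = [1024 / 32]u32;
const empty_sigset: sigset_t = [_]u32{0} ** (1024 / 32);

fn sigaddset(set: *sigset_t, sig: u8) void {
    if (sig < 1 or sig >= NSIG) return; // out of range: silently ignore
    const bit: usize = sig - 1;
    set[bit / 32] |= @as(u32, 1) << @intCast(bit % 32);
}

fn sigfillset() sigset_t {
    var set = empty_sigset;
    var sig: u8 = 1;
    while (sig < NSIG) : (sig += 1) sigaddset(&set, sig);
    return set; // sets exactly NSIG - 1 bits, like musl's sigfillset
}
```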
Linux kernel syscalls expect to be given the sigset size they were built
for (NSIG bits, passed as NSIG / 8 bytes), not the full 1024-bit sigset
that glibc supports.
I audited the other syscalls in here that use `sigset_t` and they're all
using `NSIG / 8`.
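For example, a raw `rt_sigprocmask` call passes the size in bytes (sketch
only; the `std.os.linux` names used here are my assumption about its
current API):

```zig
const std = @import("std");
const linux = std.os.linux;

fn blockAllSignals() void {
    const all: linux.sigset_t = linux.all_mask;
    var old: linux.sigset_t = undefined;
    // The last argument is the kernel's sigset size in bytes (NSIG / 8),
    // even though the sigset_t variable itself may be larger.
    _ = linux.syscall4(
        .rt_sigprocmask,
        linux.SIG.BLOCK,
        @intFromPtr(&all),
        @intFromPtr(&old),
        linux.NSIG / 8,
    );
}
```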
Fixes #12715
Adds a CreateProcessFlags packed struct for all the possible flags to
CreateProcessW on Windows. In addition, propagates the existing
`start_suspended` option in std.process.Child, which was previously only
used on Darwin. Also adds a `create_no_window` option to std.process.Child,
which is a commonly used flag for launching console executables on
Windows without causing a new console window to "pop up".
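Rough shape of the struct (only a handful of documented `dwCreationFlags`
bits are shown; the field names are illustrative, not necessarily the ones
the PR uses):

```zig
pub const CreateProcessFlags = packed struct(u32) {
    debug_process: bool = false, // 0x00000001 DEBUG_PROCESS
    debug_only_this_process: bool = false, // 0x00000002 DEBUG_ONLY_THIS_PROCESS
    create_suspended: bool = false, // 0x00000004 CREATE_SUSPENDED
    detached_process: bool = false, // 0x00000008 DETACHED_PROCESS
    create_new_console: bool = false, // 0x00000010 CREATE_NEW_CONSOLE
    _reserved1: u22 = 0,
    create_no_window: bool = false, // 0x08000000 CREATE_NO_WINDOW
    _reserved2: u4 = 0,
};
```

Because it is a `packed struct(u32)`, an instance can be `@bitCast` to a
`u32` wherever a raw `DWORD` is still needed.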
This PR consistently maps .ACCES into AccessDenied and .PERM into
PermissionDenied. AccessDenied is returned if the file mode bits
(user/group/other rwx bits) disallow access (errno was `EACCES`).
PermissionDenied is returned if something else denies access (errno was
`EPERM`): the immutable bit, SELinux, capabilities, etc. This somewhat
subtle distinction is a POSIX thing.
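In code, the mapping looks roughly like this (an illustrative wrapper, not
the actual std.posix code):

```zig
const std = @import("std");
const posix = std.posix;

fn deleteFile(path: [*:0]const u8) !void {
    switch (posix.errno(std.os.linux.unlink(path))) {
        .SUCCESS => {},
        .ACCES => return error.AccessDenied, // rwx mode bits forbid it
        .PERM => return error.PermissionDenied, // immutable bit, SELinux, capabilities, ...
        else => |err| return posix.unexpectedErrno(err),
    }
}
```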
Most of the change is updating std.posix Error Sets to contain both
errors, and then propagating the pair up through caller Error Sets.
Fixes #16782
Windows defines an `ACCESS_DENIED` error code. There is no
PERMISSION_DENIED (or equivalent); that distinction seems to exist only on
POSIX systems. Fix a couple of Windows call sites to return
`error.AccessDenied` for `ACCESS_DENIED` and to stop mapping AccessDenied
into PermissionDenied.
[Incremental provided buffer
consumption](https://github.com/axboe/liburing/wiki/What's-new-with-io_uring-in-6.11-and-6.12#incremental-provided-buffer-consumption)
support was added in kernel 6.12.
IoUring.BufferGroup will now use incremental consumption whenever the
kernel supports it.
Previously, a provided buffer was wholly consumed when picked, and each
cqe pointed to a different buffer. With this change, a cqe can point to
part of a buffer, and multiple cqe's can reuse the same buffer.
Appropriately sizing the buffers therefore becomes less important.
There are slight changes to the BufferGroup interface (it now needs to
track the current receive position for each buffer). Init requires an
allocator instead of a buffers slice; it will allocate the buffers slice
and the head-pointers slice itself. Get and put now require the cqe,
because that is where the information about whether the buffer will be
reused lives.
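For reference, this is how a consumer can tell whether a provided buffer
is finished: with incremental consumption the kernel sets
`IORING_CQE_F_BUF_MORE` while more of the same buffer remains usable. The
constants below are from the kernel uapi (`io_uring.h`); the helper names
are just illustrative.

```zig
const IORING_CQE_F_BUFFER: u32 = 1 << 0;
const IORING_CQE_F_BUF_MORE: u32 = 1 << 4;
const IORING_CQE_BUFFER_SHIFT = 16;

/// Buffer id the cqe refers to (valid only when F_BUFFER is set).
fn bufferId(cqe_flags: u32) u16 {
    return @intCast(cqe_flags >> IORING_CQE_BUFFER_SHIFT);
}

/// True once the kernel will put no more data into this buffer, i.e. it is
/// safe to hand the whole buffer back to the ring.
fn bufferExhausted(cqe_flags: u32) bool {
    return (cqe_flags & IORING_CQE_F_BUFFER) != 0 and
        (cqe_flags & IORING_CQE_F_BUF_MORE) == 0;
}
```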