Reorganize how the binOp and genBinOp functions work.
I've spent quite a while here reading carefully through the spec, and
many more tests are now enabled because of several critical issues the
old design had. There are some regressions that will take a long time to
figure out individually, so I will ignore them for now and pray they get
fixed by themselves. Once we're closer to 100% passing, I will start
diving into them one by one.
- implements `airSlice`, `airBitAnd`, `airBitOr`, `airShr`.
- got a basic design going for `airErrorName`, but for some reason it
  simply returns empty bytes. will investigate further.
- only generating `.got.zig` entries when not compiling an object or shared library
- reduced the maximum number of ops a mnemonic can have to 3, simplifying the logic
this one is even harder to document than the last large overhaul.
TL;DR:
- split Emit.zig apart into an Emit.zig and a Lower.zig
- created separate files for the encoding; adding a new instruction is now
  as simple as adding it to a couple of switch statements and providing
  the encoding (see the sketch after this list).
- relocs are handled in a more sane manner, and we now have a clear
  boundary between `lea_symbol` and `load_symbol`.
- a lot of different abstractions for things like the stack, memory, and registers.
- we're using x86_64's FrameIndex approach now, which simplifies a lot of
  the tougher design decisions.
- a lot more that I don't have the energy to document. at this point, just read the commit itself :p
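To give a feel for the "couple of switch statements" workflow, here is a
minimal sketch assuming RISC-V-style R-type encodings; `Mnemonic`,
`Encoding`, and `encoding` are illustrative names, not the actual ones in
the new files:

    const Mnemonic = enum { add, sub, @"and", @"or" };

    const Encoding = struct { opcode: u7, funct3: u3, funct7: u7 };

    // Adding an instruction means one new enum tag, one new switch arm,
    // and its encoding. These are the standard RV64I R-type encodings.
    fn encoding(m: Mnemonic) Encoding {
        return switch (m) {
            .add => .{ .opcode = 0b0110011, .funct3 = 0b000, .funct7 = 0b0000000 },
            .sub => .{ .opcode = 0b0110011, .funct3 = 0b000, .funct7 = 0b0100000 },
            .@"and" => .{ .opcode = 0b0110011, .funct3 = 0b111, .funct7 = 0b0000000 },
            .@"or" => .{ .opcode = 0b0110011, .funct3 = 0b110, .funct7 = 0b0000000 },
        };
    }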
LLVMABIAlignmentOfType(i128) reports 16 on this target; however, the C
ABI uses align(4).
Clang in LLVM 17 does this:
%struct.foo = type { i32, i128 }
Clang in LLVM 18 does this:
%struct.foo = type <{ i32, i128 }>
Clang works around the 16-byte alignment, achieving align(4) for the C
ABI by making the LLVM struct packed.
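For context, a struct of roughly this shape hits the mismatch
(illustrative; the commit does not name the exact type):

    // On the affected target the C ABI wants align(4) for the i128
    // field, while LLVMABIAlignmentOfType(i128) reports 16.
    const foo = extern struct {
        a: i32,
        b: i128,
    };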
There is no way to know the expected parent pointer attributes (most
notably alignment) from the type of the field pointer, so provide them
in the first argument.
This commit changes how we represent comptime-mutable memory
(`comptime var`) in the compiler in order to implement the intended
behavior that references to such memory can only exist at comptime.
It does *not* clean up the representation of mutable values, improve the
representation of comptime-known pointers, or fix the many bugs in the
comptime pointer access code. These will be future enhancements.
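At the language level, the intended behavior looks roughly like this
(a hedged sketch; the exact error wording may differ):

    const std = @import("std");

    test "references to comptime vars cannot leak to runtime" {
        comptime var x: u32 = 123;
        comptime x += 1; // mutating within this analysis is allowed

        comptime {
            const p = &x; // fine: the pointer never leaves comptime
            std.debug.assert(p.* == 124);
        }

        // const leaked = &x; // error: runtime value contains reference
        //                    // to comptime var
    }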
Comptime memory lives for the duration of a single Sema, and is not
permitted to escape that one analysis, either by becoming runtime-known
or by becoming comptime-known to other analyses. These restrictions mean
that we can represent comptime allocations not via Decl, but with state
local to Sema - specifically, the new `Sema.comptime_allocs` field. All
comptime-mutable allocations, as well as any comptime-known const allocs
containing references to such memory, live in here. This allows for
relatively fast checking of whether a value references any
comptime-mutable memory, since we need only traverse values up to
pointers: pointers to Decls can never reference comptime-mutable memory,
and pointers into `Sema.comptime_allocs` always do.
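A self-contained sketch of that check; the types here are a toy model,
not the real Sema/InternPool representation:

    const std = @import("std");

    const PtrBase = union(enum) {
        decl: u32, // pointer into a Decl: never comptime-mutable
        comptime_alloc: u32, // index into Sema.comptime_allocs: always is
    };

    const Val = union(enum) {
        int: i64,
        ptr: PtrBase,
        aggregate: []const Val,
    };

    // Only traverse down to the first pointer on each branch; the
    // pointer's base alone decides the answer.
    fn refersToComptimeMutable(val: Val) bool {
        return switch (val) {
            .int => false,
            .ptr => |base| base == .comptime_alloc,
            .aggregate => |elems| for (elems) |elem| {
                if (refersToComptimeMutable(elem)) break true;
            } else false,
        };
    }

    test "pointer bases classify values" {
        const v: Val = .{ .aggregate = &.{
            .{ .int = 1 },
            .{ .ptr = .{ .comptime_alloc = 0 } },
        } };
        try std.testing.expect(refersToComptimeMutable(v));
    }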
This change exposed some faulty pointer access logic in `Value.zig`.
I've fixed the important cases, but there are some TODOs I've put in
which are definitely possible to hit with sufficiently esoteric code. I
plan to resolve these by auditing all direct accesses to pointers (most
of them ought to use Sema to perform the pointer access!), but for now
this is sufficient for all realistic code and to get tests passing.
This change eliminates `Zcu.tmp_hack_arena`, instead using the Sema
arena for comptime memory mutations, which is possible since comptime
memory is now local to the current Sema.
This change should allow `Decl` to store only an `InternPool.Index`
rather than a full-blown `ty: Type, val: Value`. This commit does not
perform this refactor.
A pointer type already has an alignment, so this information does not
need to be duplicated on the function type. There is already precedent
for this with addrspace, which is disallowed on function types for the
same reason. Also fixes `@TypeOf(&func)` to have the correct addrspace
and alignment.
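A hedged sketch of the fixed behavior (the exact printed type name is an
assumption):

    fn f() align(16) void {}

    comptime {
        // After this change the alignment lives on the pointer type,
        // e.g. *align(16) const fn () void, not on the function type.
        const P = @TypeOf(&f);
        _ = P;
    }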
Instead of creating Module.Decl objects, directly create InternPool
pointer values using the anon_decl Addr encoding.
The LLVM backend needed code to notice the alignment of the pointer and
lower accordingly. The other backends likely need a similar change.
This changeset fixes the handling of alignment in several places. The
new rules are:
* `@alignOf(T)` where `T` is a runtime zero-bit type is at least 1,
maybe greater.
* Zero-bit fields in `extern` structs *do* force alignment, potentially
offsetting following fields.
* Zero-bit fields *do* have addresses within structs which can be
observed and are consistent with `@offsetOf`.
These are not necessarily all implemented correctly yet (see disabled
test), but this commit fixes all regressions compared to master, and
makes one new test pass.
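A sketch of what the rules imply, assuming a zero-length array is
accepted as a zero-bit field in an extern struct; per the note above,
not all of this may pass yet:

    const std = @import("std");

    const S = extern struct {
        a: u8,
        z: [0]u32, // zero-bit, but aligns like u32, offsetting `b`
        b: u8,
    };

    test "zero-bit fields force alignment and have addresses" {
        try std.testing.expect(@alignOf([0]u32) >= 1);
        const s: S = .{ .a = 1, .z = .{}, .b = 2 };
        // The zero-bit field's address agrees with @offsetOf.
        try std.testing.expect(
            @intFromPtr(&s.z) - @intFromPtr(&s) == @offsetOf(S, "z"),
        );
    }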
Most of this migration was performed automatically with `zig fmt`. There
were a few exceptions that I had to fix manually:
* `@alignCast` and `@addrSpaceCast` cannot be automatically rewritten
* `@truncate`'s fixup is incorrect for vectors
* Test cases are not formatted, and their error locations change
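For reference, the `@alignCast` rewrite needs a result type that the old
two-argument form carried explicitly, which is why no automatic fixup is
possible (a hedged before/after sketch):

    // Before: const q = @alignCast(4, p);
    // After: the alignment is inferred from the result type, which
    // `zig fmt` cannot conjure on its own:
    fn example(p: *align(1) u8) *align(4) u8 {
        return @alignCast(p);
    }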