Previously, various doc comments disagreed heavily with the
implementation about what lives where on the filesystem at what time,
and how that is represented in code. Notably, the combination of emit
paths outside the cache and `disable_lld_caching` created a kind of
ad-hoc "cache disable" mechanism -- which didn't actually *work* very
well, since almost everything still ended up in the cache. There was
also a long-standing issue where building with the LLVM backend would
put a stray object file in your cwd.
This commit reworks how emit paths are specified in
`Compilation.CreateOptions`, how they are represented internally, and
how cache usage is specified.
There are now 3 options for `Compilation.CacheMode`:
* `.none`: do not use the cache. The paths we have to emit to are
relative to the compiler cwd (they're either user-specified, or
defaults inferred from the root name). If we create any temporary
files (e.g. the ZCU object when using the LLVM backend) they are
emitted to a directory in `local_cache/tmp/`, which is deleted once
the update finishes.
* `.whole`: cache the compilation based on all inputs, including file
contents. All emit paths are computed by the compiler (and are stored
relative to the local cache directory); it is a CLI error to specify an
explicit emit path. Artifacts (including temporary files) are written
to a directory under `local_cache/tmp/`, which is later renamed to an
appropriate directory under `local_cache/o/`. The caller (which is
using `--listen`; e.g. the build system) learns the name of this
directory, and can retrieve the artifacts from it.
* `.incremental`: similar to `.whole`, but Zig source file contents,
and anything else which incremental compilation can handle changes for,
are not included in the cache manifest. We don't need to do the dance
where the output directory is initially in `tmp/`, because our digest
is computed entirely from CLI inputs.
To be clear, the difference between `CacheMode.whole` and
`CacheMode.incremental` is unchanged. `CacheMode.none` is new
(previously, it was poorly imitated with `CacheMode.whole`). The
defined behavior for temporary/intermediate files is also new.
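For reference, a minimal sketch of the shape of the new option (the
real `Compilation.CacheMode` lives in the compiler; the doc comments
here only summarize the behavior described above):

```zig
pub const CacheMode = enum {
    /// Emit paths are relative to the compiler cwd; temporaries go to
    /// a directory in `local_cache/tmp/` that is deleted once the
    /// update finishes.
    none,
    /// Digest covers all inputs, including file contents; artifacts
    /// are written under `local_cache/tmp/` and later renamed into
    /// `local_cache/o/`.
    whole,
    /// Digest is computed from CLI inputs only, so incremental
    /// updates can keep reusing the same output directory.
    incremental,
};
```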
`.none` is used for direct CLI invocations like `zig build-exe foo.zig`.
The other cache modes are reserved for `--listen`, and the cache mode in
use is currently just based on the presence of the `-fincremental` flag.
There are two cases in which `CacheMode.whole` is used despite there
being no `--listen` flag: `zig test` and `zig run`. Unless an explicit
`-femit-bin=xxx` argument is passed on the CLI, these subcommands will
use `CacheMode.whole`, so that they can put the output somewhere without
polluting the cwd (plus, caching is potentially more useful for direct
usage of these subcommands).
Users of `--listen` (such as the build system) can now use
`std.zig.EmitArtifact.cacheName` to find out what an output will be
named. This avoids having to synchronize logic between the compiler and
all users of `--listen`.
It turns out that LLD caching hasn't been in use for a while. On
master, it is only enabled when you compile via the build system, pass
`-fincremental`, and use LLD (and hence LLVM if there's a ZCU). That
case never happens, because `-fincremental` is only useful when you're
using a backend *other* than the LLVM backend. My previous commits
accidentally re-enabled this logic in some cases, exposing bugs; that
ultimately led to this realisation. So, let's just delete that logic --
less LLVM-related cruft to maintain.
Unfortunately, the self-hosted SPIR-V backend is quite tightly coupled
with the self-hosted SPIR-V linker through its `Object` concept (which
is much like `llvm.Object`). Reworking this would be too much work for
this branch. So, for now, I have introduced a special case (similar to
the LLVM backend's special case) to the codegen logic when using this
backend. We will want to delete this special case at some point, but it
need not block this work.
My original goal here was just to get the self-hosted Wasm backend
compiling again after the pipeline change, but it turned out that from
there it was pretty simple to entirely eliminate the shared state
between `codegen.wasm` and `link.Wasm`. As such, this commit not only
fixes the backend, but makes it the second backend (after CBE) to
support the new 1:N:1 threading model.
As of this commit, every backend other than self-hosted Wasm and
self-hosted SPIR-V compiles and (at least somewhat) functions again.
Those two backends are currently disabled with panics.
Note that `Zcu.Feature.separate_thread` is *not* enabled for the fixed
backends. Avoiding linker references from codegen is a non-trivial task,
and can be done after this branch.
The idea here is that instead of the linker calling into codegen,
codegen should run before we touch the linker; once MIR is produced, it
is sent to the linker. Aside from simplifying the call graph (by
preventing N linkers from each calling into M codegen backends!), this
has the huge benefit that it becomes possible to parallelize codegen
separately from linking. The threading model can look like this:
* 1 semantic analysis thread, which generates AIR
* N codegen threads, which process AIR into MIR
* 1 linker thread, which emits MIR to the binary
The codegen threads are also responsible for `Air.Legalize` and
`Air.Liveness`; it's more efficient to do this work here instead of
blocking the main thread for this trivially parallel task.
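As a toy illustration of this 1:N:1 split (this is not the compiler's
actual work-queue machinery; the "AIR" and "MIR" here are just
integers, and the thread and work counts are hardcoded):

```zig
const std = @import("std");

// One "sema" producer (here: a fixed array of fake AIR), N codegen
// workers lowering AIR to MIR in parallel, and a single "linker" loop
// consuming the results serially afterwards.
pub fn main() !void {
    const air = [_]u64{ 1, 2, 3, 4, 5, 6, 7, 8 };
    var mir: [air.len]u64 = undefined;

    // N codegen threads, each owning a disjoint chunk of the work.
    var threads: [4]std.Thread = undefined;
    for (&threads, 0..) |*t, i| {
        const start = i * 2;
        t.* = try std.Thread.spawn(.{}, codegen, .{
            air[start .. start + 2], mir[start .. start + 2],
        });
    }
    for (threads) |t| t.join();

    // The single "linker" thread emits the MIR serially.
    for (mir) |m| std.debug.print("emit {d}\n", .{m});
}

// Stand-in for AIR -> (Legalize, Liveness, codegen) -> MIR.
fn codegen(air: []const u64, mir: []u64) void {
    for (air, mir) |a, *m| m.* = a * a;
}
```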
I have repurposed the `Zcu.Feature.separate_thread` backend feature to
indicate support for this 1:N:1 threading pattern. This commit makes the
C backend support this feature, since it was relatively easy to divorce
from `link.C`: it just required eliminating some shared buffers. Other
backends don't currently support this feature. In fact, they don't even
compile -- the next few commits will fix them back up.
Similar to the previous commit, this commit untangles LLD integration
from the self-hosted linkers. Despite the big network of functions
involved, it turns out that what was going on here is quite simple. The
LLD linking logic is actually very self-contained; it requires a few
flags from `link.File.OpenOptions`, but that's really about it. We
don't need any of the mutable state on `Elf`/`Coff`/`Wasm`, for
instance. There was some legacy code trying to handle support for using
self-hosted codegen with LLD, but that's not a supported use case, so
I've just stripped it out.
For now, I've just pasted the linking logic for the 3 targets for
which we currently support LLD into this new linker implementation,
`link.Lld`; however, it's almost certainly possible to combine some of
the logic and simplify this file a bit. But to be honest, it's not
actually that bad right now.
This commit ends up eliminating the distinction between `flush` and
`flushZcu` (formerly `flushModule`) in linkers, where the latter
previously meant something along the lines of "flush, but if you're
going to be linking with LLD, just flush the ZCU object file, don't
actually link". The distinction here doesn't seem like it was properly
defined, and most linkers seem to treat them as essentially identical
anyway. Regardless, all calls to `flushZcu` are gone now, so it's
deleted -- one `flush` to rule them all!
The end result of this commit and the preceding one is that LLVM and LLD
fit into the pipeline much more sanely:
* If we're using LLVM for the ZCU, that state is on `zcu.llvm_object`
* If we're using LLD to link, then the `link.File` is a `link.Lld`
* Calls to "ZCU link functions" (e.g. `updateNav`) lower to calls to
the LLVM object if it's available, or otherwise to the `link.File` if
it's available (neither is available under `-fno-emit-bin`); see the
sketch after this list
* After everything is done, linking is finalized by calling `flush` on
the `link.File`; for `link.Lld` this invokes LLD, for other linkers it
flushes self-hosted linker state
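For that third point, the check is now centralized; roughly (stub
types stand in for the real `llvm.Object`/`link.File` here, so this is
only illustrative, not the actual signatures):

```zig
// Stubs; the real functions also take a `Zcu.PerThread` and a `Nav`
// index, among other things.
const LlvmObject = struct {
    fn updateNav(_: *LlvmObject) void {}
};
const File = struct {
    fn updateNav(_: *File) void {}
};

// Shape of the dispatch now done once in `link.zig`, instead of inside
// every individual linker implementation:
fn updateNav(llvm_object: ?*LlvmObject, bin_file: ?*File) void {
    if (llvm_object) |o| return o.updateNav();
    if (bin_file) |lf| return lf.updateNav();
    // Neither exists under `-fno-emit-bin`; nothing to do.
}

test updateNav {
    var lf: File = .{};
    updateNav(null, &lf);
}
```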
There's one messy thing remaining, and that's how self-hosted function
codegen in a ZCU works; right now, we process AIR with a call sequence
something like this:
* `link.doTask`
* `Zcu.PerThread.linkerUpdateFunc`
* `link.File.updateFunc`
* `link.Elf.updateFunc`
* `link.Elf.ZigObject.updateFunc`
* `codegen.generateFunction`
* `arch.x86_64.CodeGen.generate`
So, we start in the linker, take a scenic detour through `Zcu`, go back
to the linker, into its implementation, and then... right back out, into
code which is generic over the linker implementation, and then dispatch
on the *backend* instead! Of course, within `arch.x86_64.CodeGen`, there
are some more places which switch on the `link` implementation being
used. This is all pretty silly... so it shall be my next target.
The main goal of this commit is to make it easier to decouple codegen
from the linkers by being able to do LLVM codegen without going through
the `link.File`; however, this ended up being a nice refactor anyway.
Previously, every linker stored an optional `llvm.Object`, which was
populated when using LLVM for the ZCU *and* linking an output binary;
and `Zcu` also stored an optional `llvm.Object`, which was used only
when we needed LLVM for the ZCU (e.g. for `-femit-llvm-bc`) but were not
emitting a binary.
This situation was incredibly silly. It meant there were N+1 places the
LLVM object might be instead of just 1, and it meant that every linker
had to start a bunch of methods by checking for an LLVM object, and just
dispatching to the corresponding method on *it* instead if it was not
`null`.
Instead, we now always store the LLVM object on the `Zcu` -- which makes
sense, because it corresponds to the object emitted by, well, the Zig
Compilation Unit! The linkers now mostly don't make reference to LLVM.
`Compilation` makes sure to emit the LLVM object if necessary before
calling `flush`, so it is ready for the linker. Also, all of the
`link.File` methods which act on the ZCU -- like `updateNav` -- now
check for the LLVM object in `link.zig` instead of in every single
individual linker implementation. Notably, the change to LLVM emit
improves this rather ludicrous call chain in the `-fllvm -flld` case:
* Compilation.flush
* link.File.flush
* link.Elf.flush
* link.Elf.linkWithLLD
* link.Elf.flushModule
* link.emitLlvmObject
* Compilation.emitLlvmObject
* llvm.Object.emit
Replacing it with this one:
* Compilation.flush
* llvm.Object.emit
...although we do currently still end up in `link.Elf.linkWithLLD` to do
the actual linking. The logic for invoking LLD should probably also be
unified at least somewhat; I haven't done that in this commit.
* The `codegen_nav`, `codegen_func`, `codegen_type` tasks are renamed to
`link_nav`, `link_func`, and `link_type`, to more accurately reflect
their purpose of sending data to the *linker*. Currently, `link_func`
remains responsible for codegen; this will change in an upcoming
commit.
* Don't go on a pointless detour through `PerThread` when linking ZCU
functions/`Nav`s: the `linkerUpdateNav` etc. logic now lives in
`link.zig`. Currently, `linkerUpdateFunc` is an exception, because it
has broader responsibilities including codegen, but this will be
solved in an upcoming commit.
Note that `openLoadArchive` already has linker script support.
With this change, I get a failure parsing a real archive in the
self-hosted ELF linker, rather than the previous behavior of getting an
error while trying to parse a pseudo-archive that is actually a load
script.
To my knowledge, the only platforms that actually *require* PIE are Fuchsia and
Android, and the latter *only* when building a dynamically-linked executable.
OpenBSD and macOS both strongly encourage using PIE by default, but it
isn't technically required there. So for those two platforms, we enable
it by default but don't enforce it.
Also, importantly, if we're building an object file or a static library, and the
user hasn't explicitly told us whether to build PIE or non-PIE code (and the
target doesn't require PIE), we should *not* default to PIE. Doing so produces
code that cannot be linked into non-PIE output. In other words, building an
object file or a static library as PIE is an optimization only to be done when
the user knows that it'll end up in a PIE executable in the end.
Closes #21837.
Linking ucrtbased.dll by default means that we produce binaries that
effectively only run on systems which have the Windows SDK installed,
because ucrtbased.dll is not redistributable and the Windows SDK is
what actually installs it into %SYSTEM32%. The resulting binaries also
can't run under Wine because Wine
does not provide ucrtbased.dll.
It is also inconsistent with our behavior for *-windows-gnu where we always link
ucrtbase.dll. See #23983, #24019, and #24053 for more details.
So just use ucrtbase.dll regardless of mode. With this change, we can also drop
the implicit definition of the _DEBUG macro in zig cc, which has in some cases
been problematic for users.
Users who want to opt into the old behavior can do so, both for *-windows-msvc
and *-windows-gnu, by explicitly passing -lucrtbased and -D_DEBUG. We might
consider adding a more ergonomic flag like -fdebug-crt to the zig build-* family
of commands in the future.
Closes #24052.
This commit introduces a new flag for `zig init` that generates a new
Zig project without comments, intended for users who are already
familiar with the Zig build system.
Additionally, the generated files are now different. Previously we
would generate a set of files that defined a static library and an
executable, which real-life experience has shown to cause confusion for
newcomers. The new template generates one Zig module and one
executable, both to accommodate the two most common use cases and to
suggest that a library could use a CLI tool (e.g. a parser library
could use a CLI tool that provides syntax checking) and, vice versa,
that a CLI tool might want to expose its core functionality as a Zig
module.
All references to C interoperability are removed from the template under
the assumption that if you're tall enough to do C interop, you're also
tall enough to find your way around the build system. Experienced users
will still be able to use the current template and adapt it with minimal
changes in order to perform more advanced operations. As an example, one
only needs to change `b.addExecutable` to `b.addLibrary` to switch from
generating an executable to a dynamic (or static) library.
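For illustration, the shape of the new template is roughly as follows;
this is a from-memory sketch of the `std.Build` API rather than the
literal generated files, and names like `"myproject"` are placeholders:

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // One Zig module, exposed so that other projects can
    // `@import("myproject")`.
    const mod = b.addModule("myproject", .{
        .root_source_file = b.path("src/root.zig"),
        .target = target,
        .optimize = optimize,
    });

    // One executable that imports the module; swapping `addExecutable`
    // for `addLibrary` is roughly all it takes to produce a library
    // artifact instead.
    const exe = b.addExecutable(.{
        .name = "myproject",
        .root_module = b.createModule(.{
            .root_source_file = b.path("src/main.zig"),
            .target = target,
            .optimize = optimize,
            .imports = &.{.{ .name = "myproject", .module = mod }},
        }),
    });
    b.installArtifact(exe);
}
```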
`castTruncatedData` was a poorly worded error (all shrinking casts
"truncate bits"; it's just that we assume those bits to be the
zext/sext of the other bits!), and `negativeToUnsigned` was a pointless
distinction which forced the compiler to emit worse code (since two
separate safety checks were required for casting e.g. `i32` to `u16`)
and wasn't even implemented correctly. This commit combines those
safety panics into one
function, `integerOutOfBounds`. The name maybe isn't perfect, but that's
not hugely important; what matters is the new default message, which is
clearer than the old ones: "integer does not fit in destination type".
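For example, with runtime safety enabled (the function and test below
are just an illustration; the panic message is the one quoted above):

```zig
const std = @import("std");

fn toU16(x: i32) u16 {
    // Any value outside [0, 65535] -- negative or too large -- now
    // triggers the single safety panic
    // "integer does not fit in destination type".
    return @intCast(x);
}

test toU16 {
    try std.testing.expectEqual(@as(u16, 123), toU16(123));
}
```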