LLVM backend: generate DIGlobalVariables for non-function globals and
rename linkage names when exporting functions and globals.
zig_llvm.cpp: add some wrappers to convert a handful of DI classes
into DINodes, since DIGlobalVariable is not a DIScope like the others.
zig_llvm.cpp: add some wrappers to allow replacing the LinkageName of
DISubprogram and DIGlobalVariable.
zig_llvm.cpp: fix DI class mixup causing nonsense reinterpret_cast.
The end result is that GDB is now usable: you no longer need to manually
cast every global or fully qualify every export.
Introduce `Module.ensureFuncBodyAnalyzed` and corresponding `Sema`
function. This mirrors `ensureDeclAnalyzed` except also waits until the
function body has been semantically analyzed, meaning that inferred
error sets will have been populated.
Resolving error sets can now emit an "unable to resolve inferred error
set" error instead of producing an incorrect error set type. Resolving
error sets now calls `ensureFuncBodyAnalyzed`. Closes #11046.
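To illustrate why body analysis matters here, a minimal user-level sketch
(the function and names are made up, not from the compiler source): the
error set below is inferred from the function body, so it cannot be
resolved until that body has been semantically analyzed.
```zig
const std = @import("std");

// The return type `!u8` has an inferred error set: it is the union of every
// error the body can return, so resolving it requires analyzing the body.
fn parseByte(s: []const u8) !u8 {
    if (s.len == 0) return error.Empty;
    return std.fmt.parseInt(u8, s, 10);
}

test "inferred error set" {
    try std.testing.expectError(error.Empty, parseByte(""));
}
```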
`coerceInMemoryAllowedErrorSets` now does a lot more work to avoid
resolving an inferred error set if possible. Same with
`wrapErrorUnionSet`.
Inferred error set types no longer check the `func` field to determine if
they are equal. That was incorrect because an inline or comptime function
call produces a unique error set which has the same `*Module.Fn` value for
this field. Instead we use the `*Module.Fn.InferredErrorSet` pointers to
test equality of inferred error sets.
This implements type equality for error sets. This is done
through element-wise error set comparison.
Inferred error sets are always distinct types and other error sets are
always sorted. See #11022.
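A small sketch of the resulting semantics (assuming the element-wise
comparison described above): explicitly written error sets with the same
members are the same type, regardless of declaration order.
```zig
const assert = @import("std").debug.assert;

const A = error{ OutOfMemory, Overflow };
const B = error{ Overflow, OutOfMemory };

comptime {
    // Members are sorted before the element-wise comparison,
    // so declaration order does not matter.
    assert(A == B);
}
```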
`Module.Union.getLayout` now additionally returns a `padding` field
which tells how many bytes are between the final field end offset and
the ending offset of the union. This is used by the LLVM backend to
explicitly insert padding.
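For intuition, a user-level example of the kind of tail padding being
described. It uses an extern union for simplicity (the sizes assume a
target where `u32` has 4-byte alignment); `getLayout` itself deals with
Zig unions, but the principle is the same.
```zig
// The largest field ends at byte offset 5, but the union's alignment is 4,
// so the ABI size rounds up to 8: three bytes of padding follow the field data.
const U = extern union {
    bytes: [5]u8,
    word: u32,
};

comptime {
    const assert = @import("std").debug.assert;
    assert(@alignOf(U) == 4);
    assert(@sizeOf(U) == 8);
}
```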
LLVM backend: lowering of unions now inserts additional padding so that
the ABI size LLVM computes internally matches the ABI size Zig wants
unions to have. This is an alternative to calling LLVMABISizeOfType
and LLVMABIAlignmentOfType, which crash when recursive struct
definitions come into play. We no longer call these two functions anywhere,
and the bindings are deleted to avoid future footgun firings.
LLVM backend: lowering of unions now represents untagged unions
consistently. Before it was tripping an assertion.
LLVM backend: switch cases call inttoptr on the case items and condition
if necessary. Prevents tripping an LLVM assertion.
After this commit, we are no longer tripping over any LLVM assertions.
Similar to how Type.eql was reworked in the previous commit, this commit
reworks Type.hash to check all the different kinds of tags that a Type
can be represented with. It also completes the implementation for all
types except error sets, which need to have Type.eql enhanced as well.
Do the fallible logic in Sema where we have access to error reporting
mechanisms, rather than in Type/Value.
We can't just make a best guess when resolving queries like "is this type
comptime-only?" or "what is the ABI alignment of this field?". The
result needs to be accurate. So we need to keep the assertions that the
data is available active, and instead compute the necessary information
before such functions get called.
Unfortunately we are stuck with two versions of such functions because
the various backends need to be able to ask such queries of Types and
Values while assuming the result has already been computed and validated
by Sema.
The ZIR instruction `union_init_ptr` is renamed to `union_init`.
I made it always use by-value semantics for now, not taking the time to
invest in result location semantics, in case we decide to change the
rules for unions. This way is much simpler.
There is a new AIR instruction: union_init. This is for a comptime-known
tag with a runtime-known field value.
vector_init is renamed to aggregate_init, which solves a TODO comment.
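In source terms, the case the new union_init instruction covers looks
roughly like this (the union and function here are made up for
illustration): the active tag is known at compile time, but the field
value is only known at runtime.
```zig
const std = @import("std");

const Value = union(enum) {
    int: u32,
    float: f32,
};

// The active tag (`int`) is comptime-known; `x` is runtime-known.
fn makeInt(x: u32) Value {
    return Value{ .int = x };
}

test "union init with runtime operand" {
    try std.testing.expectEqual(@as(u32, 5), makeInt(5).int);
}
```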
This implements #10113 for the self-hosted compiler only. It removes the
ability to override alignment of packed struct fields, and removes the
ability to put pointers and arrays inside packed structs.
After this commit, nearly all the behavior tests pass for the stage2 llvm
backend that involve packed structs.
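A quick sketch of what remains valid under the new packed struct rules:
fields are bit-packed into a backing integer, while pointer fields, array
fields, and per-field `align` overrides are rejected.
```zig
// Bit-packed into an 8-bit backing integer: bool (1) + u2 (2) + u5 (5).
const Flags = packed struct {
    enabled: bool,
    mode: u2,
    reserved: u5,
};

comptime {
    @import("std").debug.assert(@bitSizeOf(Flags) == 8);
}
```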
I didn't implement the compile errors or compile error tests yet. I'm
waiting until we have stage2 building itself and then I want to rework
the compile error test harness with inspiration from Vexu's arocc test
harness. At that point it should be a much nicer dev experience to work
on compile errors.
We now correctly implement exporting decls. This means it is possible to export
a decl with a different name than the decl that is doing the export.
This also sets the correct flags on the symbols, so when we emit a relocatable
object file, a linker can correctly resolve each symbol and/or export it to the
host environment.
This commit also includes fixes to ensure relocations carry the offset that
other linkers expect, rather than the one we use internally: other linkers
expect the offset relative to the section, whereas internally we use an
offset relative to the atom.
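At the language level, the feature looks like this (a minimal sketch; the
names `add` and `my_add` are made up, and the `@export` call uses the
builtin signature in use at the time): the decl `add` is exported under an
unrelated symbol name.
```zig
fn add(a: i32, b: i32) callconv(.C) i32 {
    return a + b;
}

comptime {
    // Export the decl under a different symbol name than the decl itself.
    @export(add, .{ .name = "my_add", .linkage = .Strong });
}
```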
Prior to this commit, the AIR arg instruction kept a reference to a ZIR
string index for the corresponding parameter name. This is used by DWARF
emitting code. However, this is a design flaw because we want AIR
objects to be independent from ZIR.
This commit saves the parameter names into memory managed by
`Module.Fn`. This is sub-optimal because we should be able to get the
parameter names from the ZIR for a function without having them
redundantly stored along with `Fn` memory. However the current way that
ZIR param instructions are encoded does not support this case. They
appear in the same ZIR body as the function instruction, just before it.
Instead, they should be embedded within the function instruction, which
will allow this TODO to be solved. That improvement is too big for this
commit, however.
After this there is one last dependency to untangle, which is for inline
assembly. The issue for that is #10784.
For some errors, if the found token is not on the same line as
the previous token, point to the end of the previous token instead.
This usually results in more helpful error messages.
When Sema sees a store_node instruction, it now checks for
the possibility of this pattern:
%a = ret_ptr
%b = store(%a, %c)
Where %c is an error union. In that case, the operand's errors must be added
to the current function's inferred error set, if any.
Coercion from one error union type to another is handled ideally when the
operand is comptime-known: the value is unwrapped appropriately, then
wrapped again in the destination type.
In the future, coercion from error union to error union should do the
same thing for a runtime value, emitting a runtime branch to check
whether the value is an error or not.
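A sketch of the comptime-known case (the type names are illustrative): the
payload is unwrapped from the source error union and re-wrapped in the
destination one, which is valid because `E1` is a subset of `E2`.
```zig
const E1 = error{NotFound};
const E2 = error{ NotFound, OutOfMemory };

// Comptime-known operand: unwrap the payload (or error), then wrap it
// again as the destination error union type.
const src: E1!u8 = 42;
const dst: E2!u8 = src;

comptime {
    @import("std").debug.assert((dst catch unreachable) == 42);
}
```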
`Value.arrayLen` for structs returns the number of fields. This is so
that Liveness can use it for the `vector_init` instruction (soon to be
renamed to `aggregate_init`).
For example, a situation like this is allowed:
```zig
extern "c" var stderrp: c_int;
```
In this case, `Module.Var` wrapping `stderrp` will have `lib_name`
populated with the library name where this import is expected.
`ExternFn` will contain an optional lib name if it was defined with
the `extern` keyword, like so:
```zig
extern "c" fn write(usize, usize, usize) usize;
```
`lib_name` will live as long as the `ExternFn` decl does.
Clarify that `astgen.advanceSourceCursor` already increments the absolute
line and column numbers; `GenZir.calcLine` is thus not only obsolete but
wrong by design.
Incidentally, this cleanup allows the `FnDecl` line numbers used for DWARF
to be specified correctly as values relative to the start of the parent
`Decl`. That `Decl` in turn has its line number information specified
relative to its parent `Decl`, and so on, until we reach the global scope.
AstGen:
* rename the known_has_bits flag to known_non_opv to better reflect
what it actually means.
* add a known_comptime_only flag.
* make the flags take advantage of identifiers of primitives and the
fact that Zig has no shadowing.
* correct the known_non_opv flag for function bodies.
Sema:
* Rename `hasCodeGenBits` to `hasRuntimeBits` to better reflect what it
does.
- This function got a bit more complicated in this commit because of
the duality of function bodies: on one hand they have runtime bits,
but on the other hand they require being comptime known.
* WipAnonDecl now takes a LazySrcDecl parameter and performs the type
resolutions that it needs during finish().
* Implement comptime `@ptrToInt` (see the sketch after this list).
Codegen:
* Improved handling of lowering decl_ref; make it work for
comptime-known ptr-to-int values.
- This same change had to be made many different times; perhaps we
should look into merging the implementations of `genTypedValue`
across x86, arm, aarch64, and riscv.
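A minimal sketch of the comptime `@ptrToInt` behavior mentioned above (the
address is arbitrary; the builtin names are the ones in use at the time):
```zig
comptime {
    // A pointer constructed from a fixed integer is comptime-known,
    // so converting it back with @ptrToInt can also happen at comptime.
    const p = @intToPtr(*u8, 0x1000);
    @import("std").debug.assert(@ptrToInt(p) == 0x1000);
}
```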
This commit updates stage2 to enforce the property that the syntax
`fn()void` is a function *body* not a *pointer*. To get a pointer, the
syntax `*const fn()void` is required.
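In user code, the distinction looks like this (a small sketch): `fn () void`
names a function body type, which is comptime-only, while `*const fn () void`
is an ordinary runtime value.
```zig
fn hello() void {}

// A function body type: values of this type must be comptime-known.
const Body = fn () void;

// A pointer to a function body: this can be a runtime-known value.
var fn_ptr: *const fn () void = &hello;

test "call through a function pointer" {
    fn_ptr();
}
```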
ZIR puts function alignment into the func instruction rather than the
decl because this way it makes it into function types. LLVM backend
respects function alignments.
Struct and Union have methods `fieldSrcLoc` to help look up source
locations of their fields. These trigger full loading, tokenization, and
parsing of source files, so should only be called once it is confirmed
that an error message needs to be printed.
There are some nice new error hints for explaining why a type is
required to be comptime, particularly for structs that contain function
body types.
`Type.requiresComptime` is now moved into Sema because it can fail and
might need to trigger field type resolution. Comptime pointer loading
takes into account types that do not have a well-defined memory layout
and does not try to compute a byte offset for them.
`fn()void` syntax no longer secretly makes a pointer. You get a function
body type, which requires comptime. However a pointer to a function body
can be runtime known (obviously).
Compile errors that report "expected pointer, found ..." are factored
out into convenience functions `checkPtrOperand` and `checkPtrType` and
have a note about function pointers.
Implemented `Value.hash` for functions, enum literals, and undefined values.
stage1 is not updated to this (yet?), so some workarounds and disabled
tests are needed to keep everything working. Should we update stage1 to
these new type semantics? Yes, probably, because I don't want to add too
much conditional compilation logic in the std lib for the different
backends.
When asking a struct or union whether the type requires comptime, it may
need to ask itself recursively, for example because of a field which is
a pointer to itself. This commit adds a field to each to track when the
"requires comptime" computation is in progress, and returns `false` if the
check is already ongoing.
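For example (a hypothetical type): answering "does `Node` require comptime?"
must look at the `next` field, whose child type is `Node` again; the new
in-progress flag stops that recursion.
```zig
const Node = struct {
    // Pointer back to the same struct: resolving "requires comptime"
    // for Node re-encounters Node through this field.
    next: ?*const Node,
    data: u32,
};

test "self-referential struct is a runtime type" {
    const head = Node{ .next = null, .data = 1 };
    _ = head;
}
```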
Previously, breaking from an outer block at comptime would result in
incorrect control flow. Now there is a mechanism, `error.ComptimeBreak`,
similar to `error.ComptimeReturn`, to send comptime control flow further
up the stack, to its matching block.
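A sketch of the pattern this fixes: a `break` that targets an outer labeled
block from inside a nested loop, evaluated at comptime (a container-level
initializer is comptime-evaluated).
```zig
const x = blk: {
    var i: u32 = 0;
    while (true) {
        // Break out of the *outer* block, not just the loop.
        if (i == 3) break :blk i;
        i += 1;
    }
};

comptime {
    @import("std").debug.assert(x == 3);
}
```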
This commit also introduces a new log scope. To use it, pass
`--debug-log sema_zir` and you will see 1 line per ZIR instruction
semantically analyzed. This is useful when you want to understand what
comptime control flow is doing while debugging the compiler.
One more `switch` test case is passing.
It is the job of codegen backends to mark Decls that are referenced as
alive so that the frontend does not sweep them with the garbage. This
commit unifies the code between the backends with an added method on
Decl.
The implementation is more complete than before, switching on the Decl
val tag and recursing into sub-values.
As a result, two more array tests are passing.
`runtime_param_index` is used to get the parameter type from `fn_type`,
but this variable was not incremented for zero-sized parameters, so two
zero-sized parameters of different types caused a miscompilation.
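The bug class being fixed, sketched at the call-site level (the function
is made up): `a` and `b` are both zero-sized but have different types, and
the parameter index must still advance past them so that `c` gets the
right type.
```zig
const std = @import("std");

fn f(a: void, b: u0, c: u32) u32 {
    _ = a;
    _ = b;
    return c;
}

test "zero-sized parameters of different types" {
    try std.testing.expectEqual(@as(u32, 7), f({}, 0, 7));
}
```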
resolveTypeForCodegen is called when we need to resolve a type fully,
even through pointers. This commit fully implements this, including
pointer fields on structs and unions.
The function has also been renamed to resolveTypeFully.
Previously the code asserted that source files were already loaded, but this
is not the case when cached ZIR is loaded. Now the .zig source code is loaded
on demand so that the source can be hashed for `CacheMode.whole`.
This additionally refactors stat_size, stat_inode, and stat_mtime fields
into using the `Cache.File.Stat` struct.
C source files are included in the cache manifest when using
`CacheMode.whole`: I verified that `addDepFilePost` does in fact include
the original C source file in addition to the files it depends on.
The two CacheMode values are `whole` and `incremental`.
`incremental` is what we had before; `whole` is new.
Whole cache mode uses everything as inputs to the cache hash;
and when a hit occurs it skips everything including linking.
This is ideal for when source files change rarely and for backends that
do not have good incremental compilation support, for example
compiler-rt or libc compiled with LLVM with optimizations on.
This is the main motivation for the additional mode, so that we can have
LLVM-optimized compiler-rt/libc builds, without waiting for the LLVM
backend every single time Zig is invoked.
Incremental cache mode hashes only the input file path and a few target
options, intentionally relying on collisions to locate already-existing
build artifacts which can then be incrementally updated.
The bespoke logic for caching stage1 backend build artifacts
is removed since we now have a global caching mechanism for
when we want to cache the entire compilation, *including* linking.
Previously we had to get "creative" with libs.txt and a special
byte in the hash id to communicate flags, so that when the cached
artifacts were re-linked, we had this information from stage1
even though we didn't actually run it. Now that `CacheMode.whole`
includes linking, this extra information does not need to be
preserved for cache hits. So although this changeset introduces
complexity, it also removes complexity.
The main trickiness here comes from the inherent differences between the
two modes: `incremental` wants a directory immediately to operate on,
while `whole` doesn't know the output directory until the compilation is
complete. This commit deals with this problem mostly inside `update()`,
where, on a cache miss, it replaces `zig_cache_artifact_directory` with a
temporary directory, and then renames it into place once the compilation is
complete.
Items remaining before this branch can be merged:
* [ ] make sure these things make it into the cache manifest:
- @import files
- @embedFile files
- we already add dep files from C, but make sure the main .c files make
it in there too, not just the included files
* [ ] double check that the emit paths of other things besides the binary
are working correctly.
* [ ] test `-fno-emit-bin` + `-fstage1`
* [ ] test `-femit-bin=foo` + `-fstage1`
* [ ] implib emit directory copies bin_file_emit directory in create() and needs
to be adjusted to be overridden as well.
* [ ] make sure emit-h is handled correctly in the cache hash
* [ ] Cache: detect duplicate files added to the manifest
Some preliminary performance measurements of wall clock time and
peak RSS used:
stage1 behavior (1077 tests), llvm backend, release build:
* cold global cache: 4.6s, 1.1 GiB
* warm global cache: 3.4s, 980 MiB
stage2 master branch behavior (575 tests), llvm backend, release build:
* cold global cache: 0.62s, 191 MiB
* warm global cache: 0.40s, 128 MiB
stage2 this branch behavior (575 tests), llvm backend, release build:
* cold global cache: 0.62s, 179 MiB
* warm global cache: 0.27s, 90 MiB
* `Module.Union.getLayout`: fixes to support components of the union
being 0 bits.
* Implement `@typeInfo` for unions (see the sketch below).
* Add missing calls to `resolveTypeFields`.
* Fix explicitly-provided union tag types passing a `Zir.Inst.Ref`
where an `Air.Inst.Ref` was expected. We don't have any type safety
for this; these types are aliases.
* Fix explicitly-provided `union(enum)` tag Values being allocated in the
wrong arena.
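A short sketch of the newly supported `@typeInfo` call on a union (the
`Number` union is made up; the `.Union` field casing follows the
`std.builtin` of the time):
```zig
const std = @import("std");

const Number = union(enum) {
    int: i32,
    float: f64,
};

comptime {
    const info = @typeInfo(Number).Union;
    // Two fields, in declaration order.
    std.debug.assert(info.fields.len == 2);
    std.debug.assert(std.mem.eql(u8, info.fields[0].name, "int"));
}
```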