* add `Module.setBlockBody` and related functions
* redo astgen for `and` and `or` to use fewer ZIR instructions and
require less processing for comptime known values
* Sema: rework `analyzeBody` function. See the new doc comments in this
commit. Divides ZIR instructions up into 3 categories:
- always noreturn
- never noreturn
- sometimes noreturn
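As a rough illustration of that split, here is a hypothetical, heavily simplified sketch (not the actual `Sema.analyzeBody`; the tags and the `callIsNoReturn` helper are made up for the example):
```zig
const std = @import("std");

/// Stand-in instruction tags, one from each category.
const Tag = enum { ret, add, call };

/// Returns true if the body ended at a noreturn instruction.
fn analyzeBody(body: []const Tag) bool {
    for (body) |tag| {
        switch (tag) {
            // Always noreturn: analyzing this instruction ends the body.
            .ret => return true,
            // Never noreturn: analyze it and keep going.
            .add => {},
            // Sometimes noreturn: depends on the analyzed result,
            // modeled here by a stub helper.
            .call => if (callIsNoReturn()) return true,
        }
    }
    return false;
}

fn callIsNoReturn() bool {
    // Stub: the real check inspects the resolved return type.
    return false;
}

test "analyzeBody stops at an always-noreturn instruction" {
    try std.testing.expect(analyzeBody(&.{ .add, .ret, .add }));
    try std.testing.expect(!analyzeBody(&.{ .add, .call }));
}
```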
The current plan is to avoid using async and related features in the
stage2 compiler so that we can bootstrap before implementing them.
Having this untested and incomplete code in the codebase increases
friction while working on stage2, in particular when performing
larger refactors such as the current ZIR memory layout rework.
Therefore, remove all async-related code, leaving only error messages
in astgen.
These were previously implemented as a sub/sub_wrap instruction with a
lhs of 0. Making these separate instructions, however, allows us to
save some memory, since there is no need to store an lhs.
* free Module.Fn ZIR code when destroying the owner Decl
* unreachable_safe and unreachable_unsafe are collapsed into one ZIR
instruction with a safety flag.
* astgen: emit an unreachable instruction for unreachable literals
* don't forget to call deinit on ZIR code
* astgen: implement some builtin functions
We are now passing this test:
```zig
export fn _start() noreturn {}
```
```
test.zig:1:30: error: expected noreturn, found void
```
I ran into an issue where we get an integer overflow trying to compute
node index offsets from the containing Decl. The problem is that the
parser adds the Decl node after adding the child nodes. For some things
it is easy to reserve the node index and then set it later; however, for
this case it is not a trivial code change, because whether we want to
add a new node or not depends on the tokens that come after the decl.
Possible strategies here:
1. Rework the parser code to make sure that Decl nodes are before
children nodes in the AST node array.
2. Use signed integers for Decl node offsets.
3. Just flip the order of subtraction and addition, and expect the Decl
node index to be greater than the child node indexes.
I opted for (3) because it seems like the simplest thing to do. We'll
want to unify the logic for computing the offsets though because if the
logic gets repeated, it will probably get repeated wrong.
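As a sketch of what (3) boils down to (hypothetical helper names, not the actual code):
```zig
const std = @import("std");

/// The Decl node is added after its children, so its index is the larger
/// one and this subtraction cannot underflow an unsigned integer.
fn nodeIndexToRelative(decl_node: u32, child_node: u32) u32 {
    return decl_node - child_node;
}

/// Recover the absolute node index from the stored offset.
fn relativeToNodeIndex(decl_node: u32, offset: u32) u32 {
    return decl_node - offset;
}

test "offsets round-trip" {
    const decl_node: u32 = 10;
    const child_node: u32 = 7;
    const off = nodeIndexToRelative(decl_node, child_node);
    try std.testing.expectEqual(child_node, relativeToNodeIndex(decl_node, off));
}
```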
There are some `@panic("TODO")` in there but I'm trying to get the
branch to the point where collaborators can jump in.
Next up is reworking the seam between the LazySrcLoc values emitted by
Sema and the absolute file byte offsets currently expected by codegen.
And then the big one: updating astgen.zig to use the new memory layout.
The memory layout for ZIR instructions is completely reworked. See
zir.zig for those changes. Some new types:
* `zir.Code`: a "finished" set of ZIR instructions. Instead of allocating
each instruction independently, there is now a Tag and 8 bytes of
data available for all ZIR instructions. Small instructions fit
within these 8 bytes; larger ones use 4 bytes for an index into
`extra`. There is also `string_bytes` so that we can have 4 byte
references to strings. `zir.Inst.Tag` describes how to interpret
those 8 bytes of data.
- This is shared by all `Block` scopes.
* `Module.WipZirCode`: represents an in-progress `zir.Code`. In this
structure, the arrays are mutable, and get resized as we add/delete
things. There is extra state to keep track of things. This struct is
stored on the stack. Once it is finished, it produces an immutable
`zir.Code`, which will remain on the heap for the duration of a
function's existence.
- This is shared by all `GenZir` scopes.
* `Sema`: represents in-progress semantic analysis of a `zir.Code`.
This data is stored on the stack and is shared among all `Block`
scopes. It is now the main "self" argument to everything in the file
that was previously named `zir_sema.zig`.
Additionally, I moved some logic that was in `Module` into here.
`Module.Fn` now stores its parameter names inside the `zir.Code`,
instead of inside ZIR instructions. When the TZIR memory layout
reworking time comes, codegen will be able to reference this data
directly instead of duplicating it.
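For illustration, here is a rough sketch of the kind of layout described for `zir.Code` above, along with its in-progress counterpart. This is hypothetical, simplified code; the real definitions live in zir.zig and Module.zig and differ in detail:
```zig
const std = @import("std");

/// Simplified sketch of a finished `Code`: one tag plus 8 bytes of data
/// per instruction, larger payloads spilled into `extra`, strings
/// referenced as 4-byte offsets into `string_bytes`.
pub const Code = struct {
    /// Parallel to `data`; says how the 8 data bytes are interpreted.
    tags: []const Inst.Tag,
    /// Exactly 8 bytes per instruction.
    data: []const Inst.Data,
    /// Instructions that need more than 8 bytes store an index into here.
    extra: []const u32,
    /// Strings are referenced as offsets into this buffer.
    string_bytes: []const u8,

    pub const Inst = struct {
        pub const Tag = enum(u8) {
            add,
            decl_ref,
            call,
        };

        /// 8 bytes of per-instruction data; the active field is implied
        /// by the tag, so no union tag has to be stored.
        pub const Data = union {
            /// Two operand instructions, e.g. for `add`.
            bin: struct { lhs: u32, rhs: u32 },
            /// A string stored in `string_bytes`, e.g. for `decl_ref`.
            str: struct { start: u32, len: u32 },
            /// A small header plus an index into `extra`.
            pl: struct { src_node: i32, payload_index: u32 },
        };
    };
};

/// Sketch of the in-progress counterpart: the same arrays, but growable,
/// living on the stack while a function's ZIR is built, and turned into
/// an immutable `Code` when finished.
pub const WipZirCode = struct {
    tags: std.ArrayListUnmanaged(Code.Inst.Tag),
    data: std.ArrayListUnmanaged(Code.Inst.Data),
    extra: std.ArrayListUnmanaged(u32),
    string_bytes: std.ArrayListUnmanaged(u8),
};
```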
astgen.zig is (so far) almost entirely untouched, but nearly all of it
will need to be reworked to adhere to this new memory layout structure.
I have no benchmarks to report yet, as I am still working through
compile errors and fixing various things that I broke in this branch.
Overhaul of Source Locations:
Previously we used `usize` everywhere to mean byte offset, but sometimes
it also meant other things. This was error-prone, made us do unnecessary
work, and stored unnecessary bytes in memory.
Now there are more types involved in source locations, and more ways
to describe a source location.
* AllErrors.Message: embrace the assumption that files always have fewer
than 2^32 bytes.
* SrcLoc gets more complicated, to model more complicated source
locations.
* Introduce LazySrcLoc, which can model interesting source locations
with very little stored state. Useful for avoiding doing unnecessary
work when no compile errors occur.
Also, previously, we had `src: usize` on every ZIR instruction. This is
no longer the case. Each instruction now determines whether it even cares
about source location, and if so, how that source location is stored.
This requires more careful work inside `Sema`, but it results in fewer
bytes stored on the heap, without compromising accuracy and power of
compile error messages.
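As a rough illustration of the LazySrcLoc idea (hypothetical variants; the real type in the branch may differ):
```zig
/// A source location that is cheap to store and only resolved to an
/// absolute byte offset if a compile error is actually reported.
pub const LazySrcLoc = union(enum) {
    /// This instruction can never be the subject of an error message,
    /// so no location needs to be stored at all.
    unneeded,
    /// An AST node offset from the owner Decl's node.
    node_offset: u32,
    /// A token offset from the owner Decl's first token.
    token_offset: u32,
    /// An absolute byte offset, for the rare cases that need one.
    byte_abs: u32,
};
```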
Miscellaneous:
* std.zig: string literals have more helpful result values for
reporting errors. There is now a lower level API and a higher level
API.
- side note: I noticed that the string literal logic needs some love.
There is some unnecessarily hacky code there.
* cut & pasted some TZIR logic that was in zir.zig to ir.zig. This
probably broke stuff and needs to get fixed.
* Removed type/Enum.zig, type/Union.zig, and type/Struct.zig. I don't
think this is quite how this code will be organized. We need some more
careful planning about how to implement structs, unions, and enums. They
need to be independent Decls, just like a top-level function.
Conflicts:
* lib/std/zig/ast.zig
* lib/std/zig/parse.zig
* lib/std/zig/parser_test.zig
* lib/std/zig/render.zig
* src/Module.zig
* src/zir.zig
I resolved some of the conflicts by reverting a small portion of
@tadeokondrak's stage2 logic here regarding `callconv(.Inline)`.
It will need to get reworked as part of this branch.
This commit does not reach any particular milestone; it is
work-in-progress towards getting things to build.
There's a `@panic("TODO")` in translate-c that should be removed when
working on translate-c stuff.
The astgen for switch expressions did not respect the ZIR rules of only
referencing instructions that are in scope:
```
%14 = block_comptime_flat({
  %15 = block_comptime_flat({
    %16 = const(TypedValue{ .ty = comptime_int, .val = 1})
  })
  %17 = block_comptime_flat({
    %18 = const(TypedValue{ .ty = comptime_int, .val = 2})
  })
})
%19 = block({
  %20 = ref(%5)
  %21 = deref(%20)
  %22 = switchbr(%20, [%15, %17], {
    %15 => {
      %23 = const(TypedValue{ .ty = comptime_int, .val = 1})
      %24 = store(%10, %23)
      %25 = const(TypedValue{ .ty = void, .val = {}})
      %26 = break("label_19", %25)
    },
    %17 => {
      %27 = const(TypedValue{ .ty = comptime_int, .val = 2})
      %28 = store(%10, %27)
      %29 = const(TypedValue{ .ty = void, .val = {}})
      %30 = break("label_19", %29)
    }
  }, {
    %31 = unreachable_safe()
  }, special_prong=else)
})
```
In this snippet you can see that the comptime expr referenced %15 and
%17, which are not in scope. There was also no test coverage for runtime
switch expressions.
Switch expressions will have to be re-introduced following these rules,
and with some test coverage. There is some usable code being deleted in
this commit; it will be useful to reference when re-implementing switch
later.
A few more improvements to do while we're at it:
* only use the .ref result loc on the switch target if any prongs obtain
the payload with `|*capture|` syntax
- this improvement should be done to if, while, and for as well.
- this will remove the needless ref/deref instructions above
* remove switchbr and add switch_block, which is both a block and a
switch branch.
- similarly we should remove loop and add loop_block.
This commit introduces a `force_comptime` flag into the GenZIR
scope. The main purpose of this will be to choose the comptime
variants of certain key ZIR instructions, such as function calls and
branches. We will be moving away from using the block_comptime_flat
ZIR instruction, and eventually deleting it.
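A tiny hypothetical sketch of the idea (names and tags are illustrative; the real scope carries much more state):
```zig
/// The astgen scope carries a force_comptime flag, and instruction
/// emission picks the comptime variant of a tag when the flag is set.
const GenZir = struct {
    force_comptime: bool,

    const Tag = enum { call, call_comptime, block, block_comptime };

    fn callTag(gz: GenZir) Tag {
        return if (gz.force_comptime) .call_comptime else .call;
    }
};
```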
This commit also contains miscellaneous fixes to this branch that bring
it to the state of passing all the tests.
Peer type resolution and type coercion now happen on the break
instruction operands. This involves a new TZIR instruction,
br_block_flat, which represents a break instruction where the operand is
the result of a flat block. See the doc comments on the instructions for
more details.
How it works: when adding break instructions in semantic analysis, the
underlying allocation is slightly padded so that it is the size of a
br_block_flat instruction, which allows the break instruction to later
be converted without removing instructions inside the parent body. The
extra type coercion instructions go into the body of the br_block_flat,
and backends are responsible for dispatching the instruction correctly
(it should map to the same function calls for related instructions).
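One way to picture the size guarantee is with a union; this is illustrative only (the actual implementation pads the raw allocation of the break instruction rather than using a union), and the types here are made up:
```zig
const Inst = struct { tag: enum { br, br_block_flat } };

const Br = struct { base: Inst, operand: *Inst };
const BrBlockFlat = struct { base: Inst, body: []const *Inst };

/// Storage big enough for either form: a `br` written into it can later
/// be rewritten in place as a `br_block_flat` without moving it or
/// disturbing the rest of the parent body.
const PaddedBr = union {
    br: Br,
    br_block_flat: BrBlockFlat,
};

/// In-place conversion: overwrite the padded storage with the larger
/// form, putting the extra coercion instructions into its body.
fn convertToBrBlockFlat(padded: *PaddedBr, body: []const *Inst) void {
    padded.* = .{ .br_block_flat = .{
        .base = .{ .tag = .br_block_flat },
        .body = body,
    } };
}

comptime {
    // The union is at least as large as the larger of the two forms.
    if (@sizeOf(PaddedBr) < @sizeOf(BrBlockFlat)) unreachable;
}
```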
Motivating test case:
```zig
export fn _start() noreturn {
var x: u64 = 1;
var y: u32 = 2;
var thing: u32 = 1;
const result = if (thing == 1) x else y;
exit();
}
```
The main idea here is for astgen to output ideal ZIR depending on
whether or not the sub-expressions of a block consume the result
location. Here, neither `x` nor `y` consume the result location of the
conditional expression block, and so the ZIR should communicate the
result of the condbr using break instructions, not with the result
location pointer.
With this commit, this is accomplished:
```
%22 = alloc_inferred()
%23 = block({
  %24 = const(TypedValue{ .ty = type, .val = bool})
  %25 = deref(%18)
  %26 = const(TypedValue{ .ty = comptime_int, .val = 1})
  %27 = cmp_eq(%25, %26)
  %28 = as(%24, %27)
  %29 = condbr(%28, {
    %30 = deref(%4)
    < there is no longer a store instruction here >
    %31 = break("label_23", %30)
  }, {
    %32 = deref(%11)
    < there is no longer a store instruction here >
    %33 = break("label_23", %32)
  })
})
%34 = store_to_inferred_ptr(%22, %23) <-- the store is only here
%35 = resolve_inferred_alloc(%22)
```
However if the result location gets consumed, the break instructions
change to break_void, and the result value is communicated only by the
stores, not by the break instructions.
Implementation:
* The GenZIR scope used by conditional branches now has an optional
result location pointer field and a count of how many times the
result location ended up being an rvalue (not consumed).
* When rvalue() is called on a result location for a block, it
increments this counter. After generating the branches, astgen for the
conditional branch checks this count; if it is 2, the
store_to_block_ptr instructions are elided and rvalue() is called on
the block result (which accounts for peer type resolution on the
break operands).
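A hypothetical sketch of the counting mechanism (simplified; field and function names are made up):
```zig
/// Each branch that does not consume the block's result location pointer
/// bumps rvalue_count; if every branch ended up as an rvalue, the
/// store_to_block_ptr instructions can be elided and the value flows
/// through the break operands instead, getting peer type resolution.
const BlockScope = struct {
    /// Result location pointer for the block, if any.
    rl_ptr: ?u32 = null,
    /// Incremented by rvalue() for each branch that did not consume rl_ptr.
    rvalue_count: u32 = 0,

    fn shouldElideStores(scope: BlockScope, branch_count: u32) bool {
        return scope.rvalue_count == branch_count;
    }
};
```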
astgen has many functions disabled until they can be reworked with these
new semantics. That will be done before merging the branch.
There are some new rules for astgen to follow regarding result locations
and what you are allowed/required to do depending on which one is passed
to expr(). See the updated doc comments of ResultLoc for details.
I also changed naming conventions of stuff in this commit, sorry about
that.