This reverts commit 8bf3e1f8d0, which
introduced miscompilations for peer expressions any time they needed
coercions to runtime types.
I opened #11957 as a proposal to accomplish the goal of the reverted
commit.
Closes #11898
* Sema: Correctly determine whether array_cat lhs and rhs are single ptrs
Many-pointers are also not single-pointers and wouldn't be considered
here. This commit makes the conditions use the appropriately-named
isSinglePointer instead.
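For reference, a minimal sketch of the pointer kinds `isSinglePointer` distinguishes (values are illustrative only):

```zig
const arr = [3]u8{ 1, 2, 3 };
const single: *const [3]u8 = &arr; // single pointer to an array
const many: [*]const u8 = &arr; // many-pointer: a different pointer kind

comptime {
    const cat = single ++ single; // array_cat through single pointers
    _ = cat;
    _ = many;
}
```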
* Sema: Correctly obtain ArrayInfo for many-pointer concatenation
At comptime, many-pointers have a known length, like slices, and can be
used in array concatenation. This fixes a stage1 regression.
* test: Add comptime manyptr concatenation test
Co-authored-by: sin-ack <sin-ack@users.noreply.github.com>
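A minimal sketch of what such a test could look like (the exact body of the added test is an assumption, not a copy):

```zig
const std = @import("std");

test "comptime many-pointer concatenation" {
    comptime {
        const arr = [2]u8{ 1, 2 };
        const many: [*]const u8 = &arr;
        // The many-pointer's length is comptime-known here, so `++` is allowed.
        const cat = many ++ many;
        std.debug.assert(cat.len == 4);
        std.debug.assert(cat[0] == 1 and cat[3] == 2);
    }
}
```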
Similar code was already in place for conditional branches. This updates
AstGen to do the same for labeled blocks. It takes advantage of the
`store_to_block_ptr` instructions by mutating them in place to become
`as` instructions, coercing the break operands before they are returned
from the block.
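A sketch of the source pattern this affects: the break operand's type (`u32`) must be coerced to the block's result type (`u64`) before leaving the block.

```zig
test "labeled block break operand coercion" {
    const x: u64 = blk: {
        var y: u32 = 123;
        y += 1;
        // The operand is coerced from u32 to u64 by the `as` instruction
        // that the `store_to_block_ptr` is rewritten into.
        break :blk y;
    };
    try @import("std").testing.expectEqual(@as(u64, 124), x);
}
```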
The 3 tests that called `testArray2DConstDoublePtr` started passing
after implementing `ptr_elem_val`. The rest of these, I think, were
already passing before.
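For illustration, an element load through a many-pointer is the kind of operation that lowers to `ptr_elem_val` (a hedged sketch, not one of the actual tests):

```zig
const std = @import("std");

test "element load through a many-pointer" {
    const arr = [4]f32{ 1.0, 2.0, 3.0, 4.0 };
    const p: [*]const f32 = &arr;
    try std.testing.expectEqual(@as(f32, 2.0), p[1]); // lowers to ptr_elem_val
}
```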
* Make it always return a fully qualified name. stage1 is inconsistent
about this.
* AstGen: fix anon_name_strategy to correctly be `func` when anon type
creation happens in the operand of the return expression.
* Sema: implement type names for the "function" naming strategy (see
the sketch after this list).
* Put "enum", "union", "opaque", or "struct" in place of "anon" when
creating respective anonymous Decl names.
* std.testing: add `expectStringStartsWith`. Didn't end up using it
after all.
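A sketch of the assumed naming behavior described above: an anonymous type created in the operand of a `return` is named after the function.

```zig
const std = @import("std");

fn Point() type {
    // Anonymous struct in the return operand: with the `func` naming strategy
    // its name should derive from `Point` rather than an "anon" placeholder.
    return struct { x: i32, y: i32 };
}

test "function naming strategy" {
    try std.testing.expect(std.mem.indexOf(u8, @typeName(Point()), "Point") != null);
}
```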
This also enables the real test runner for the stage2 LLVM backend
(sans wasm32), since it works now.
This implements the `memcpy` instruction and also updates the inline memcpy calls
to make use of the same implementation. We use a fast loop when the length is
comptime-known, and a runtime loop when the length is only known at runtime.
We also perform feature detection to emit a single wasm `memory.copy` instruction
when the 'bulk-memory' feature is enabled (it is off by default).
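A condensed sketch of that dispatch; `Ctx` and the three `emit*` helpers are hypothetical stand-ins, not the backend's real functions:

```zig
const std = @import("std");

const Ctx = struct { target: std.Target }; // stand-in for the wasm codegen state

fn lowerMemcpy(ctx: *Ctx, comptime_len: ?u64) void {
    if (std.Target.wasm.featureSetHas(ctx.target.cpu.features, .bulk_memory)) {
        emitMemoryCopy(ctx); // a single wasm `memory.copy` instruction
    } else if (comptime_len) |len| {
        emitUnrolledCopy(ctx, len); // fast loop: trip count known at compile time
    } else {
        emitRuntimeLoopCopy(ctx); // generic loop driven by the runtime length
    }
}

// Empty stubs so the sketch compiles on its own.
fn emitMemoryCopy(ctx: *Ctx) void {
    _ = ctx;
}
fn emitUnrolledCopy(ctx: *Ctx, len: u64) void {
    _ = ctx;
    _ = len;
}
fn emitRuntimeLoopCopy(ctx: *Ctx) void {
    _ = ctx;
}
```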
* use the real start code for LLVM backend with x86_64-linux
- there is still a check for zig_backend after initializing the TLS
area to skip some stuff.
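The check presumably keys off `@import("builtin").zig_backend`; a sketch of its shape (not the real start.zig body):

```zig
const builtin = @import("builtin");

fn afterTlsInit() void {
    // Assumption: under a stage2 backend, part of the start sequence is skipped.
    if (builtin.zig_backend != .stage1) return;
    // ... remaining startup work ...
}
```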
* introduce new AIR instructions and implement them for the LLVM
backend. They are the same as `call` except with a modifier (see the
sketch after this list):
- call_always_tail
- call_never_tail
- call_never_inline
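At the language level these correspond to `@call` with an explicit modifier; a sketch using the `CallOptions`-style `@call` signature from this era of Zig (later versions pass the modifier directly, e.g. `@call(.always_tail, ...)`):

```zig
fn sumDown(n: u32, acc: u32) u32 {
    if (n == 0) return acc;
    // Guaranteed tail call: lowers to the new `call_always_tail` instruction.
    return @call(.{ .modifier = .always_tail }, sumDown, .{ n - 1, acc + n });
}

test "always_tail call modifier" {
    try @import("std").testing.expectEqual(@as(u32, 10), sumDown(4, 0));
}
```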
* LLVM backend calls hasRuntimeBitsIgnoringComptime in more places to
avoid unnecessarily depending on comptimeOnly being resolved for some
types.
* LLVM backend: remove duplicate code for setting linkage and value
name. The canonical place for this is in `updateDeclExports`.
* LLVM backend: do some assembly template massaging so that `%%`
renders as `%`. More hacks will be needed to make inline assembly
catch up with stage1.
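For context, `%%` in a Zig inline assembly template is the escape for a literal `%`, as needed for AT&T register names (minimal x86_64-only sketch):

```zig
// `%%rsp` in the template must reach LLVM as `%rsp`.
fn readSp() usize {
    return asm volatile ("mov %%rsp, %[ret]"
        : [ret] "=r" (-> usize),
    );
}
```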
The core of this change is to re-use the escape sequence parsing logic
for parsing both string and character literals.
The actual fix is that UTF-8 encoding was missing for string literals
with \u{...} escape sequences.
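Concretely, a `\u{...}` escape in a string literal must expand to the code point's UTF-8 bytes:

```zig
const std = @import("std");

test "\\u escape in a string literal is UTF-8 encoded" {
    // U+00E9 (é) encodes as the two bytes 0xC3 0xA9.
    try std.testing.expectEqualSlices(u8, "\xC3\xA9", "\u{E9}");
}
```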
* Sema: resolve type fully when emitting an alloc AIR instruction to
avoid tripping assertion for checking struct field alignment.
* LLVM backend: keep a reference to the LLVM target data alive during
lowering so that we can ask LLVM what it thinks the ABI alignment
and size of LLVM types are. We need this in order to lower tuples and
structs so that we can put in extra padding bytes when Zig disagrees
with LLVM about the size or alignment of something.
* LLVM backend: make the LLVM struct type packed that contains the most
aligned union field and the padding. This prevents the struct from
being too big according to LLVM. In the future, we may want to
consider instead emitting unions in a "flat" manner; putting the tag,
most aligned union field, and padding all in the same struct field
space.
* LLVM backend: make structs with 2 or fewer fields return isByRef=false.
This results in more efficient codegen. It required the bitcast
lowering to sometimes store the struct into an alloca, ptrcast, and
then load, because LLVM does not allow bitcasting structs.
* enable more passing behavior tests.