Compare commits

...

6 commits

Author SHA1 Message Date
Adrià Arrufat
02c5f05e2f std: replace usages of std.mem.indexOf with std.mem.find 2025-12-05 14:31:27 +01:00
Adrià Arrufat
1a420a8dca std.ascii: rename indexOf functions to find
This aligns with the recent changes in std.mem.find
2025-12-05 14:31:27 +01:00
Aidan Welch
032e3c9254 std.Io.Timestamp: when creating a Clock.Timestamp actually set .raw instead of the non-existent .nanoseconds
2025-12-05 14:14:01 +01:00
Luna Schwalbe
adc5a39de2 Change github links to codeberg 2025-12-05 14:12:39 +01:00
Loris Cro
58e3c2cefd make Io.net.sendMany compile 2025-12-05 11:50:04 +01:00
Alex Rønne Petersen
c166bb36f6 ci: reduce x86_64-linux timeouts
These excessive timeouts should no longer be necessary with the recent tuning of
job capacity and maxrss on these machines.
2025-12-04 20:52:34 +01:00
64 changed files with 229 additions and 221 deletions
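The first two commits above are a mechanical rename of the standard library's search helpers (std.mem.indexOf* and std.ascii.indexOf* become find*), and most of the files below change only those call sites. A rough sketch of what that means for user code, assuming the new find* functions keep the old indexOf* signatures:

    const std = @import("std");

    test "std.mem search helpers under their new names" {
        const haystack = "one two three";

        // Previously: std.mem.indexOf(u8, haystack, "two")
        try std.testing.expectEqual(@as(?usize, 4), std.mem.find(u8, haystack, "two"));

        // The scalar and set variants follow the same pattern
        // (indexOfScalar -> findScalar, indexOfAny -> findAny, and so on).
        try std.testing.expectEqual(@as(?usize, 3), std.mem.findScalar(u8, haystack, ' '));
        try std.testing.expectEqual(@as(?usize, null), std.mem.findAny(u8, haystack, "xyz"));
    }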

View file

@@ -152,7 +152,7 @@ jobs:
 fetch-depth: 0
 - name: Build and Test
 run: sh ci/x86_64-linux-debug.sh
-timeout-minutes: 240
+timeout-minutes: 180
 x86_64-linux-debug-llvm:
 runs-on: [self-hosted, x86_64-linux]
 steps:
@@ -162,7 +162,7 @@ jobs:
 fetch-depth: 0
 - name: Build and Test
 run: sh ci/x86_64-linux-debug-llvm.sh
-timeout-minutes: 480
+timeout-minutes: 360
 x86_64-linux-release:
 runs-on: [self-hosted, x86_64-linux]
 steps:
@@ -172,7 +172,7 @@ jobs:
 fetch-depth: 0
 - name: Build and Test
 run: sh ci/x86_64-linux-release.sh
-timeout-minutes: 480
+timeout-minutes: 360
 x86_64-windows-debug:
 runs-on: [self-hosted, x86_64-windows]

View file

@@ -485,16 +485,14 @@ interpret your words.
 ### Find a Contributor Friendly Issue
 The issue label
-[Contributor Friendly](https://github.com/ziglang/zig/issues?q=is%3Aissue+is%3Aopen+label%3A%22contributor+friendly%22)
+[Contributor Friendly](https://codeberg.org/ziglang/zig/issues?labels=741726&state=open)
 exists to help you find issues that are **limited in scope and/or
 knowledge of Zig internals.**
 Please note that issues labeled
-[Proposal](https://github.com/ziglang/zig/issues?q=is%3Aissue+is%3Aopen+label%3Aproposal)
-but do not also have the
-[Accepted](https://github.com/ziglang/zig/issues?q=is%3Aissue+is%3Aopen+label%3Aaccepted)
-label are still under consideration, and efforts to implement such a proposal
-have a high risk of being wasted. If you are interested in a proposal which is
+[Proposal: Proposed](https://codeberg.org/ziglang/zig/issues?labels=746937&state=open)
+are still under consideration, and efforts to implement such a proposal have
+a high risk of being wasted. If you are interested in a proposal which is
 still under consideration, please express your interest in the issue tracker,
 providing extra insights and considerations that others have not yet expressed.
 The most highly regarded argument in such a discussion is a real world use case.
@@ -777,7 +775,7 @@ If you will be debugging the Zig compiler itself, or if you will be debugging
 any project compiled with Zig's LLVM backend (not recommended with the LLDB
 fork, prefer vanilla LLDB with a version that matches the version of LLVM that
 Zig is using), you can get a better debugging experience by using
-[`lldb_pretty_printers.py`](https://github.com/ziglang/zig/blob/master/tools/lldb_pretty_printers.py).
+[`lldb_pretty_printers.py`](https://codeberg.org/ziglang/zig/src/branch/master/tools/lldb_pretty_printers.py).
 Put this line in `~/.lldbinit`:

View file

@@ -39,7 +39,7 @@ v2.2.5.
 The file `lib/libc/glibc/abilist` is a Zig-specific binary blob that
 defines the supported glibc versions and the set of symbols each version
-must define. See https://github.com/ziglang/glibc-abi-tool for the
+must define. See https://codeberg.org/ziglang/libc-abi-tools for the
 tooling to generate this blob. The code in `glibc.zig` parses the abilist
 to build version-specific stub libraries on demand.

View file

@@ -79,7 +79,7 @@ enable_rosetta: bool = false,
 enable_wasmtime: bool = false,
 /// Use system Wine installation to run cross compiled Windows build artifacts.
 enable_wine: bool = false,
-/// After following the steps in https://github.com/ziglang/zig/wiki/Updating-libc#glibc,
+/// After following the steps in https://codeberg.org/ziglang/infra/src/branch/master/libc-update/glibc.md,
 /// this will be the directory $glibc-build-dir/install/glibcs
 /// Given the example of the aarch64 target, this is the directory
 /// that contains the path `aarch64-linux-gnu/lib/ld-linux-aarch64.so.1`.

View file

@@ -60,7 +60,7 @@ fn make(step: *Step, options: Step.MakeOptions) !void {
 };
 for (check_file.expected_matches) |expected_match| {
-if (mem.indexOf(u8, contents, expected_match) == null) {
+if (mem.find(u8, contents, expected_match) == null) {
 return step.fail(
 \\
 \\========= expected to find: ===================

View file

@@ -88,7 +88,7 @@ const Action = struct {
 while (needle_it.next()) |needle_tok| {
 const hay_tok = hay_it.next() orelse break;
 if (mem.startsWith(u8, needle_tok, "{")) {
-const closing_brace = mem.indexOf(u8, needle_tok, "}") orelse return error.MissingClosingBrace;
+const closing_brace = mem.find(u8, needle_tok, "}") orelse return error.MissingClosingBrace;
 if (closing_brace != needle_tok.len - 1) return error.ClosingBraceNotLast;
 const name = needle_tok[1..closing_brace];
@@ -133,7 +133,7 @@ const Action = struct {
 assert(act.tag == .contains);
 const hay = mem.trim(u8, haystack, " ");
 const phrase = mem.trim(u8, act.phrase.resolve(b, step), " ");
-return mem.indexOf(u8, hay, phrase) != null;
+return mem.find(u8, hay, phrase) != null;
 }
 /// Returns true if the `phrase` does not exist within the haystack.
@@ -1662,7 +1662,7 @@ const MachODumper = struct {
 .dump_section => {
 const name = mem.sliceTo(@as([*:0]const u8, @ptrCast(check.data.items.ptr + check.payload.dump_section)), 0);
-const sep_index = mem.indexOfScalar(u8, name, ',') orelse
+const sep_index = mem.findScalar(u8, name, ',') orelse
 return step.fail("invalid section name: {s}", .{name});
 const segname = name[0..sep_index];
 const sectname = name[sep_index + 1 ..];
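The dump_section change above splits a section spec of the form "SEGMENT,section" at its first comma. A minimal sketch of that split with the renamed helper, assuming findScalar keeps indexOfScalar's signature:

    const std = @import("std");

    test "split a Mach-O section spec on its first comma" {
        const name = "__TEXT,__text";
        const sep_index = std.mem.findScalar(u8, name, ',') orelse return error.InvalidSectionName;
        try std.testing.expectEqualStrings("__TEXT", name[0..sep_index]);
        try std.testing.expectEqualStrings("__text", name[sep_index + 1 ..]);
    }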

View file

@@ -372,7 +372,7 @@ pub const TestRunner = struct {
 pub fn create(owner: *std.Build, options: Options) *Compile {
 const name = owner.dupe(options.name);
-if (mem.indexOf(u8, name, "/") != null or mem.indexOf(u8, name, "\\") != null) {
+if (mem.find(u8, name, "/") != null or mem.find(u8, name, "\\") != null) {
 panic("invalid name: '{s}'. It looks like a file path, but it is supposed to be the library or application name.", .{name});
 }
@@ -731,7 +731,7 @@ fn runPkgConfig(compile: *Compile, lib_name: []const u8) !PkgConfigResult {
 // Prefixed "lib" or suffixed ".0".
 for (pkgs) |pkg| {
-if (std.ascii.indexOfIgnoreCase(pkg.name, lib_name)) |pos| {
+if (std.ascii.findIgnoreCase(pkg.name, lib_name)) |pos| {
 const prefix = pkg.name[0..pos];
 const suffix = pkg.name[pos + lib_name.len ..];
 if (prefix.len > 0 and !mem.eql(u8, prefix, "lib")) continue;
@@ -2129,7 +2129,7 @@ fn matchCompileError(actual: []const u8, expected: []const u8) bool {
 // We scan for /?/ in expected line and if there is a match, we match everything
 // up to and after /?/.
 const expected_trim = mem.trim(u8, expected, " ");
-if (mem.indexOf(u8, expected_trim, "/?/")) |index| {
+if (mem.find(u8, expected_trim, "/?/")) |index| {
 const actual_trim = mem.trim(u8, actual, " ");
 const lhs = expected_trim[0..index];
 const rhs = expected_trim[index + "/?/".len ..];

View file

@@ -581,12 +581,12 @@ fn expand_variables_autoconf_at(
 var source_offset: usize = 0;
 while (curr < contents.len) : (curr += 1) {
 if (contents[curr] != '@') continue;
-if (std.mem.indexOfScalarPos(u8, contents, curr + 1, '@')) |close_pos| {
+if (std.mem.findScalarPos(u8, contents, curr + 1, '@')) |close_pos| {
 if (close_pos == curr + 1) {
 // closed immediately, preserve as a literal
 continue;
 }
-const valid_varname_end = std.mem.indexOfNonePos(u8, contents, curr + 1, valid_varname_chars) orelse 0;
+const valid_varname_end = std.mem.findNonePos(u8, contents, curr + 1, valid_varname_chars) orelse 0;
 if (valid_varname_end != close_pos) {
 // contains invalid characters, preserve as a literal
 continue;
@@ -638,12 +638,12 @@ fn expand_variables_cmake(
 loop: while (curr < contents.len) : (curr += 1) {
 switch (contents[curr]) {
 '@' => blk: {
-if (std.mem.indexOfScalarPos(u8, contents, curr + 1, '@')) |close_pos| {
+if (std.mem.findScalarPos(u8, contents, curr + 1, '@')) |close_pos| {
 if (close_pos == curr + 1) {
 // closed immediately, preserve as a literal
 break :blk;
 }
-const valid_varname_end = std.mem.indexOfNonePos(u8, contents, curr + 1, valid_varname_chars) orelse 0;
+const valid_varname_end = std.mem.findNonePos(u8, contents, curr + 1, valid_varname_chars) orelse 0;
 if (valid_varname_end != close_pos) {
 // contains invalid characters, preserve as a literal
 break :blk;
@@ -734,7 +734,7 @@ fn expand_variables_cmake(
 else => {},
 }
-if (var_stack.items.len > 0 and std.mem.indexOfScalar(u8, valid_varname_chars, contents[curr]) == null) {
+if (var_stack.items.len > 0 and std.mem.findScalar(u8, valid_varname_chars, contents[curr]) == null) {
 return error.InvalidCharacter;
 }
 }

View file

@@ -1509,7 +1509,7 @@ fn runCommand(
 }
 },
 .expect_stderr_match => |match| {
-if (mem.indexOf(u8, generic_result.stderr.?, match) == null) {
+if (mem.find(u8, generic_result.stderr.?, match) == null) {
 return step.fail(
 \\========= expected to find in stderr: =========
 \\{s}
@@ -1535,7 +1535,7 @@ fn runCommand(
 }
 },
 .expect_stdout_match => |match| {
-if (mem.indexOf(u8, generic_result.stdout.?, match) == null) {
+if (mem.find(u8, generic_result.stdout.?, match) == null) {
 return step.fail(
 \\========= expected to find in stdout: =========
 \\{s}

View file

@@ -890,7 +890,7 @@ pub const Timestamp = struct {
 }
 pub fn withClock(t: Timestamp, clock: Clock) Clock.Timestamp {
-return .{ .nanoseconds = t.nanoseconds, .clock = clock };
+return .{ .raw = t, .clock = clock };
 }
 pub fn fromNanoseconds(x: i96) Timestamp {
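The fix above makes withClock populate the field that Clock.Timestamp actually declares. A reduced model of the shape implied by the diff; the type names below are stand-ins, not the real std.Io definitions:

    const std = @import("std");

    const Clock = enum {
        monotonic,

        pub const Timestamp = struct {
            raw: Instant, // carries the whole source timestamp, not a bare integer
            clock: Clock,
        };
    };

    const Instant = struct {
        nanoseconds: i96,

        // Before the fix, the initializer named a `.nanoseconds` field that
        // Clock.Timestamp does not declare; `.raw` is the field that exists.
        pub fn withClock(t: Instant, clock: Clock) Clock.Timestamp {
            return .{ .raw = t, .clock = clock };
        }
    };

    test "withClock stores the whole timestamp in .raw" {
        const t: Instant = .{ .nanoseconds = 42 };
        const with_clock = t.withClock(.monotonic);
        try std.testing.expectEqual(@as(i96, 42), with_clock.raw.nanoseconds);
    }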

View file

@@ -993,7 +993,7 @@ pub fn streamDelimiterLimit(
 error.ReadFailed => return error.ReadFailed,
 error.EndOfStream => return @intFromEnum(limit) - remaining,
 });
-if (std.mem.indexOfScalar(u8, available, delimiter)) |delimiter_index| {
+if (std.mem.findScalar(u8, available, delimiter)) |delimiter_index| {
 try w.writeAll(available[0..delimiter_index]);
 r.toss(delimiter_index);
 remaining -= delimiter_index;
@@ -1064,7 +1064,7 @@ pub fn discardDelimiterLimit(r: *Reader, delimiter: u8, limit: Limit) DiscardDel
 error.ReadFailed => return error.ReadFailed,
 error.EndOfStream => return @intFromEnum(limit) - remaining,
 });
-if (std.mem.indexOfScalar(u8, available, delimiter)) |delimiter_index| {
+if (std.mem.findScalar(u8, available, delimiter)) |delimiter_index| {
 r.toss(delimiter_index);
 remaining -= delimiter_index;
 return @intFromEnum(limit) - remaining;

View file

@@ -1090,7 +1090,8 @@ pub const Socket = struct {
 }
 pub fn sendMany(s: *const Socket, io: Io, messages: []OutgoingMessage, flags: SendFlags) SendError!void {
-return io.vtable.netSend(io.userdata, s.handle, messages, flags);
+const err, const n = io.vtable.netSend(io.userdata, s.handle, messages, flags);
+if (n != messages.len) return err.?;
 }
 pub const ReceiveError = error{
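The new body destructures the vtable result: netSend evidently reports an optional error together with how many messages were sent, and sendMany only surfaces the error when the batch was cut short. A reduced, self-contained model of that pattern (names and types are illustrative, not the real std.Io vtable):

    const std = @import("std");

    const SendError = error{BrokenPipe};

    // Stand-in for io.vtable.netSend: an optional error plus the number of
    // messages actually sent, returned as a tuple.
    fn netSendModel(messages: []const u8) struct { ?SendError, usize } {
        return .{ null, messages.len }; // pretend everything was sent
    }

    fn sendManyModel(messages: []const u8) SendError!void {
        const err, const n = netSendModel(messages);
        if (n != messages.len) return err.?;
    }

    test "a fully sent batch is not an error" {
        try sendManyModel(&[_]u8{ 1, 2, 3 });
    }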

View file

@@ -257,7 +257,7 @@ pub const Node = struct {
 const index = n.index.unwrap() orelse return;
 const storage = storageByIndex(index);
-const name_len = @min(max_name_len, std.mem.indexOfScalar(u8, new_name, 0) orelse new_name.len);
+const name_len = @min(max_name_len, std.mem.findScalar(u8, new_name, 0) orelse new_name.len);
 copyAtomicStore(storage.name[0..name_len], new_name[0..name_len]);
 if (name_len < storage.name.len)
@@ -1347,7 +1347,7 @@ fn computeNode(
 const storage = &serialized.storage[@intFromEnum(node_index)];
 const estimated_total = storage.estimated_total_count;
 const completed_items = storage.completed_count;
-const name = if (std.mem.indexOfScalar(u8, &storage.name, 0)) |end| storage.name[0..end] else &storage.name;
+const name = if (std.mem.findScalar(u8, &storage.name, 0)) |end| storage.name[0..end] else &storage.name;
 const parent = serialized.parents[@intFromEnum(node_index)];
 if (parent != .none) p: {

View file

@@ -180,7 +180,7 @@ pub fn main() !void {
 if (bench_prngs) {
 if (bench_long) {
 inline for (prngs) |R| {
-if (filter == null or std.mem.indexOf(u8, R.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, R.name, filter.?) != null) {
 try stdout.print("{s} (long outputs)\n", .{R.name});
 try stdout.flush();
@@ -191,7 +191,7 @@ pub fn main() !void {
 }
 if (bench_short) {
 inline for (prngs) |R| {
-if (filter == null or std.mem.indexOf(u8, R.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, R.name, filter.?) != null) {
 try stdout.print("{s} (short outputs)\n", .{R.name});
 try stdout.flush();
@@ -204,7 +204,7 @@ pub fn main() !void {
 if (bench_csprngs) {
 if (bench_long) {
 inline for (csprngs) |R| {
-if (filter == null or std.mem.indexOf(u8, R.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, R.name, filter.?) != null) {
 try stdout.print("{s} (cryptographic, long outputs)\n", .{R.name});
 try stdout.flush();
@@ -215,7 +215,7 @@ pub fn main() !void {
 }
 if (bench_short) {
 inline for (csprngs) |R| {
-if (filter == null or std.mem.indexOf(u8, R.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, R.name, filter.?) != null) {
 try stdout.print("{s} (cryptographic, short outputs)\n", .{R.name});
 try stdout.flush();

View file

@@ -84,7 +84,7 @@ pub fn order(lhs: Version, rhs: Version) std.math.Order {
 pub fn parse(text: []const u8) !Version {
 // Parse the required major, minor, and patch numbers.
-const extra_index = std.mem.indexOfAny(u8, text, "-+");
+const extra_index = std.mem.findAny(u8, text, "-+");
 const required = text[0..(extra_index orelse text.len)];
 var it = std.mem.splitScalar(u8, required, '.');
 var ver = Version{
@@ -98,7 +98,7 @@ pub fn parse(text: []const u8) !Version {
 // Slice optional pre-release or build metadata components.
 const extra: []const u8 = text[extra_index.?..text.len];
 if (extra[0] == '-') {
-const build_index = std.mem.indexOfScalar(u8, extra, '+');
+const build_index = std.mem.findScalar(u8, extra, '+');
 ver.pre = extra[1..(build_index orelse extra.len)];
 if (build_index) |idx| ver.build = extra[(idx + 1)..];
 } else {
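For context, the function touched here is the semantic-version parser; a small usage sketch, with field names assumed from the current std.SemanticVersion:

    const std = @import("std");

    test "parse a version with pre-release and build metadata" {
        const v = try std.SemanticVersion.parse("1.2.3-beta.1+abc123");
        try std.testing.expectEqual(@as(usize, 1), v.major);
        try std.testing.expectEqual(@as(usize, 3), v.patch);
        try std.testing.expectEqualStrings("beta.1", v.pre.?);
        try std.testing.expectEqualStrings("abc123", v.build.?);
    }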

View file

@@ -65,7 +65,7 @@ pub const Component = union(enum) {
 pub fn toRaw(component: Component, buffer: []u8) error{NoSpaceLeft}![]const u8 {
 return switch (component) {
 .raw => |raw| raw,
-.percent_encoded => |percent_encoded| if (std.mem.indexOfScalar(u8, percent_encoded, '%')) |_|
+.percent_encoded => |percent_encoded| if (std.mem.findScalar(u8, percent_encoded, '%')) |_|
 try std.fmt.bufPrint(buffer, "{f}", .{std.fmt.alt(component, .formatRaw)})
 else
 percent_encoded,
@@ -76,7 +76,7 @@ pub const Component = union(enum) {
 pub fn toRawMaybeAlloc(component: Component, arena: Allocator) Allocator.Error![]const u8 {
 return switch (component) {
 .raw => |raw| raw,
-.percent_encoded => |percent_encoded| if (std.mem.indexOfScalar(u8, percent_encoded, '%')) |_|
+.percent_encoded => |percent_encoded| if (std.mem.findScalar(u8, percent_encoded, '%')) |_|
 try std.fmt.allocPrint(arena, "{f}", .{std.fmt.alt(component, .formatRaw)})
 else
 percent_encoded,
@@ -89,7 +89,7 @@ pub const Component = union(enum) {
 .percent_encoded => |percent_encoded| {
 var start: usize = 0;
 var index: usize = 0;
-while (std.mem.indexOfScalarPos(u8, percent_encoded, index, '%')) |percent| {
+while (std.mem.findScalarPos(u8, percent_encoded, index, '%')) |percent| {
 index = percent + 1;
 if (percent_encoded.len - index < 2) continue;
 const percent_encoded_char =
@@ -213,7 +213,7 @@ pub fn parseAfterScheme(scheme: []const u8, text: []const u8) ParseError!Uri {
 var i: usize = 0;
 if (std.mem.startsWith(u8, text, "//")) a: {
-i = std.mem.indexOfAnyPos(u8, text, 2, &authority_sep) orelse text.len;
+i = std.mem.findAnyPos(u8, text, 2, &authority_sep) orelse text.len;
 const authority = text[2..i];
 if (authority.len == 0) {
 if (!std.mem.startsWith(u8, text[2..], "/")) return error.InvalidFormat;
@@ -221,11 +221,11 @@ pub fn parseAfterScheme(scheme: []const u8, text: []const u8) ParseError!Uri {
 }
 var start_of_host: usize = 0;
-if (std.mem.indexOf(u8, authority, "@")) |index| {
+if (std.mem.find(u8, authority, "@")) |index| {
 start_of_host = index + 1;
 const user_info = authority[0..index];
-if (std.mem.indexOf(u8, user_info, ":")) |idx| {
+if (std.mem.find(u8, user_info, ":")) |idx| {
 uri.user = .{ .percent_encoded = user_info[0..idx] };
 if (idx < user_info.len - 1) { // empty password is also "no password"
 uri.password = .{ .percent_encoded = user_info[idx + 1 ..] };
@@ -268,12 +268,12 @@ pub fn parseAfterScheme(scheme: []const u8, text: []const u8) ParseError!Uri {
 }
 const path_start = i;
-i = std.mem.indexOfAnyPos(u8, text, path_start, &path_sep) orelse text.len;
+i = std.mem.findAnyPos(u8, text, path_start, &path_sep) orelse text.len;
 uri.path = .{ .percent_encoded = text[path_start..i] };
 if (std.mem.startsWith(u8, text[i..], "?")) {
 const query_start = i + 1;
-i = std.mem.indexOfScalarPos(u8, text, query_start, '#') orelse text.len;
+i = std.mem.findScalarPos(u8, text, query_start, '#') orelse text.len;
 uri.query = .{ .percent_encoded = text[query_start..i] };
 }
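The renamed helpers above are what split a URI into authority, path, query, and fragment; a small usage sketch, with field names assumed from the current std.Uri:

    const std = @import("std");

    test "std.Uri.parse splits the authority from the path and query" {
        const uri = try std.Uri.parse("https://user@example.com:8080/a/b?x=1");
        try std.testing.expectEqualStrings("https", uri.scheme);
        try std.testing.expectEqual(@as(?u16, 8080), uri.port);
        try std.testing.expect(uri.user != null);
        try std.testing.expect(uri.query != null);
    }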

View file

@@ -156,7 +156,7 @@ test whitespace {
 var i: u8 = 0;
 while (isAscii(i)) : (i += 1) {
-if (isWhitespace(i)) try std.testing.expect(std.mem.indexOfScalar(u8, &whitespace, i) != null);
+if (isWhitespace(i)) try std.testing.expect(std.mem.findScalar(u8, &whitespace, i) != null);
 }
 }
@@ -357,19 +357,25 @@ test endsWithIgnoreCase {
 try std.testing.expect(!endsWithIgnoreCase("BoB", "Bo"));
 }
+/// Deprecated in favor of `findIgnoreCase`.
+pub const indexOfIgnoreCase = findIgnoreCase;
+
 /// Finds `needle` in `haystack`, ignoring case, starting at index 0.
-pub fn indexOfIgnoreCase(haystack: []const u8, needle: []const u8) ?usize {
-return indexOfIgnoreCasePos(haystack, 0, needle);
+pub fn findIgnoreCase(haystack: []const u8, needle: []const u8) ?usize {
+return findIgnoreCasePos(haystack, 0, needle);
 }
+/// Deprecated in favor of `findIgnoreCasePos`.
+pub const indexOfIgnoreCasePos = findIgnoreCasePos;
+
 /// Finds `needle` in `haystack`, ignoring case, starting at `start_index`.
-/// Uses Boyer-Moore-Horspool algorithm on large inputs; `indexOfIgnoreCasePosLinear` on small inputs.
-pub fn indexOfIgnoreCasePos(haystack: []const u8, start_index: usize, needle: []const u8) ?usize {
+/// Uses Boyer-Moore-Horspool algorithm on large inputs; `findIgnoreCasePosLinear` on small inputs.
+pub fn findIgnoreCasePos(haystack: []const u8, start_index: usize, needle: []const u8) ?usize {
 if (needle.len > haystack.len) return null;
 if (needle.len == 0) return start_index;
 if (haystack.len < 52 or needle.len <= 4)
-return indexOfIgnoreCasePosLinear(haystack, start_index, needle);
+return findIgnoreCasePosLinear(haystack, start_index, needle);
 var skip_table: [256]usize = undefined;
 boyerMooreHorspoolPreprocessIgnoreCase(needle, skip_table[0..]);
@@ -383,9 +389,12 @@ pub fn indexOfIgnoreCasePos(haystack: []const u8, start_index: usize, needle: []
 return null;
 }
-/// Consider using `indexOfIgnoreCasePos` instead of this, which will automatically use a
+/// Deprecated in favor of `findIgnoreCaseLinear`.
+pub const indexOfIgnoreCasePosLinear = findIgnoreCasePosLinear;
+
+/// Consider using `findIgnoreCasePos` instead of this, which will automatically use a
 /// more sophisticated algorithm on larger inputs.
-pub fn indexOfIgnoreCasePosLinear(haystack: []const u8, start_index: usize, needle: []const u8) ?usize {
+pub fn findIgnoreCasePosLinear(haystack: []const u8, start_index: usize, needle: []const u8) ?usize {
 var i: usize = start_index;
 const end = haystack.len - needle.len;
 while (i <= end) : (i += 1) {
@@ -407,15 +416,15 @@ fn boyerMooreHorspoolPreprocessIgnoreCase(pattern: []const u8, table: *[256]usiz
 }
 }
-test indexOfIgnoreCase {
-try std.testing.expect(indexOfIgnoreCase("one Two Three Four", "foUr").? == 14);
-try std.testing.expect(indexOfIgnoreCase("one two three FouR", "gOur") == null);
-try std.testing.expect(indexOfIgnoreCase("foO", "Foo").? == 0);
-try std.testing.expect(indexOfIgnoreCase("foo", "fool") == null);
-try std.testing.expect(indexOfIgnoreCase("FOO foo", "fOo").? == 0);
-try std.testing.expect(indexOfIgnoreCase("one two three four five six seven eight nine ten eleven", "ThReE fOUr").? == 8);
-try std.testing.expect(indexOfIgnoreCase("one two three four five six seven eight nine ten eleven", "Two tWo") == null);
+test findIgnoreCase {
+try std.testing.expect(findIgnoreCase("one Two Three Four", "foUr").? == 14);
+try std.testing.expect(findIgnoreCase("one two three FouR", "gOur") == null);
+try std.testing.expect(findIgnoreCase("foO", "Foo").? == 0);
+try std.testing.expect(findIgnoreCase("foo", "fool") == null);
+try std.testing.expect(findIgnoreCase("FOO foo", "fOo").? == 0);
+try std.testing.expect(findIgnoreCase("one two three four five six seven eight nine ten eleven", "ThReE fOUr").? == 8);
+try std.testing.expect(findIgnoreCase("one two three four five six seven eight nine ten eleven", "Two tWo") == null);
 }
 /// Returns the lexicographical order of two slices. O(n).
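Call sites can move to the new std.ascii names right away, while the deprecated indexOf* aliases added above keep existing code compiling. A sketch against this changeset's std, reusing the values from the test above:

    const std = @import("std");

    test "case-insensitive search under the new std.ascii names" {
        try std.testing.expectEqual(@as(?usize, 14), std.ascii.findIgnoreCase("one Two Three Four", "foUr"));
        try std.testing.expectEqual(@as(?usize, null), std.ascii.findIgnoreCase("one two three FouR", "gOur"));
        // The old spelling still resolves through the deprecated alias:
        try std.testing.expectEqual(@as(?usize, 14), std.ascii.indexOfIgnoreCase("one Two Three Four", "foUr"));
    }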

View file

@@ -466,13 +466,13 @@ pub const SectionHeader = extern struct {
 pub fn getName(self: *align(1) const SectionHeader) ?[]const u8 {
 if (self.name[0] == '/') return null;
-const len = std.mem.indexOfScalar(u8, &self.name, @as(u8, 0)) orelse self.name.len;
+const len = std.mem.findScalar(u8, &self.name, @as(u8, 0)) orelse self.name.len;
 return self.name[0..len];
 }
 pub fn getNameOffset(self: SectionHeader) ?u32 {
 if (self.name[0] != '/') return null;
-const len = std.mem.indexOfScalar(u8, &self.name, @as(u8, 0)) orelse self.name.len;
+const len = std.mem.findScalar(u8, &self.name, @as(u8, 0)) orelse self.name.len;
 const offset = std.fmt.parseInt(u32, self.name[1..len], 10) catch unreachable;
 return offset;
 }
@@ -628,7 +628,7 @@ pub const Symbol = struct {
 pub fn getName(self: *const Symbol) ?[]const u8 {
 if (std.mem.eql(u8, self.name[0..4], "\x00\x00\x00\x00")) return null;
-const len = std.mem.indexOfScalar(u8, &self.name, @as(u8, 0)) orelse self.name.len;
+const len = std.mem.findScalar(u8, &self.name, @as(u8, 0)) orelse self.name.len;
 return self.name[0..len];
 }
@@ -869,7 +869,7 @@ pub const FileDefinition = struct {
 file_name: [18]u8,
 pub fn getFileName(self: *const FileDefinition) []const u8 {
-const len = std.mem.indexOfScalar(u8, &self.file_name, @as(u8, 0)) orelse self.file_name.len;
+const len = std.mem.findScalar(u8, &self.file_name, @as(u8, 0)) orelse self.file_name.len;
 return self.file_name[0..len];
 }
 };
@@ -1044,7 +1044,7 @@ pub const Coff = struct {
 // Finally read the null-terminated string.
 const start = reader.seek;
-const len = std.mem.indexOfScalar(u8, self.data[start..], 0) orelse return null;
+const len = std.mem.findScalar(u8, self.data[start..], 0) orelse return null;
 return self.data[start .. start + len];
 }

View file

@@ -598,7 +598,7 @@ fn testFuzzedMatchLen(_: void, input: []const u8) !void {
 const bytes = w.buffered()[bytes_off..];
 old = @min(old, bytes.len - 1, token.max_length - 1);
-const diff_index = mem.indexOfDiff(u8, prev, bytes).?; // unwrap since lengths are not same
+const diff_index = mem.findDiff(u8, prev, bytes).?; // unwrap since lengths are not same
 const expected_len = @min(diff_index, 258);
 errdefer std.debug.print(
 \\prev : '{any}'
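std.mem.indexOfDiff is renamed along with the rest of the family; it returns the first index at which two slices differ (the shorter length when one is a prefix of the other), or null when they are identical. A sketch under that assumption:

    const std = @import("std");

    test "findDiff reports the first differing index" {
        try std.testing.expectEqual(@as(?usize, 2), std.mem.findDiff(u8, "abcd", "abXd"));
        try std.testing.expectEqual(@as(?usize, 3), std.mem.findDiff(u8, "abc", "abcd"));
        try std.testing.expectEqual(@as(?usize, null), std.mem.findDiff(u8, "abc", "abc"));
    }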

View file

@@ -358,10 +358,10 @@ pub const Parsed = struct {
 const wildcard_suffix = dns_name[2..];
 // No additional wildcards allowed in the suffix
-if (mem.indexOf(u8, wildcard_suffix, "*") != null) return false;
+if (mem.find(u8, wildcard_suffix, "*") != null) return false;
 // Find the first dot in hostname to split first label from rest
-const dot_pos = mem.indexOf(u8, host_name, ".") orelse return false;
+const dot_pos = mem.find(u8, host_name, ".") orelse return false;
 // Wildcard matches exactly one label, so compare the rest
 const host_suffix = host_name[dot_pos + 1 ..];
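The comments above describe the certificate host-name rule: a leading "*." matches exactly one DNS label and no further wildcards are allowed. A reduced, standalone model of that check (not the real std.crypto.Certificate code), written against the renamed helpers:

    const std = @import("std");

    fn wildcardHostMatches(dns_name: []const u8, host_name: []const u8) bool {
        if (!std.mem.startsWith(u8, dns_name, "*.")) return std.ascii.eqlIgnoreCase(dns_name, host_name);
        const wildcard_suffix = dns_name[2..];
        // No additional wildcards allowed in the suffix.
        if (std.mem.find(u8, wildcard_suffix, "*") != null) return false;
        // The wildcard consumes exactly the first label of the host name.
        const dot_pos = std.mem.find(u8, host_name, ".") orelse return false;
        return std.ascii.eqlIgnoreCase(wildcard_suffix, host_name[dot_pos + 1 ..]);
    }

    test wildcardHostMatches {
        try std.testing.expect(wildcardHostMatches("*.example.com", "api.example.com"));
        try std.testing.expect(!wildcardHostMatches("*.example.com", "a.b.example.com"));
    }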

View file

@@ -269,9 +269,9 @@ pub fn addCertsFromFile(cb: *Bundle, gpa: Allocator, file_reader: *Io.File.Reade
 const end_marker = "-----END CERTIFICATE-----";
 var start_index: usize = 0;
-while (mem.indexOfPos(u8, encoded_bytes, start_index, begin_marker)) |begin_marker_start| {
+while (mem.findPos(u8, encoded_bytes, start_index, begin_marker)) |begin_marker_start| {
 const cert_start = begin_marker_start + begin_marker.len;
-const cert_end = mem.indexOfPos(u8, encoded_bytes, cert_start, end_marker) orelse
+const cert_end = mem.findPos(u8, encoded_bytes, cert_start, end_marker) orelse
 return error.MissingEndCertificateMarker;
 start_index = cert_end + end_marker.len;
 const encoded_cert = mem.trim(u8, encoded_bytes[cert_start..cert_end], " \t\r\n");

View file

@@ -547,7 +547,7 @@ pub fn main() !void {
 }
 inline for (hashes) |H| {
-if (filter == null or std.mem.indexOf(u8, H.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, H.name, filter.?) != null) {
 const throughput = try benchmarkHash(H.ty, mode(128 * MiB));
 try stdout.print("{s:>17}: {:10} MiB/s\n", .{ H.name, throughput / (1 * MiB) });
 try stdout.flush();
@@ -559,7 +559,7 @@ pub fn main() !void {
 const io = io_threaded.io();
 inline for (parallel_hashes) |H| {
-if (filter == null or std.mem.indexOf(u8, H.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, H.name, filter.?) != null) {
 const throughput = try benchmarkHashParallel(H.ty, mode(128 * MiB), arena_allocator, io);
 try stdout.print("{s:>17}: {:10} MiB/s\n", .{ H.name, throughput / (1 * MiB) });
 try stdout.flush();
@@ -567,7 +567,7 @@ pub fn main() !void {
 }
 inline for (macs) |M| {
-if (filter == null or std.mem.indexOf(u8, M.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, M.name, filter.?) != null) {
 const throughput = try benchmarkMac(M.ty, mode(128 * MiB));
 try stdout.print("{s:>17}: {:10} MiB/s\n", .{ M.name, throughput / (1 * MiB) });
 try stdout.flush();
@@ -575,7 +575,7 @@ pub fn main() !void {
 }
 inline for (exchanges) |E| {
-if (filter == null or std.mem.indexOf(u8, E.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, E.name, filter.?) != null) {
 const throughput = try benchmarkKeyExchange(E.ty, mode(1000));
 try stdout.print("{s:>17}: {:10} exchanges/s\n", .{ E.name, throughput });
 try stdout.flush();
@@ -583,7 +583,7 @@ pub fn main() !void {
 }
 inline for (signatures) |E| {
-if (filter == null or std.mem.indexOf(u8, E.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, E.name, filter.?) != null) {
 const throughput = try benchmarkSignature(E.ty, mode(1000));
 try stdout.print("{s:>17}: {:10} signatures/s\n", .{ E.name, throughput });
 try stdout.flush();
@@ -591,7 +591,7 @@ pub fn main() !void {
 }
 inline for (signature_verifications) |E| {
-if (filter == null or std.mem.indexOf(u8, E.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, E.name, filter.?) != null) {
 const throughput = try benchmarkSignatureVerification(E.ty, mode(1000));
 try stdout.print("{s:>17}: {:10} verifications/s\n", .{ E.name, throughput });
 try stdout.flush();
@@ -599,7 +599,7 @@ pub fn main() !void {
 }
 inline for (batch_signature_verifications) |E| {
-if (filter == null or std.mem.indexOf(u8, E.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, E.name, filter.?) != null) {
 const throughput = try benchmarkBatchSignatureVerification(E.ty, mode(1000));
 try stdout.print("{s:>17}: {:10} verifications/s (batch)\n", .{ E.name, throughput });
 try stdout.flush();
@@ -607,7 +607,7 @@ pub fn main() !void {
 }
 inline for (aeads) |E| {
-if (filter == null or std.mem.indexOf(u8, E.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, E.name, filter.?) != null) {
 const throughput = try benchmarkAead(E.ty, mode(128 * MiB));
 try stdout.print("{s:>17}: {:10} MiB/s\n", .{ E.name, throughput / (1 * MiB) });
 try stdout.flush();
@@ -615,7 +615,7 @@ pub fn main() !void {
 }
 inline for (aes) |E| {
-if (filter == null or std.mem.indexOf(u8, E.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, E.name, filter.?) != null) {
 const throughput = try benchmarkAes(E.ty, mode(100000000));
 try stdout.print("{s:>17}: {:10} ops/s\n", .{ E.name, throughput });
 try stdout.flush();
@@ -623,7 +623,7 @@ pub fn main() !void {
 }
 inline for (aes8) |E| {
-if (filter == null or std.mem.indexOf(u8, E.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, E.name, filter.?) != null) {
 const throughput = try benchmarkAes8(E.ty, mode(10000000));
 try stdout.print("{s:>17}: {:10} ops/s\n", .{ E.name, throughput });
 try stdout.flush();
@@ -631,7 +631,7 @@ pub fn main() !void {
 }
 inline for (pwhashes) |H| {
-if (filter == null or std.mem.indexOf(u8, H.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, H.name, filter.?) != null) {
 const throughput = try benchmarkPwhash(arena_allocator, H.ty, H.params, mode(64), io);
 try stdout.print("{s:>17}: {d:10.3} s/ops\n", .{ H.name, throughput });
 try stdout.flush();
@@ -639,7 +639,7 @@ pub fn main() !void {
 }
 inline for (kems) |E| {
-if (filter == null or std.mem.indexOf(u8, E.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, E.name, filter.?) != null) {
 const throughput = try benchmarkKem(E.ty, mode(1000));
 try stdout.print("{s:>17}: {:10} encaps/s\n", .{ E.name, throughput });
 try stdout.flush();
@@ -647,7 +647,7 @@ pub fn main() !void {
 }
 inline for (kems) |E| {
-if (filter == null or std.mem.indexOf(u8, E.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, E.name, filter.?) != null) {
 const throughput = try benchmarkKemDecaps(E.ty, mode(25000));
 try stdout.print("{s:>17}: {:10} decaps/s\n", .{ E.name, throughput });
 try stdout.flush();
@@ -655,7 +655,7 @@ pub fn main() !void {
 }
 inline for (kems) |E| {
-if (filter == null or std.mem.indexOf(u8, E.name, filter.?) != null) {
+if (filter == null or std.mem.find(u8, E.name, filter.?) != null) {
 const throughput = try benchmarkKemKeyGen(E.ty, mode(25000));
 try stdout.print("{s:>17}: {:10} keygen/s\n", .{ E.name, throughput });
 try stdout.flush();

View file

@@ -358,7 +358,7 @@ const crypt_format = struct {
 fn intDecode(comptime T: type, src: *const [(@bitSizeOf(T) + 5) / 6]u8) !T {
 var v: T = 0;
 for (src, 0..) |x, i| {
-const vi = mem.indexOfScalar(u8, &map64, x) orelse return EncodingError.InvalidEncoding;
+const vi = mem.findScalar(u8, &map64, x) orelse return EncodingError.InvalidEncoding;
 v |= @as(T, @intCast(vi)) << @as(math.Log2Int(T), @intCast(i * 6));
 }
 return v;

View file

@@ -1196,7 +1196,7 @@ fn printLineFromFile(writer: *Writer, source_location: SourceLocation) !void {
 var next_line: usize = 1;
 while (next_line != source_location.line) {
 const slice = buf[current_line_start..amt_read];
-if (mem.indexOfScalar(u8, slice, '\n')) |pos| {
+if (mem.findScalar(u8, slice, '\n')) |pos| {
 next_line += 1;
 if (pos == slice.len - 1) {
 amt_read = try f.read(buf[0..]);
@@ -1212,7 +1212,7 @@ fn printLineFromFile(writer: *Writer, source_location: SourceLocation) !void {
 break :seek current_line_start;
 };
 const slice = buf[line_start..amt_read];
-if (mem.indexOfScalar(u8, slice, '\n')) |pos| {
+if (mem.findScalar(u8, slice, '\n')) |pos| {
 const line = slice[0 .. pos + 1];
 mem.replaceScalar(u8, line, '\t', ' ');
 return writer.writeAll(line);
@@ -1221,7 +1221,7 @@ fn printLineFromFile(writer: *Writer, source_location: SourceLocation) !void {
 try writer.writeAll(slice);
 while (amt_read == buf.len) {
 amt_read = try f.read(buf[0..]);
-if (mem.indexOfScalar(u8, buf[0..amt_read], '\n')) |pos| {
+if (mem.findScalar(u8, buf[0..amt_read], '\n')) |pos| {
 const line = buf[0 .. pos + 1];
 mem.replaceScalar(u8, line, '\t', ' ');
 return writer.writeAll(line);

View file

@@ -437,7 +437,7 @@ fn scanAllFunctions(di: *Dwarf, gpa: Allocator, endian: Endian) ScanError!void {
 };
 while (true) {
-fr.seek = std.mem.indexOfNonePos(u8, fr.buffer, fr.seek, &.{
+fr.seek = std.mem.findNonePos(u8, fr.buffer, fr.seek, &.{
 zig_padding_abbrev_code, 0,
 }) orelse fr.buffer.len;
 if (fr.seek >= next_unit_pos) break;
@@ -1539,7 +1539,7 @@ fn getStringGeneric(opt_str: ?[]const u8, offset: u64) ![:0]const u8 {
 if (offset > str.len) return bad();
 const casted_offset = cast(usize, offset) orelse return bad();
 // Valid strings always have a terminating zero byte
-const last = std.mem.indexOfScalarPos(u8, str, casted_offset, 0) orelse return bad();
+const last = std.mem.findScalarPos(u8, str, casted_offset, 0) orelse return bad();
 return str[casted_offset..last :0];
 }

View file

@@ -197,7 +197,7 @@ pub const ElfDynLib = struct {
 // - /etc/ld.so.cache is not read
 fn resolveFromName(path_or_name: []const u8) !posix.fd_t {
 // If filename contains a slash ("/"), then it is interpreted as a (relative or absolute) pathname
-if (std.mem.indexOfScalarPos(u8, path_or_name, 0, '/')) |_| {
+if (std.mem.findScalarPos(u8, path_or_name, 0, '/')) |_| {
 return posix.open(path_or_name, .{ .ACCMODE = .RDONLY, .CLOEXEC = true }, 0);
 }

View file

@@ -3039,7 +3039,7 @@ pub const ar_hdr = extern struct {
 pub fn name(self: *const ar_hdr) ?[]const u8 {
 const value = &self.ar_name;
 if (value[0] == '/') return null;
-const sentinel = mem.indexOfScalar(u8, value, '/') orelse value.len;
+const sentinel = mem.findScalar(u8, value, '/') orelse value.len;
 return value[0..sentinel];
 }

View file

@@ -182,7 +182,7 @@ pub const Parser = struct {
 pub fn until(self: *@This(), delimiter: u8) []const u8 {
 const start = self.i;
-self.i = std.mem.indexOfScalarPos(u8, self.bytes, self.i, delimiter) orelse self.bytes.len;
+self.i = std.mem.findScalarPos(u8, self.bytes, self.i, delimiter) orelse self.bytes.len;
 return self.bytes[start..self.i];
 }

View file

@@ -469,7 +469,7 @@ pub fn selfExePath(out_buffer: []u8) SelfExePathError![]u8 {
 return error.FileNotFound;
 const argv0 = mem.span(std.os.argv[0]);
-if (mem.indexOf(u8, argv0, "/") != null) {
+if (mem.find(u8, argv0, "/") != null) {
 // argv[0] is a path (relative or absolute): use realpath(3) directly
 var real_path_buf: [max_path_bytes]u8 = undefined;
 const real_path = posix.realpathZ(std.os.argv[0], &real_path_buf) catch |err| switch (err) {

View file

@@ -179,7 +179,7 @@ pub fn isCygwinPty(file: File) bool {
 // The name we get from NtQueryInformationFile will be prefixed with a '\', e.g. \msys-1888ae32e00d56aa-pty0-to-master
 return (std.mem.startsWith(u16, name_wide, &[_]u16{ '\\', 'm', 's', 'y', 's', '-' }) or
 std.mem.startsWith(u16, name_wide, &[_]u16{ '\\', 'c', 'y', 'g', 'w', 'i', 'n', '-' })) and
-std.mem.indexOf(u16, name_wide, &[_]u16{ '-', 'p', 't', 'y' }) != null;
+std.mem.find(u16, name_wide, &[_]u16{ '-', 'p', 't', 'y' }) != null;
 }
 /// Returns whether or not ANSI escape codes will be treated as such,

View file

@@ -402,9 +402,9 @@ pub fn windowsParsePath(path: []const u8) WindowsPath {
 if (path.len >= 2 and PathType.windows.isSep(u8, path[0]) and PathType.windows.isSep(u8, path[1])) {
 const root_end = root_end: {
-var server_end = mem.indexOfAnyPos(u8, path, 2, "/\\") orelse break :root_end path.len;
+var server_end = mem.findAnyPos(u8, path, 2, "/\\") orelse break :root_end path.len;
 while (server_end < path.len and PathType.windows.isSep(u8, path[server_end])) server_end += 1;
-break :root_end mem.indexOfAnyPos(u8, path, server_end, "/\\") orelse path.len;
+break :root_end mem.findAnyPos(u8, path, server_end, "/\\") orelse path.len;
 };
 return WindowsPath{
 .is_abs = true,
@@ -722,7 +722,7 @@ fn parseUNC(comptime T: type, path: []const T) WindowsUNC(T) {
 // For the server, the first path separator after the initial two is always
 // the terminator of the server name, even if that means the server name is
 // zero-length.
-const server_end = mem.indexOfAnyPos(T, path, 2, any_sep) orelse return .{
+const server_end = mem.findAnyPos(T, path, 2, any_sep) orelse return .{
 .server = path[2..path.len],
 .sep_after_server = false,
 .share = path[path.len..path.len],

View file

@@ -443,7 +443,7 @@ pub fn main() !void {
 const allocator = gpa.allocator();
 inline for (hashes) |H| {
-if (filter == null or std.mem.indexOf(u8, H.name, filter.?) != null) hash: {
+if (filter == null or std.mem.find(u8, H.name, filter.?) != null) hash: {
 if (!test_iterative_only or H.has_iterative_api) {
 try stdout.print("{s}\n", .{H.name});
 try stdout.flush();

View file

@@ -110,7 +110,7 @@ pub const StringIndexAdapter = struct {
 }
 pub fn hash(_: @This(), adapted_key: []const u8) u64 {
-assert(mem.indexOfScalar(u8, adapted_key, 0) == null);
+assert(mem.findScalar(u8, adapted_key, 0) == null);
 return hashString(adapted_key);
 }
 };

View file

@@ -1674,14 +1674,14 @@ pub fn request(
 if (std.debug.runtime_safety) {
 for (options.extra_headers) |header| {
 assert(header.name.len != 0);
-assert(std.mem.indexOfScalar(u8, header.name, ':') == null);
-assert(std.mem.indexOfPosLinear(u8, header.name, 0, "\r\n") == null);
-assert(std.mem.indexOfPosLinear(u8, header.value, 0, "\r\n") == null);
+assert(std.mem.findScalar(u8, header.name, ':') == null);
+assert(std.mem.findPosLinear(u8, header.name, 0, "\r\n") == null);
+assert(std.mem.findPosLinear(u8, header.value, 0, "\r\n") == null);
 }
 for (options.privileged_headers) |header| {
 assert(header.name.len != 0);
-assert(std.mem.indexOfPosLinear(u8, header.name, 0, "\r\n") == null);
-assert(std.mem.indexOfPosLinear(u8, header.value, 0, "\r\n") == null);
+assert(std.mem.findPosLinear(u8, header.name, 0, "\r\n") == null);
+assert(std.mem.findPosLinear(u8, header.value, 0, "\r\n") == null);
 }
 }

View file

@@ -5,17 +5,17 @@ is_trailer: bool,
 pub fn init(bytes: []const u8) HeaderIterator {
 return .{
 .bytes = bytes,
-.index = std.mem.indexOfPosLinear(u8, bytes, 0, "\r\n").? + 2,
+.index = std.mem.findPosLinear(u8, bytes, 0, "\r\n").? + 2,
 .is_trailer = false,
 };
 }
 pub fn next(it: *HeaderIterator) ?std.http.Header {
-const end = std.mem.indexOfPosLinear(u8, it.bytes, it.index, "\r\n").?;
+const end = std.mem.findPosLinear(u8, it.bytes, it.index, "\r\n").?;
 if (it.index == end) { // found the trailer boundary (\r\n\r\n)
 if (it.is_trailer) return null;
-const next_end = std.mem.indexOfPosLinear(u8, it.bytes, end + 2, "\r\n") orelse
+const next_end = std.mem.findPosLinear(u8, it.bytes, end + 2, "\r\n") orelse
 return null;
 var kv_it = std.mem.splitScalar(u8, it.bytes[end + 2 .. next_end], ':');

View file

@@ -96,7 +96,7 @@ pub const Request = struct {
 if (first_line.len < 10)
 return error.HttpHeadersInvalid;
-const method_end = mem.indexOfScalar(u8, first_line, ' ') orelse
+const method_end = mem.findScalar(u8, first_line, ' ') orelse
 return error.HttpHeadersInvalid;
 const method = std.meta.stringToEnum(http.Method, first_line[0..method_end]) orelse
@@ -338,9 +338,9 @@ pub const Request = struct {
 if (std.debug.runtime_safety) {
 for (options.extra_headers) |header| {
 assert(header.name.len != 0);
-assert(std.mem.indexOfScalar(u8, header.name, ':') == null);
-assert(std.mem.indexOfPosLinear(u8, header.name, 0, "\r\n") == null);
-assert(std.mem.indexOfPosLinear(u8, header.value, 0, "\r\n") == null);
+assert(std.mem.findScalar(u8, header.name, ':') == null);
+assert(std.mem.findPosLinear(u8, header.name, 0, "\r\n") == null);
+assert(std.mem.findPosLinear(u8, header.value, 0, "\r\n") == null);
 }
 }
 try writeExpectContinue(request);

View file

@@ -435,7 +435,7 @@ test "general client/server API coverage" {
 if (mem.startsWith(u8, target, "/get")) {
 var response = try request.respondStreaming(&.{}, .{
-.content_length = if (mem.indexOf(u8, target, "?chunked") == null)
+.content_length = if (mem.find(u8, target, "?chunked") == null)
 14
 else
 null,

View file

@ -1758,7 +1758,7 @@ fn appendSlice(list: *std.array_list.Managed(u8), buf: []const u8, max_value_len
/// This function will not give meaningful results on non-numeric input. /// This function will not give meaningful results on non-numeric input.
pub fn isNumberFormattedLikeAnInteger(value: []const u8) bool { pub fn isNumberFormattedLikeAnInteger(value: []const u8) bool {
if (std.mem.eql(u8, value, "-0")) return false; if (std.mem.eql(u8, value, "-0")) return false;
return std.mem.indexOfAny(u8, value, ".eE") == null; return std.mem.findAny(u8, value, ".eE") == null;
} }
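The check above is compact enough to restate as a runnable sketch; `looksLikeInteger` is a local stand-in for the patched function, assuming the renamed `std.mem.findAny` from this changeset:

```zig
const std = @import("std");

// Local stand-in for the check above: a number string is "integer-like"
// when it has no fraction dot and no exponent marker.
fn looksLikeInteger(value: []const u8) bool {
    if (std.mem.eql(u8, value, "-0")) return false;
    return std.mem.findAny(u8, value, ".eE") == null;
}

test looksLikeInteger {
    try std.testing.expect(looksLikeInteger("42"));
    try std.testing.expect(!looksLikeInteger("4.2"));
    try std.testing.expect(!looksLikeInteger("4e2"));
}
```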
test { test {


@ -825,7 +825,7 @@ pub const section_64 = extern struct {
}; };
fn parseName(name: *const [16]u8) []const u8 { fn parseName(name: *const [16]u8) []const u8 {
const len = mem.indexOfScalar(u8, name, @as(u8, 0)) orelse name.len; const len = mem.findScalar(u8, name, @as(u8, 0)) orelse name.len;
return name[0..len]; return name[0..len];
} }


@ -1658,8 +1658,8 @@ pub const Mutable = struct {
// Handle trailing zero-words of divisor/dividend. These are not handled in the following // Handle trailing zero-words of divisor/dividend. These are not handled in the following
// algorithms. // algorithms.
// Note, there must be a non-zero limb for either. // Note, there must be a non-zero limb for either.
// const x_trailing = std.mem.indexOfScalar(Limb, x.limbs[0..x.len], 0).?; // const x_trailing = std.mem.findScalar(Limb, x.limbs[0..x.len], 0).?;
// const y_trailing = std.mem.indexOfScalar(Limb, y.limbs[0..y.len], 0).?; // const y_trailing = std.mem.findScalar(Limb, y.limbs[0..y.len], 0).?;
const x_trailing = for (x.limbs[0..x.len], 0..) |xi, i| { const x_trailing = for (x.limbs[0..x.len], 0..) |xi, i| {
if (xi != 0) break i; if (xi != 0) break i;


@ -998,7 +998,7 @@ fn lenSliceTo(ptr: anytype, comptime end: std.meta.Elem(@TypeOf(ptr))) usize {
.array => |array_info| { .array => |array_info| {
if (array_info.sentinel()) |s| { if (array_info.sentinel()) |s| {
if (s == end) { if (s == end) {
return indexOfSentinel(array_info.child, end, ptr); return findSentinel(array_info.child, end, ptr);
} }
} }
return findScalar(array_info.child, ptr, end) orelse array_info.len; return findScalar(array_info.child, ptr, end) orelse array_info.len;
@ -1007,7 +1007,7 @@ fn lenSliceTo(ptr: anytype, comptime end: std.meta.Elem(@TypeOf(ptr))) usize {
}, },
.many => if (ptr_info.sentinel()) |s| { .many => if (ptr_info.sentinel()) |s| {
if (s == end) { if (s == end) {
return indexOfSentinel(ptr_info.child, end, ptr); return findSentinel(ptr_info.child, end, ptr);
} }
// We're looking for something other than the sentinel, // We're looking for something other than the sentinel,
// but iterating past the sentinel would be a bug so we need // but iterating past the sentinel would be a bug so we need
@ -1018,12 +1018,12 @@ fn lenSliceTo(ptr: anytype, comptime end: std.meta.Elem(@TypeOf(ptr))) usize {
}, },
.c => { .c => {
assert(ptr != null); assert(ptr != null);
return indexOfSentinel(ptr_info.child, end, ptr); return findSentinel(ptr_info.child, end, ptr);
}, },
.slice => { .slice => {
if (ptr_info.sentinel()) |s| { if (ptr_info.sentinel()) |s| {
if (s == end) { if (s == end) {
return indexOfSentinel(ptr_info.child, s, ptr); return findSentinel(ptr_info.child, s, ptr);
} }
} }
return findScalar(ptr_info.child, ptr, end) orelse ptr.len; return findScalar(ptr_info.child, ptr, end) orelse ptr.len;
@ -1076,11 +1076,11 @@ pub fn len(value: anytype) usize {
.many => { .many => {
const sentinel = info.sentinel() orelse const sentinel = info.sentinel() orelse
@compileError("invalid type given to std.mem.len: " ++ @typeName(@TypeOf(value))); @compileError("invalid type given to std.mem.len: " ++ @typeName(@TypeOf(value)));
return indexOfSentinel(info.child, sentinel, value); return findSentinel(info.child, sentinel, value);
}, },
.c => { .c => {
assert(value != null); assert(value != null);
return indexOfSentinel(info.child, 0, value); return findSentinel(info.child, 0, value);
}, },
else => @compileError("invalid type given to std.mem.len: " ++ @typeName(@TypeOf(value))), else => @compileError("invalid type given to std.mem.len: " ++ @typeName(@TypeOf(value))),
}, },
@ -1166,7 +1166,7 @@ pub fn findSentinel(comptime T: type, comptime sentinel: T, p: [*:sentinel]const
return i; return i;
} }
test "indexOfSentinel vector paths" { test "findSentinel vector paths" {
const Types = [_]type{ u8, u16, u32, u64 }; const Types = [_]type{ u8, u16, u32, u64 };
const allocator = std.testing.allocator; const allocator = std.testing.allocator;
const page_size = std.heap.page_size_min; const page_size = std.heap.page_size_min;
@ -1189,7 +1189,7 @@ test "indexOfSentinel vector paths" {
const search_len = page_size / @sizeOf(T); const search_len = page_size / @sizeOf(T);
memory[start + search_len] = 0; memory[start + search_len] = 0;
for (0..block_len) |offset| { for (0..block_len) |offset| {
try testing.expectEqual(search_len - offset, indexOfSentinel(T, 0, @ptrCast(&memory[start + offset]))); try testing.expectEqual(search_len - offset, findSentinel(T, 0, @ptrCast(&memory[start + offset])));
} }
memory[start + search_len] = 0xaa; memory[start + search_len] = 0xaa;
@ -1197,7 +1197,7 @@ test "indexOfSentinel vector paths" {
const start_page_boundary = start + (page_size / @sizeOf(T)); const start_page_boundary = start + (page_size / @sizeOf(T));
memory[start_page_boundary + block_len] = 0; memory[start_page_boundary + block_len] = 0;
for (0..block_len) |offset| { for (0..block_len) |offset| {
try testing.expectEqual(2 * block_len - offset, indexOfSentinel(T, 0, @ptrCast(&memory[start_page_boundary - block_len + offset]))); try testing.expectEqual(2 * block_len - offset, findSentinel(T, 0, @ptrCast(&memory[start_page_boundary - block_len + offset])));
} }
} }
} }
@ -1257,7 +1257,7 @@ pub const indexOfScalar = findScalar;
/// Linear search for the index of a scalar value inside a slice. /// Linear search for the index of a scalar value inside a slice.
pub fn findScalar(comptime T: type, slice: []const T, value: T) ?usize { pub fn findScalar(comptime T: type, slice: []const T, value: T) ?usize {
return indexOfScalarPos(T, slice, 0, value); return findScalarPos(T, slice, 0, value);
} }
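A minimal usage sketch of the renamed `std.mem.findScalar`, assuming a `std` built from this changeset (sample string invented):

```zig
const std = @import("std");

test "findScalar returns the first matching index" {
    const line = "key=value=more";
    // Only the first '=' is reported; later occurrences are ignored.
    try std.testing.expect(std.mem.findScalar(u8, line, '=').? == 3);
    try std.testing.expect(std.mem.findScalar(u8, line, '#') == null);
}
```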
/// Deprecated in favor of `findScalarLast`. /// Deprecated in favor of `findScalarLast`.
@ -1340,7 +1340,7 @@ pub fn findScalarPos(comptime T: type, slice: []const T, start_index: usize, val
return null; return null;
} }
test indexOfScalarPos { test findScalarPos {
const Types = [_]type{ u8, u16, u32, u64 }; const Types = [_]type{ u8, u16, u32, u64 };
inline for (Types) |T| { inline for (Types) |T| {
@ -1349,7 +1349,7 @@ test indexOfScalarPos {
memory[memory.len - 1] = 0; memory[memory.len - 1] = 0;
for (0..memory.len) |i| { for (0..memory.len) |i| {
try testing.expectEqual(memory.len - i - 1, indexOfScalarPos(T, memory[i..], 0, 0).?); try testing.expectEqual(memory.len - i - 1, findScalarPos(T, memory[i..], 0, 0).?);
} }
} }
} }
@ -1360,7 +1360,7 @@ pub const indexOfAny = findAny;
/// Linear search for the index of any value in the provided list inside a slice. /// Linear search for the index of any value in the provided list inside a slice.
/// Returns null if no values are found. /// Returns null if no values are found.
pub fn findAny(comptime T: type, slice: []const T, values: []const T) ?usize { pub fn findAny(comptime T: type, slice: []const T, values: []const T) ?usize {
return indexOfAnyPos(T, slice, 0, values); return findAnyPos(T, slice, 0, values);
} }
/// Deprecated in favor of `findLastAny`. /// Deprecated in favor of `findLastAny`.
@ -1401,7 +1401,7 @@ pub const indexOfNone = findNone;
/// ///
/// Comparable to `strspn` in the C standard library. /// Comparable to `strspn` in the C standard library.
pub fn findNone(comptime T: type, slice: []const T, values: []const T) ?usize { pub fn findNone(comptime T: type, slice: []const T, values: []const T) ?usize {
return indexOfNonePos(T, slice, 0, values); return findNonePos(T, slice, 0, values);
} }
test findNone { test findNone {
@ -1412,7 +1412,7 @@ test findNone {
try testing.expect(findNone(u8, "123123", "123") == null); try testing.expect(findNone(u8, "123123", "123") == null);
try testing.expect(findNone(u8, "333333", "123") == null); try testing.expect(findNone(u8, "333333", "123") == null);
try testing.expect(indexOfNonePos(u8, "abc123", 3, "321") == null); try testing.expect(findNonePos(u8, "abc123", 3, "321") == null);
} }
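Since the doc comment compares `findNone` to C's `strspn`, a small sketch of that behaviour under the new name (assuming this changeset's `std`):

```zig
const std = @import("std");

test "findNone skips a leading run of allowed values" {
    // Like strspn: index of the first byte that is *not* in the set.
    try std.testing.expect(std.mem.findNone(u8, "123abc", "0123456789").? == 3);
    // Every byte is in the set, so there is nothing to report.
    try std.testing.expect(std.mem.findNone(u8, "2024", "0123456789") == null);
}
```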
/// Deprecated in favor of `findLastNone`. /// Deprecated in favor of `findLastNone`.
@ -1457,7 +1457,7 @@ pub const indexOf = find;
/// Uses Boyer-Moore-Horspool algorithm on large inputs; linear search on small inputs. /// Uses Boyer-Moore-Horspool algorithm on large inputs; linear search on small inputs.
/// Returns null if needle is not found. /// Returns null if needle is not found.
pub fn find(comptime T: type, haystack: []const T, needle: []const T) ?usize { pub fn find(comptime T: type, haystack: []const T, needle: []const T) ?usize {
return indexOfPos(T, haystack, 0, needle); return findPos(T, haystack, 0, needle);
} }
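And the plain substring search under its new name; `find` picks Boyer-Moore-Horspool or a linear scan internally, so callers only see the index. A sketch assuming this changeset's `std`:

```zig
const std = @import("std");

test "find locates a needle slice" {
    const haystack = "one two three four";
    try std.testing.expect(std.mem.find(u8, haystack, "three").? == 8);
    try std.testing.expect(std.mem.find(u8, haystack, "five") == null);
    // An empty needle matches at the start, mirroring the tests further down.
    try std.testing.expect(std.mem.find(u8, haystack, "").? == 0);
}
```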
/// Deprecated in favor of `findLastLinear`. /// Deprecated in favor of `findLastLinear`.
@ -1478,7 +1478,7 @@ pub fn findLastLinear(comptime T: type, haystack: []const T, needle: []const T)
pub const indexOfPosLinear = findPosLinear; pub const indexOfPosLinear = findPosLinear;
/// Consider using `indexOfPos` instead of this, which will automatically use a /// Consider using `findPos` instead of this, which will automatically use a
/// more sophisticated algorithm on larger inputs. /// more sophisticated algorithm on larger inputs.
pub fn findPosLinear(comptime T: type, haystack: []const T, start_index: usize, needle: []const T) ?usize { pub fn findPosLinear(comptime T: type, haystack: []const T, start_index: usize, needle: []const T) ?usize {
if (needle.len > haystack.len) return null; if (needle.len > haystack.len) return null;
@ -1572,17 +1572,17 @@ pub fn findLast(comptime T: type, haystack: []const T, needle: []const T) ?usize
/// Deprecated in favor of `findPos`. /// Deprecated in favor of `findPos`.
pub const indexOfPos = findPos; pub const indexOfPos = findPos;
/// Uses Boyer-Moore-Horspool algorithm on large inputs; `indexOfPosLinear` on small inputs. /// Uses Boyer-Moore-Horspool algorithm on large inputs; `findPosLinear` on small inputs.
pub fn findPos(comptime T: type, haystack: []const T, start_index: usize, needle: []const T) ?usize { pub fn findPos(comptime T: type, haystack: []const T, start_index: usize, needle: []const T) ?usize {
if (needle.len > haystack.len) return null; if (needle.len > haystack.len) return null;
if (needle.len < 2) { if (needle.len < 2) {
if (needle.len == 0) return start_index; if (needle.len == 0) return start_index;
// indexOfScalarPos is significantly faster than indexOfPosLinear // findScalarPos is significantly faster than findPosLinear
return indexOfScalarPos(T, haystack, start_index, needle[0]); return findScalarPos(T, haystack, start_index, needle[0]);
} }
if (!std.meta.hasUniqueRepresentation(T) or haystack.len < 52 or needle.len <= 4) if (!std.meta.hasUniqueRepresentation(T) or haystack.len < 52 or needle.len <= 4)
return indexOfPosLinear(T, haystack, start_index, needle); return findPosLinear(T, haystack, start_index, needle);
const haystack_bytes = sliceAsBytes(haystack); const haystack_bytes = sliceAsBytes(haystack);
const needle_bytes = sliceAsBytes(needle); const needle_bytes = sliceAsBytes(needle);
@ -1601,43 +1601,43 @@ pub fn findPos(comptime T: type, haystack: []const T, start_index: usize, needle
return null; return null;
} }
test indexOf { test find {
try testing.expect(indexOf(u8, "one two three four five six seven eight nine ten eleven", "three four").? == 8); try testing.expect(find(u8, "one two three four five six seven eight nine ten eleven", "three four").? == 8);
try testing.expect(lastIndexOf(u8, "one two three four five six seven eight nine ten eleven", "three four").? == 8); try testing.expect(lastIndexOf(u8, "one two three four five six seven eight nine ten eleven", "three four").? == 8);
try testing.expect(indexOf(u8, "one two three four five six seven eight nine ten eleven", "two two") == null); try testing.expect(find(u8, "one two three four five six seven eight nine ten eleven", "two two") == null);
try testing.expect(lastIndexOf(u8, "one two three four five six seven eight nine ten eleven", "two two") == null); try testing.expect(lastIndexOf(u8, "one two three four five six seven eight nine ten eleven", "two two") == null);
try testing.expect(indexOf(u8, "one two three four five six seven eight nine ten", "").? == 0); try testing.expect(find(u8, "one two three four five six seven eight nine ten", "").? == 0);
try testing.expect(lastIndexOf(u8, "one two three four five six seven eight nine ten", "").? == 48); try testing.expect(lastIndexOf(u8, "one two three four five six seven eight nine ten", "").? == 48);
try testing.expect(indexOf(u8, "one two three four", "four").? == 14); try testing.expect(find(u8, "one two three four", "four").? == 14);
try testing.expect(lastIndexOf(u8, "one two three two four", "two").? == 14); try testing.expect(lastIndexOf(u8, "one two three two four", "two").? == 14);
try testing.expect(indexOf(u8, "one two three four", "gour") == null); try testing.expect(find(u8, "one two three four", "gour") == null);
try testing.expect(lastIndexOf(u8, "one two three four", "gour") == null); try testing.expect(lastIndexOf(u8, "one two three four", "gour") == null);
try testing.expect(indexOf(u8, "foo", "foo").? == 0); try testing.expect(find(u8, "foo", "foo").? == 0);
try testing.expect(lastIndexOf(u8, "foo", "foo").? == 0); try testing.expect(lastIndexOf(u8, "foo", "foo").? == 0);
try testing.expect(indexOf(u8, "foo", "fool") == null); try testing.expect(find(u8, "foo", "fool") == null);
try testing.expect(lastIndexOf(u8, "foo", "lfoo") == null); try testing.expect(lastIndexOf(u8, "foo", "lfoo") == null);
try testing.expect(lastIndexOf(u8, "foo", "fool") == null); try testing.expect(lastIndexOf(u8, "foo", "fool") == null);
try testing.expect(indexOf(u8, "foo foo", "foo").? == 0); try testing.expect(find(u8, "foo foo", "foo").? == 0);
try testing.expect(lastIndexOf(u8, "foo foo", "foo").? == 4); try testing.expect(lastIndexOf(u8, "foo foo", "foo").? == 4);
try testing.expect(lastIndexOfAny(u8, "boo, cat", "abo").? == 6); try testing.expect(lastIndexOfAny(u8, "boo, cat", "abo").? == 6);
try testing.expect(findScalarLast(u8, "boo", 'o').? == 2); try testing.expect(findScalarLast(u8, "boo", 'o').? == 2);
} }
test "indexOf multibyte" { test "find multibyte" {
{ {
// make haystack and needle long enough to trigger Boyer-Moore-Horspool algorithm // make haystack and needle long enough to trigger Boyer-Moore-Horspool algorithm
const haystack = [1]u16{0} ** 100 ++ [_]u16{ 0xbbaa, 0xccbb, 0xddcc, 0xeedd, 0xffee, 0x00ff }; const haystack = [1]u16{0} ** 100 ++ [_]u16{ 0xbbaa, 0xccbb, 0xddcc, 0xeedd, 0xffee, 0x00ff };
const needle = [_]u16{ 0xbbaa, 0xccbb, 0xddcc, 0xeedd, 0xffee }; const needle = [_]u16{ 0xbbaa, 0xccbb, 0xddcc, 0xeedd, 0xffee };
try testing.expectEqual(indexOfPos(u16, &haystack, 0, &needle), 100); try testing.expectEqual(findPos(u16, &haystack, 0, &needle), 100);
// check for misaligned false positives (little and big endian) // check for misaligned false positives (little and big endian)
const needleLE = [_]u16{ 0xbbbb, 0xcccc, 0xdddd, 0xeeee, 0xffff }; const needleLE = [_]u16{ 0xbbbb, 0xcccc, 0xdddd, 0xeeee, 0xffff };
try testing.expectEqual(indexOfPos(u16, &haystack, 0, &needleLE), null); try testing.expectEqual(findPos(u16, &haystack, 0, &needleLE), null);
const needleBE = [_]u16{ 0xaacc, 0xbbdd, 0xccee, 0xddff, 0xee00 }; const needleBE = [_]u16{ 0xaacc, 0xbbdd, 0xccee, 0xddff, 0xee00 };
try testing.expectEqual(indexOfPos(u16, &haystack, 0, &needleBE), null); try testing.expectEqual(findPos(u16, &haystack, 0, &needleBE), null);
} }
{ {
@ -1654,8 +1654,8 @@ test "indexOf multibyte" {
} }
} }
test "indexOfPos empty needle" { test "findPos empty needle" {
try testing.expectEqual(indexOfPos(u8, "abracadabra", 5, ""), 5); try testing.expectEqual(findPos(u8, "abracadabra", 5, ""), 5);
} }
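For the `Pos` variants, the `start_index` argument simply resumes the search; a short sketch with the renamed `findPos` (invented input):

```zig
const std = @import("std");

test "findPos resumes searching at a given index" {
    const s = "abc abc abc";
    const first = std.mem.findPos(u8, s, 0, "abc").?;
    try std.testing.expect(first == 0);
    // Restarting just past the first hit reports the next occurrence.
    try std.testing.expect(std.mem.findPos(u8, s, first + 3, "abc").? == 4);
}
```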
/// Returns the number of needles inside the haystack /// Returns the number of needles inside the haystack
@ -1667,7 +1667,7 @@ pub fn count(comptime T: type, haystack: []const T, needle: []const T) usize {
var i: usize = 0; var i: usize = 0;
var found: usize = 0; var found: usize = 0;
while (indexOfPos(T, haystack, i, needle)) |idx| { while (findPos(T, haystack, i, needle)) |idx| {
i = idx + needle.len; i = idx + needle.len;
found += 1; found += 1;
} }
@ -1737,7 +1737,7 @@ pub fn containsAtLeast(comptime T: type, haystack: []const T, expected_count: us
var i: usize = 0; var i: usize = 0;
var found: usize = 0; var found: usize = 0;
while (indexOfPos(T, haystack, i, needle)) |idx| { while (findPos(T, haystack, i, needle)) |idx| {
i = idx + needle.len; i = idx + needle.len;
found += 1; found += 1;
if (found == expected_count) return true; if (found == expected_count) return true;
@ -3362,9 +3362,9 @@ pub fn SplitIterator(comptime T: type, comptime delimiter_type: DelimiterType) t
pub fn next(self: *Self) ?[]const T { pub fn next(self: *Self) ?[]const T {
const start = self.index orelse return null; const start = self.index orelse return null;
const end = if (switch (delimiter_type) { const end = if (switch (delimiter_type) {
.sequence => indexOfPos(T, self.buffer, start, self.delimiter), .sequence => findPos(T, self.buffer, start, self.delimiter),
.any => indexOfAnyPos(T, self.buffer, start, self.delimiter), .any => findAnyPos(T, self.buffer, start, self.delimiter),
.scalar => indexOfScalarPos(T, self.buffer, start, self.delimiter), .scalar => findScalarPos(T, self.buffer, start, self.delimiter),
}) |delim_start| blk: { }) |delim_start| blk: {
self.index = delim_start + switch (delimiter_type) { self.index = delim_start + switch (delimiter_type) {
.sequence => self.delimiter.len, .sequence => self.delimiter.len,
@ -3383,9 +3383,9 @@ pub fn SplitIterator(comptime T: type, comptime delimiter_type: DelimiterType) t
pub fn peek(self: *Self) ?[]const T { pub fn peek(self: *Self) ?[]const T {
const start = self.index orelse return null; const start = self.index orelse return null;
const end = if (switch (delimiter_type) { const end = if (switch (delimiter_type) {
.sequence => indexOfPos(T, self.buffer, start, self.delimiter), .sequence => findPos(T, self.buffer, start, self.delimiter),
.any => indexOfAnyPos(T, self.buffer, start, self.delimiter), .any => findAnyPos(T, self.buffer, start, self.delimiter),
.scalar => indexOfScalarPos(T, self.buffer, start, self.delimiter), .scalar => findScalarPos(T, self.buffer, start, self.delimiter),
}) |delim_start| delim_start else self.buffer.len; }) |delim_start| delim_start else self.buffer.len;
return self.buffer[start..end]; return self.buffer[start..end];
} }
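The `SplitIterator` changes above only swap in the new `find*` names; the observable behaviour of `next`/`peek` is unchanged. A short sketch via `splitScalar`, assuming this changeset's `std`:

```zig
const std = @import("std");

test "splitScalar drives the next/peek logic shown above" {
    var it = std.mem.splitScalar(u8, "a,b,,c", ',');
    try std.testing.expectEqualStrings("a", it.peek().?); // peek does not advance
    try std.testing.expectEqualStrings("a", it.next().?);
    try std.testing.expectEqualStrings("b", it.next().?);
    try std.testing.expectEqualStrings("", it.next().?); // empty field between delimiters
    try std.testing.expectEqualStrings("c", it.next().?);
    try std.testing.expect(it.next() == null);
}
```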


@ -113,7 +113,7 @@ pub fn getFdPath(fd: std.posix.fd_t, out_buffer: *[max_path_bytes]u8) std.posix.
// errno values to expect when command is F.GETPATH... // errno values to expect when command is F.GETPATH...
else => |err| return posix.unexpectedErrno(err), else => |err| return posix.unexpectedErrno(err),
} }
const len = mem.indexOfScalar(u8, out_buffer[0..], 0) orelse max_path_bytes; const len = mem.findScalar(u8, out_buffer[0..], 0) orelse max_path_bytes;
return out_buffer[0..len]; return out_buffer[0..len];
}, },
.linux, .serenity => { .linux, .serenity => {
@ -150,7 +150,7 @@ pub fn getFdPath(fd: std.posix.fd_t, out_buffer: *[max_path_bytes]u8) std.posix.
.BADF => return error.FileNotFound, .BADF => return error.FileNotFound,
else => |err| return posix.unexpectedErrno(err), else => |err| return posix.unexpectedErrno(err),
} }
const len = mem.indexOfScalar(u8, &kfile.path, 0) orelse max_path_bytes; const len = mem.findScalar(u8, &kfile.path, 0) orelse max_path_bytes;
if (len == 0) return error.NameTooLong; if (len == 0) return error.NameTooLong;
const result = out_buffer[0..len]; const result = out_buffer[0..len];
@memcpy(result, kfile.path[0..len]); @memcpy(result, kfile.path[0..len]);
@ -164,7 +164,7 @@ pub fn getFdPath(fd: std.posix.fd_t, out_buffer: *[max_path_bytes]u8) std.posix.
.RANGE => return error.NameTooLong, .RANGE => return error.NameTooLong,
else => |err| return posix.unexpectedErrno(err), else => |err| return posix.unexpectedErrno(err),
} }
const len = mem.indexOfScalar(u8, out_buffer[0..], 0) orelse max_path_bytes; const len = mem.findScalar(u8, out_buffer[0..], 0) orelse max_path_bytes;
return out_buffer[0..len]; return out_buffer[0..len];
}, },
.netbsd => { .netbsd => {
@ -178,7 +178,7 @@ pub fn getFdPath(fd: std.posix.fd_t, out_buffer: *[max_path_bytes]u8) std.posix.
.RANGE => return error.NameTooLong, .RANGE => return error.NameTooLong,
else => |err| return posix.unexpectedErrno(err), else => |err| return posix.unexpectedErrno(err),
} }
const len = mem.indexOfScalar(u8, out_buffer[0..], 0) orelse max_path_bytes; const len = mem.findScalar(u8, out_buffer[0..], 0) orelse max_path_bytes;
return out_buffer[0..len]; return out_buffer[0..len];
}, },
else => unreachable, // made unreachable by isGetFdPathSupportedOnTarget above else => unreachable, // made unreachable by isGetFdPathSupportedOnTarget above
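`getFdPath` keeps using the find-the-NUL idiom, just under the new name; a sketch of that idiom in isolation (buffer contents invented):

```zig
const std = @import("std");

test "trim a NUL-padded buffer with findScalar" {
    var buf = [_]u8{0} ** 16;
    @memcpy(buf[0..8], "/tmp/dir");
    // The string ends at the first 0 byte, or spans the whole buffer
    // if no terminator is present -- same pattern as getFdPath above.
    const len = std.mem.findScalar(u8, &buf, 0) orelse buf.len;
    try std.testing.expectEqualStrings("/tmp/dir", buf[0..len]);
}
```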


@ -4092,7 +4092,7 @@ inline fn skipKernelLessThan(required: std.SemanticVersion) !void {
const release = mem.sliceTo(&uts.release, 0); const release = mem.sliceTo(&uts.release, 0);
// Strips potential extra, as kernel version might not be semver compliant, example "6.8.9-300.fc40.x86_64" // Strips potential extra, as kernel version might not be semver compliant, example "6.8.9-300.fc40.x86_64"
const extra_index = std.mem.indexOfAny(u8, release, "-+"); const extra_index = std.mem.findAny(u8, release, "-+");
const stripped = release[0..(extra_index orelse release.len)]; const stripped = release[0..(extra_index orelse release.len)];
// Make sure the input don't rely on the extra we just stripped // Make sure the input don't rely on the extra we just stripped
try testing.expect(required.pre == null and required.build == null); try testing.expect(required.pre == null and required.build == null);
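The kernel-version stripping above is self-contained enough to restate; the sample release string comes from the comment in the hunk (assuming `std.mem.findAny` from this changeset):

```zig
const std = @import("std");

test "strip a non-semver suffix with findAny" {
    const release = "6.8.9-300.fc40.x86_64";
    // Cut at the first '-' or '+', as skipKernelLessThan does above.
    const extra_index = std.mem.findAny(u8, release, "-+");
    const stripped = release[0..(extra_index orelse release.len)];
    try std.testing.expectEqualStrings("6.8.9", stripped);
}
```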


@ -1412,7 +1412,7 @@ pub fn GetFinalPathNameByHandle(
}; };
} }
const file_path_begin_index = mem.indexOfPos(u16, final_path, device_prefix.len, &[_]u16{'\\'}) orelse unreachable; const file_path_begin_index = mem.findPos(u16, final_path, device_prefix.len, &[_]u16{'\\'}) orelse unreachable;
const volume_name_u16 = final_path[0..file_path_begin_index]; const volume_name_u16 = final_path[0..file_path_begin_index];
const device_name_u16 = volume_name_u16[device_prefix.len..]; const device_name_u16 = volume_name_u16[device_prefix.len..];
const file_name_u16 = final_path[file_path_begin_index..]; const file_name_u16 = final_path[file_path_begin_index..];
@ -1494,7 +1494,7 @@ pub fn GetFinalPathNameByHandle(
const total_len = drive_letter.len + file_name_u16.len; const total_len = drive_letter.len + file_name_u16.len;
// Validate that DOS does not contain any spurious nul bytes. // Validate that DOS does not contain any spurious nul bytes.
if (mem.indexOfScalar(u16, out_buffer[0..total_len], 0)) |_| { if (mem.findScalar(u16, out_buffer[0..total_len], 0)) |_| {
return error.BadPathName; return error.BadPathName;
} }
@ -1544,7 +1544,7 @@ pub fn GetFinalPathNameByHandle(
const total_len = volume_path.len + file_name_u16.len; const total_len = volume_path.len + file_name_u16.len;
// Validate that DOS does not contain any spurious nul bytes. // Validate that DOS does not contain any spurious nul bytes.
if (mem.indexOfScalar(u16, out_buffer[0..total_len], 0)) |_| { if (mem.findScalar(u16, out_buffer[0..total_len], 0)) |_| {
return error.BadPathName; return error.BadPathName;
} }


@ -1773,7 +1773,7 @@ pub fn execvpeZ_expandArg0(
envp: [*:null]const ?[*:0]const u8, envp: [*:null]const ?[*:0]const u8,
) ExecveError { ) ExecveError {
const file_slice = mem.sliceTo(file, 0); const file_slice = mem.sliceTo(file, 0);
if (mem.indexOfScalar(u8, file_slice, '/') != null) return execveZ(file, child_argv, envp); if (mem.findScalar(u8, file_slice, '/') != null) return execveZ(file, child_argv, envp);
const PATH = getenvZ("PATH") orelse "/usr/local/bin:/bin/:/usr/bin"; const PATH = getenvZ("PATH") orelse "/usr/local/bin:/bin/:/usr/bin";
// Use of PATH_MAX here is valid as the path_buf will be passed // Use of PATH_MAX here is valid as the path_buf will be passed
@ -1829,7 +1829,7 @@ pub fn getenv(key: []const u8) ?[:0]const u8 {
if (native_os == .windows) { if (native_os == .windows) {
@compileError("std.posix.getenv is unavailable for Windows because environment strings are in WTF-16 format. See std.process.getEnvVarOwned for a cross-platform API or std.process.getenvW for a Windows-specific API."); @compileError("std.posix.getenv is unavailable for Windows because environment strings are in WTF-16 format. See std.process.getEnvVarOwned for a cross-platform API or std.process.getenvW for a Windows-specific API.");
} }
if (mem.indexOfScalar(u8, key, '=') != null) { if (mem.findScalar(u8, key, '=') != null) {
return null; return null;
} }
if (builtin.link_libc) { if (builtin.link_libc) {
@ -6663,7 +6663,7 @@ pub fn unexpectedErrno(err: E) UnexpectedError {
/// Used to convert a slice to a null terminated slice on the stack. /// Used to convert a slice to a null terminated slice on the stack.
pub fn toPosixPath(file_path: []const u8) error{NameTooLong}![PATH_MAX - 1:0]u8 { pub fn toPosixPath(file_path: []const u8) error{NameTooLong}![PATH_MAX - 1:0]u8 {
if (std.debug.runtime_safety) assert(mem.indexOfScalar(u8, file_path, 0) == null); if (std.debug.runtime_safety) assert(mem.findScalar(u8, file_path, 0) == null);
var path_with_null: [PATH_MAX - 1:0]u8 = undefined; var path_with_null: [PATH_MAX - 1:0]u8 = undefined;
// >= rather than > to make room for the null byte // >= rather than > to make room for the null byte
if (file_path.len >= PATH_MAX) return error.NameTooLong; if (file_path.len >= PATH_MAX) return error.NameTooLong;


@ -619,7 +619,7 @@ test "siftUp in remove" {
try queue.addSlice(&.{ 0, 1, 100, 2, 3, 101, 102, 4, 5, 6, 7, 103, 104, 105, 106, 8 }); try queue.addSlice(&.{ 0, 1, 100, 2, 3, 101, 102, 4, 5, 6, 7, 103, 104, 105, 106, 8 });
_ = queue.removeIndex(std.mem.indexOfScalar(u32, queue.items[0..queue.count()], 102).?); _ = queue.removeIndex(std.mem.findScalar(u32, queue.items[0..queue.count()], 102).?);
const sorted_items = [_]u32{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 100, 101, 103, 104, 105, 106 }; const sorted_items = [_]u32{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 100, 101, 103, 104, 105, 106 };
for (sorted_items) |e| { for (sorted_items) |e| {


@ -546,7 +546,7 @@ pub fn getenvW(key: [*:0]const u16) ?[:0]const u16 {
} }
const key_slice = mem.sliceTo(key, 0); const key_slice = mem.sliceTo(key, 0);
// '=' anywhere but the start makes this an invalid environment variable name // '=' anywhere but the start makes this an invalid environment variable name
if (key_slice.len > 0 and std.mem.indexOfScalar(u16, key_slice[1..], '=') != null) { if (key_slice.len > 0 and std.mem.findScalar(u16, key_slice[1..], '=') != null) {
return null; return null;
} }
const ptr = windows.peb().ProcessParameters.Environment; const ptr = windows.peb().ProcessParameters.Environment;
@ -559,7 +559,7 @@ pub fn getenvW(key: [*:0]const u16) ?[:0]const u16 {
// if it's the first character. // if it's the first character.
// https://devblogs.microsoft.com/oldnewthing/20100506-00/?p=14133 // https://devblogs.microsoft.com/oldnewthing/20100506-00/?p=14133
const equal_search_start: usize = if (key_value[0] == '=') 1 else 0; const equal_search_start: usize = if (key_value[0] == '=') 1 else 0;
const equal_index = std.mem.indexOfScalarPos(u16, key_value, equal_search_start, '=') orelse { const equal_index = std.mem.findScalarPos(u16, key_value, equal_search_start, '=') orelse {
// This is enforced by CreateProcess. // This is enforced by CreateProcess.
// If violated, CreateProcess will fail with INVALID_PARAMETER. // If violated, CreateProcess will fail with INVALID_PARAMETER.
unreachable; // must contain a = unreachable; // must contain a =


@ -1810,7 +1810,7 @@ fn argvToScriptCommandLineWindows(
// //
// If the script path does not have a path separator, then we know its relative to CWD and // If the script path does not have a path separator, then we know its relative to CWD and
// we can just put `.\` in the front. // we can just put `.\` in the front.
if (mem.indexOfAny(u16, script_path, &[_]u16{ mem.nativeToLittle(u16, '\\'), mem.nativeToLittle(u16, '/') }) == null) { if (mem.findAny(u16, script_path, &[_]u16{ mem.nativeToLittle(u16, '\\'), mem.nativeToLittle(u16, '/') }) == null) {
try buf.appendSlice(".\\"); try buf.appendSlice(".\\");
} }
// Note that we don't do any escaping/mitigations for this argument, since the relevant // Note that we don't do any escaping/mitigations for this argument, since the relevant
@ -1825,7 +1825,7 @@ fn argvToScriptCommandLineWindows(
// always a mistake to include these characters in argv, so it's // always a mistake to include these characters in argv, so it's
// an error condition in order to ensure that the return of this // an error condition in order to ensure that the return of this
// function can always roundtrip through cmd.exe. // function can always roundtrip through cmd.exe.
if (std.mem.indexOfAny(u8, arg, "\x00\r\n") != null) { if (std.mem.findAny(u8, arg, "\x00\r\n") != null) {
return error.InvalidBatchScriptArg; return error.InvalidBatchScriptArg;
} }


@ -71,7 +71,7 @@ pub const Diagnostics = struct {
const start_index: usize = if (path[0] == '/') 1 else 0; const start_index: usize = if (path[0] == '/') 1 else 0;
const end_index: usize = if (path[path.len - 1] == '/') path.len - 1 else path.len; const end_index: usize = if (path[path.len - 1] == '/') path.len - 1 else path.len;
const buf = path[start_index..end_index]; const buf = path[start_index..end_index];
if (std.mem.indexOfScalarPos(u8, buf, 0, '/')) |idx| { if (std.mem.findScalarPos(u8, buf, 0, '/')) |idx| {
return buf[0..idx]; return buf[0..idx];
} }
@ -569,7 +569,7 @@ pub const PaxIterator = struct {
} }
fn hasNull(str: []const u8) bool { fn hasNull(str: []const u8) bool {
return (std.mem.indexOfScalar(u8, str, 0)) != null; return (std.mem.findScalar(u8, str, 0)) != null;
} }
// Checks that each record ends with new line. // Checks that each record ends with new line.
@ -667,7 +667,7 @@ fn stripComponents(path: []const u8, count: u32) []const u8 {
var i: usize = 0; var i: usize = 0;
var c = count; var c = count;
while (c > 0) : (c -= 1) { while (c > 0) : (c -= 1) {
if (std.mem.indexOfScalarPos(u8, path, i, '/')) |pos| { if (std.mem.findScalarPos(u8, path, i, '/')) |pos| {
i = pos + 1; i = pos + 1;
} else { } else {
i = path.len; i = path.len;
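`stripComponents` hops from '/' to '/' with `findScalarPos`; below is a local re-sketch of that loop, with a hypothetical name since the real function is private to the tar module:

```zig
const std = @import("std");

// Local re-sketch of stripComponents above: drop `count` leading
// path components by jumping from '/' to '/'.
fn stripLeading(path: []const u8, count: u32) []const u8 {
    var i: usize = 0;
    var c = count;
    while (c > 0) : (c -= 1) {
        if (std.mem.findScalarPos(u8, path, i, '/')) |pos| {
            i = pos + 1;
        } else {
            i = path.len;
        }
    }
    return path[i..];
}

test stripLeading {
    try std.testing.expectEqualStrings("c.txt", stripLeading("a/b/c.txt", 2));
    try std.testing.expectEqualStrings("", stripLeading("a/b", 5));
}
```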


@ -643,7 +643,7 @@ pub fn tmpDir(opts: std.fs.Dir.OpenOptions) TmpDir {
} }
pub fn expectEqualStrings(expected: []const u8, actual: []const u8) !void { pub fn expectEqualStrings(expected: []const u8, actual: []const u8) !void {
if (std.mem.indexOfDiff(u8, actual, expected)) |diff_index| { if (std.mem.findDiff(u8, actual, expected)) |diff_index| {
if (@inComptime()) { if (@inComptime()) {
@compileError(std.fmt.comptimePrint("\nexpected:\n{s}\nfound:\n{s}\ndifference starts at index {d}", .{ @compileError(std.fmt.comptimePrint("\nexpected:\n{s}\nfound:\n{s}\ndifference starts at index {d}", .{
expected, actual, diff_index, expected, actual, diff_index,
@ -992,7 +992,7 @@ fn printIndicatorLine(source: []const u8, indicator_index: usize) void {
line_begin + 1 line_begin + 1
else else
0; 0;
const line_end_index = if (std.mem.indexOfScalar(u8, source[indicator_index..], '\n')) |line_end| const line_end_index = if (std.mem.findScalar(u8, source[indicator_index..], '\n')) |line_end|
(indicator_index + line_end) (indicator_index + line_end)
else else
source.len; source.len;
@ -1008,7 +1008,7 @@ fn printIndicatorLine(source: []const u8, indicator_index: usize) void {
fn printWithVisibleNewlines(source: []const u8) void { fn printWithVisibleNewlines(source: []const u8) void {
var i: usize = 0; var i: usize = 0;
while (std.mem.indexOfScalar(u8, source[i..], '\n')) |nl| : (i += nl + 1) { while (std.mem.findScalar(u8, source[i..], '\n')) |nl| : (i += nl + 1) {
printLine(source[i..][0..nl]); printLine(source[i..][0..nl]);
} }
print("{s}␃\n", .{source[i..]}); // End of Text symbol (ETX) print("{s}␃\n", .{source[i..]}); // End of Text symbol (ETX)


@ -234,7 +234,7 @@ pub fn tokenLocation(self: Ast, start_offset: ByteOffset, token_index: TokenInde
const token_start = self.tokenStart(token_index); const token_start = self.tokenStart(token_index);
// Scan to by line until we go past the token start // Scan to by line until we go past the token start
while (std.mem.indexOfScalarPos(u8, self.source, loc.line_start, '\n')) |i| { while (std.mem.findScalarPos(u8, self.source, loc.line_start, '\n')) |i| {
if (i >= token_start) { if (i >= token_start) {
break; // Went past break; // Went past
} }
@ -1315,7 +1315,7 @@ pub fn lastToken(tree: Ast, node: Node.Index) TokenIndex {
pub fn tokensOnSameLine(tree: Ast, token1: TokenIndex, token2: TokenIndex) bool { pub fn tokensOnSameLine(tree: Ast, token1: TokenIndex, token2: TokenIndex) bool {
const source = tree.source[tree.tokenStart(token1)..tree.tokenStart(token2)]; const source = tree.source[tree.tokenStart(token1)..tree.tokenStart(token2)];
return mem.indexOfScalar(u8, source, '\n') == null; return mem.findScalar(u8, source, '\n') == null;
} }
pub fn getNodeSource(tree: Ast, node: Node.Index) []const u8 { pub fn getNodeSource(tree: Ast, node: Node.Index) []const u8 {


@ -1420,7 +1420,7 @@ fn renderFor(r: *Render, for_node: Ast.full.For, space: Space) Error!void {
try renderParamList(r, lparen, for_node.ast.inputs, .space); try renderParamList(r, lparen, for_node.ast.inputs, .space);
var cur = for_node.payload_token; var cur = for_node.payload_token;
const pipe = std.mem.indexOfScalarPos(std.zig.Token.Tag, token_tags, cur, .pipe).?; const pipe = std.mem.findScalarPos(std.zig.Token.Tag, token_tags, cur, .pipe).?;
if (tree.tokenTag(@intCast(pipe - 1)) == .comma) { if (tree.tokenTag(@intCast(pipe - 1)) == .comma) {
try ais.pushIndent(.normal); try ais.pushIndent(.normal);
try renderToken(r, cur - 1, .newline); // | try renderToken(r, cur - 1, .newline); // |
@ -2197,7 +2197,7 @@ fn renderArrayInit(
try renderExpression(&sub_render, expr, .none); try renderExpression(&sub_render, expr, .none);
const written = sub_expr_buffer.written(); const written = sub_expr_buffer.written();
const width = written.len - start; const width = written.len - start;
const this_contains_newline = mem.indexOfScalar(u8, written[start..], '\n') != null; const this_contains_newline = mem.findScalar(u8, written[start..], '\n') != null;
contains_newline = contains_newline or this_contains_newline; contains_newline = contains_newline or this_contains_newline;
expr_widths[i] = width; expr_widths[i] = width;
expr_newlines[i] = this_contains_newline; expr_newlines[i] = this_contains_newline;
@ -2221,7 +2221,7 @@ fn renderArrayInit(
const written = sub_expr_buffer.written(); const written = sub_expr_buffer.written();
const width = written.len - start - 2; const width = written.len - start - 2;
const this_contains_newline = mem.indexOfScalar(u8, written[start .. written.len - 1], '\n') != null; const this_contains_newline = mem.findScalar(u8, written[start .. written.len - 1], '\n') != null;
contains_newline = contains_newline or this_contains_newline; contains_newline = contains_newline or this_contains_newline;
expr_widths[i] = width; expr_widths[i] = width;
expr_newlines[i] = contains_newline; expr_newlines[i] = contains_newline;
@ -3092,7 +3092,7 @@ fn hasComment(tree: Ast, start_token: Ast.TokenIndex, end_token: Ast.TokenIndex)
const token: Ast.TokenIndex = @intCast(i); const token: Ast.TokenIndex = @intCast(i);
const start = tree.tokenStart(token) + tree.tokenSlice(token).len; const start = tree.tokenStart(token) + tree.tokenSlice(token).len;
const end = tree.tokenStart(token + 1); const end = tree.tokenStart(token + 1);
if (mem.indexOf(u8, tree.source[start..end], "//") != null) return true; if (mem.find(u8, tree.source[start..end], "//") != null) return true;
} }
return false; return false;
@ -3101,7 +3101,7 @@ fn hasComment(tree: Ast, start_token: Ast.TokenIndex, end_token: Ast.TokenIndex)
/// Returns true if there exists a multiline string literal between the start /// Returns true if there exists a multiline string literal between the start
/// of token `start_token` and the start of token `end_token`. /// of token `start_token` and the start of token `end_token`.
fn hasMultilineString(tree: Ast, start_token: Ast.TokenIndex, end_token: Ast.TokenIndex) bool { fn hasMultilineString(tree: Ast, start_token: Ast.TokenIndex, end_token: Ast.TokenIndex) bool {
return std.mem.indexOfScalar( return std.mem.findScalar(
Token.Tag, Token.Tag,
tree.tokens.items(.tag)[start_token..end_token], tree.tokens.items(.tag)[start_token..end_token],
.multiline_string_literal_line, .multiline_string_literal_line,
@ -3115,11 +3115,11 @@ fn renderComments(r: *Render, start: usize, end: usize) Error!bool {
const ais = r.ais; const ais = r.ais;
var index: usize = start; var index: usize = start;
while (mem.indexOf(u8, tree.source[index..end], "//")) |offset| { while (mem.find(u8, tree.source[index..end], "//")) |offset| {
const comment_start = index + offset; const comment_start = index + offset;
// If there is no newline, the comment ends with EOF // If there is no newline, the comment ends with EOF
const newline_index = mem.indexOfScalar(u8, tree.source[comment_start..end], '\n'); const newline_index = mem.findScalar(u8, tree.source[comment_start..end], '\n');
const newline = if (newline_index) |i| comment_start + i else null; const newline = if (newline_index) |i| comment_start + i else null;
const untrimmed_comment = tree.source[comment_start .. newline orelse tree.source.len]; const untrimmed_comment = tree.source[comment_start .. newline orelse tree.source.len];
@ -3131,7 +3131,7 @@ fn renderComments(r: *Render, start: usize, end: usize) Error!bool {
// Leave up to one empty line before the first comment // Leave up to one empty line before the first comment
try ais.insertNewline(); try ais.insertNewline();
try ais.insertNewline(); try ais.insertNewline();
} else if (mem.indexOfScalar(u8, tree.source[index..comment_start], '\n') != null) { } else if (mem.findScalar(u8, tree.source[index..comment_start], '\n') != null) {
// Respect the newline directly before the comment. // Respect the newline directly before the comment.
// Note: This allows an empty line between comments // Note: This allows an empty line between comments
try ais.insertNewline(); try ais.insertNewline();
@ -3190,7 +3190,7 @@ fn renderExtraNewlineToken(r: *Render, token_index: Ast.TokenIndex) Error!void {
// If there is a immediately preceding comment or doc_comment, // If there is a immediately preceding comment or doc_comment,
// skip it because required extra newline has already been rendered. // skip it because required extra newline has already been rendered.
if (mem.indexOf(u8, tree.source[prev_token_end..token_start], "//") != null) return; if (mem.find(u8, tree.source[prev_token_end..token_start], "//") != null) return;
if (tree.isTokenPrecededByTags(token_index, &.{.doc_comment})) return; if (tree.isTokenPrecededByTags(token_index, &.{.doc_comment})) return;
// Iterate backwards to the end of the previous token, stopping if a // Iterate backwards to the end of the previous token, stopping if a


@ -4131,7 +4131,7 @@ fn fnDecl(
const lib_name = if (fn_proto.lib_name) |lib_name_token| blk: { const lib_name = if (fn_proto.lib_name) |lib_name_token| blk: {
const lib_name_str = try astgen.strLitAsString(lib_name_token); const lib_name_str = try astgen.strLitAsString(lib_name_token);
const lib_name_slice = astgen.string_bytes.items[@intFromEnum(lib_name_str.index)..][0..lib_name_str.len]; const lib_name_slice = astgen.string_bytes.items[@intFromEnum(lib_name_str.index)..][0..lib_name_str.len];
if (mem.indexOfScalar(u8, lib_name_slice, 0) != null) { if (mem.findScalar(u8, lib_name_slice, 0) != null) {
return astgen.failTok(lib_name_token, "library name cannot contain null bytes", .{}); return astgen.failTok(lib_name_token, "library name cannot contain null bytes", .{});
} else if (lib_name_str.len == 0) { } else if (lib_name_str.len == 0) {
return astgen.failTok(lib_name_token, "library name cannot be empty", .{}); return astgen.failTok(lib_name_token, "library name cannot be empty", .{});
@ -4547,7 +4547,7 @@ fn globalVarDecl(
const lib_name = if (var_decl.lib_name) |lib_name_token| blk: { const lib_name = if (var_decl.lib_name) |lib_name_token| blk: {
const lib_name_str = try astgen.strLitAsString(lib_name_token); const lib_name_str = try astgen.strLitAsString(lib_name_token);
const lib_name_slice = astgen.string_bytes.items[@intFromEnum(lib_name_str.index)..][0..lib_name_str.len]; const lib_name_slice = astgen.string_bytes.items[@intFromEnum(lib_name_str.index)..][0..lib_name_str.len];
if (mem.indexOfScalar(u8, lib_name_slice, 0) != null) { if (mem.findScalar(u8, lib_name_slice, 0) != null) {
return astgen.failTok(lib_name_token, "library name cannot contain null bytes", .{}); return astgen.failTok(lib_name_token, "library name cannot contain null bytes", .{});
} else if (lib_name_str.len == 0) { } else if (lib_name_str.len == 0) {
return astgen.failTok(lib_name_token, "library name cannot be empty", .{}); return astgen.failTok(lib_name_token, "library name cannot be empty", .{});
@ -4769,7 +4769,7 @@ fn testDecl(
.string_literal => name: { .string_literal => name: {
const name = try astgen.strLitAsString(test_name_token); const name = try astgen.strLitAsString(test_name_token);
const slice = astgen.string_bytes.items[@intFromEnum(name.index)..][0..name.len]; const slice = astgen.string_bytes.items[@intFromEnum(name.index)..][0..name.len];
if (mem.indexOfScalar(u8, slice, 0) != null) { if (mem.findScalar(u8, slice, 0) != null) {
return astgen.failTok(test_name_token, "test name cannot contain null bytes", .{}); return astgen.failTok(test_name_token, "test name cannot contain null bytes", .{});
} else if (slice.len == 0) { } else if (slice.len == 0) {
return astgen.failTok(test_name_token, "empty test name must be omitted", .{}); return astgen.failTok(test_name_token, "empty test name must be omitted", .{});
@ -8779,7 +8779,7 @@ fn numberLiteral(gz: *GenZir, ri: ResultInfo, node: Ast.Node.Index, source_node:
} }
fn failWithNumberError(astgen: *AstGen, err: std.zig.number_literal.Error, token: Ast.TokenIndex, bytes: []const u8) InnerError { fn failWithNumberError(astgen: *AstGen, err: std.zig.number_literal.Error, token: Ast.TokenIndex, bytes: []const u8) InnerError {
const is_float = std.mem.indexOfScalar(u8, bytes, '.') != null; const is_float = std.mem.findScalar(u8, bytes, '.') != null;
switch (err) { switch (err) {
.leading_zero => if (is_float) { .leading_zero => if (is_float) {
return astgen.failTok(token, "number '{s}' has leading zero", .{bytes}); return astgen.failTok(token, "number '{s}' has leading zero", .{bytes});
@ -9272,7 +9272,7 @@ fn builtinCall(
const str_lit_token = tree.nodeMainToken(operand_node); const str_lit_token = tree.nodeMainToken(operand_node);
const str = try astgen.strLitAsString(str_lit_token); const str = try astgen.strLitAsString(str_lit_token);
const str_slice = astgen.string_bytes.items[@intFromEnum(str.index)..][0..str.len]; const str_slice = astgen.string_bytes.items[@intFromEnum(str.index)..][0..str.len];
if (mem.indexOfScalar(u8, str_slice, 0) != null) { if (mem.findScalar(u8, str_slice, 0) != null) {
return astgen.failTok(str_lit_token, "import path cannot contain null bytes", .{}); return astgen.failTok(str_lit_token, "import path cannot contain null bytes", .{});
} else if (str.len == 0) { } else if (str.len == 0) {
return astgen.failTok(str_lit_token, "import path cannot be empty", .{}); return astgen.failTok(str_lit_token, "import path cannot be empty", .{});
@ -11418,7 +11418,7 @@ fn identifierTokenString(astgen: *AstGen, token: Ast.TokenIndex) InnerError![]co
var buf: ArrayList(u8) = .empty; var buf: ArrayList(u8) = .empty;
defer buf.deinit(astgen.gpa); defer buf.deinit(astgen.gpa);
try astgen.parseStrLit(token, &buf, ident_name, 1); try astgen.parseStrLit(token, &buf, ident_name, 1);
if (mem.indexOfScalar(u8, buf.items, 0) != null) { if (mem.findScalar(u8, buf.items, 0) != null) {
return astgen.failTok(token, "identifier cannot contain null bytes", .{}); return astgen.failTok(token, "identifier cannot contain null bytes", .{});
} else if (buf.items.len == 0) { } else if (buf.items.len == 0) {
return astgen.failTok(token, "identifier cannot be empty", .{}); return astgen.failTok(token, "identifier cannot be empty", .{});
@ -11444,7 +11444,7 @@ fn appendIdentStr(
const start = buf.items.len; const start = buf.items.len;
try astgen.parseStrLit(token, buf, ident_name, 1); try astgen.parseStrLit(token, buf, ident_name, 1);
const slice = buf.items[start..]; const slice = buf.items[start..];
if (mem.indexOfScalar(u8, slice, 0) != null) { if (mem.findScalar(u8, slice, 0) != null) {
return astgen.failTok(token, "identifier cannot contain null bytes", .{}); return astgen.failTok(token, "identifier cannot contain null bytes", .{});
} else if (slice.len == 0) { } else if (slice.len == 0) {
return astgen.failTok(token, "identifier cannot be empty", .{}); return astgen.failTok(token, "identifier cannot be empty", .{});
@ -11701,7 +11701,7 @@ fn strLitAsString(astgen: *AstGen, str_lit_token: Ast.TokenIndex) !IndexSlice {
const token_bytes = astgen.tree.tokenSlice(str_lit_token); const token_bytes = astgen.tree.tokenSlice(str_lit_token);
try astgen.parseStrLit(str_lit_token, string_bytes, token_bytes, 0); try astgen.parseStrLit(str_lit_token, string_bytes, token_bytes, 0);
const key: []const u8 = string_bytes.items[str_index..]; const key: []const u8 = string_bytes.items[str_index..];
if (std.mem.indexOfScalar(u8, key, 0)) |_| return .{ if (std.mem.findScalar(u8, key, 0)) |_| return .{
.index = @enumFromInt(str_index), .index = @enumFromInt(str_index),
.len = @intCast(key.len), .len = @intCast(key.len),
}; };


@ -3686,7 +3686,7 @@ fn eatDocComments(p: *Parse) Allocator.Error!?TokenIndex {
} }
fn tokensOnSameLine(p: *Parse, token1: TokenIndex, token2: TokenIndex) bool { fn tokensOnSameLine(p: *Parse, token1: TokenIndex, token2: TokenIndex) bool {
return std.mem.indexOfScalar(u8, p.source[p.tokenStart(token1)..p.tokenStart(token2)], '\n') == null; return std.mem.findScalar(u8, p.source[p.tokenStart(token1)..p.tokenStart(token2)], '\n') == null;
} }
fn eatToken(p: *Parse, tag: Token.Tag) ?TokenIndex { fn eatToken(p: *Parse, tag: Token.Tag) ?TokenIndex {


@ -109,7 +109,7 @@ fn iterateAndFilterByVersion(
.build = "", .build = "",
}; };
const suffix = entry.name[prefix.len..]; const suffix = entry.name[prefix.len..];
const underscore = std.mem.indexOfScalar(u8, entry.name, '_'); const underscore = std.mem.findScalar(u8, entry.name, '_');
var num_it = std.mem.splitScalar(u8, suffix[0 .. underscore orelse suffix.len], '.'); var num_it = std.mem.splitScalar(u8, suffix[0 .. underscore orelse suffix.len], '.');
version.nums[0] = Version.parseNum(num_it.first()) orelse continue; version.nums[0] = Version.parseNum(num_it.first()) orelse continue;
for (version.nums[1..]) |*num| for (version.nums[1..]) |*num|


@ -120,7 +120,7 @@ pub const NullTerminatedString = enum(u32) {
/// Given an index into `string_bytes` returns the null-terminated string found there. /// Given an index into `string_bytes` returns the null-terminated string found there.
pub fn nullTerminatedString(code: Zir, index: NullTerminatedString) [:0]const u8 { pub fn nullTerminatedString(code: Zir, index: NullTerminatedString) [:0]const u8 {
const slice = code.string_bytes[@intFromEnum(index)..]; const slice = code.string_bytes[@intFromEnum(index)..];
return slice[0..std.mem.indexOfScalar(u8, slice, 0).? :0]; return slice[0..std.mem.findScalar(u8, slice, 0).? :0];
} }
pub fn refSlice(code: Zir, start: usize, len: usize) []Inst.Ref { pub fn refSlice(code: Zir, start: usize, len: usize) []Inst.Ref {


@ -221,7 +221,7 @@ pub const Node = union(enum) {
pub const NullTerminatedString = enum(u32) { pub const NullTerminatedString = enum(u32) {
_, _,
pub fn get(nts: NullTerminatedString, zoir: Zoir) [:0]const u8 { pub fn get(nts: NullTerminatedString, zoir: Zoir) [:0]const u8 {
const idx = std.mem.indexOfScalar(u8, zoir.string_bytes[@intFromEnum(nts)..], 0).?; const idx = std.mem.findScalar(u8, zoir.string_bytes[@intFromEnum(nts)..], 0).?;
return zoir.string_bytes[@intFromEnum(nts)..][0..idx :0]; return zoir.string_bytes[@intFromEnum(nts)..][0..idx :0];
} }
}; };


@ -487,7 +487,7 @@ fn appendIdentStr(zg: *ZonGen, ident_token: Ast.TokenIndex) error{ OutOfMemory,
} }
const slice = zg.string_bytes.items[start..]; const slice = zg.string_bytes.items[start..];
if (mem.indexOfScalar(u8, slice, 0) != null) { if (mem.findScalar(u8, slice, 0) != null) {
try zg.addErrorTok(ident_token, "identifier cannot contain null bytes", .{}); try zg.addErrorTok(ident_token, "identifier cannot contain null bytes", .{});
return error.BadString; return error.BadString;
} else if (slice.len == 0) { } else if (slice.len == 0) {
@ -586,7 +586,7 @@ fn strLitAsString(zg: *ZonGen, str_node: Ast.Node.Index) error{ OutOfMemory, Bad
}, },
} }
const key: []const u8 = string_bytes.items[str_index..]; const key: []const u8 = string_bytes.items[str_index..];
if (std.mem.indexOfScalar(u8, key, 0) != null) return .{ .slice = .{ if (std.mem.findScalar(u8, key, 0) != null) return .{ .slice = .{
.start = str_index, .start = str_index,
.len = @intCast(key.len), .len = @intCast(key.len),
} }; } };
@ -785,7 +785,7 @@ fn lowerStrLitError(
} }
fn lowerNumberError(zg: *ZonGen, err: std.zig.number_literal.Error, token: Ast.TokenIndex, bytes: []const u8) Allocator.Error!void { fn lowerNumberError(zg: *ZonGen, err: std.zig.number_literal.Error, token: Ast.TokenIndex, bytes: []const u8) Allocator.Error!void {
const is_float = std.mem.indexOfScalar(u8, bytes, '.') != null; const is_float = std.mem.findScalar(u8, bytes, '.') != null;
switch (err) { switch (err) {
.leading_zero => if (is_float) { .leading_zero => if (is_float) {
try zg.addErrorTok(token, "number '{s}' has leading zero", .{bytes}); try zg.addErrorTok(token, "number '{s}' has leading zero", .{bytes});


@ -115,7 +115,7 @@ fn PromoteIntLiteralReturnType(comptime SuffixType: type, comptime number: compt
else else
&signed_oct_hex; &signed_oct_hex;
var pos = std.mem.indexOfScalar(type, list, SuffixType).?; var pos = std.mem.findScalar(type, list, SuffixType).?;
while (pos < list.len) : (pos += 1) { while (pos < list.len) : (pos += 1) {
if (number >= std.math.minInt(list[pos]) and number <= std.math.maxInt(list[pos])) { if (number >= std.math.minInt(list[pos]) and number <= std.math.maxInt(list[pos])) {
return list[pos]; return list[pos];


@ -26,7 +26,7 @@ pub fn BitcodeWriter(comptime types: []const type) type {
widths: [types.len]u16, widths: [types.len]u16,
pub fn getTypeWidth(self: BcWriter, comptime Type: type) u16 { pub fn getTypeWidth(self: BcWriter, comptime Type: type) u16 {
return self.widths[comptime std.mem.indexOfScalar(type, types, Type).?]; return self.widths[comptime std.mem.findScalar(type, types, Type).?];
} }
pub fn init(allocator: std.mem.Allocator, widths: [types.len]u16) BcWriter { pub fn init(allocator: std.mem.Allocator, widths: [types.len]u16) BcWriter {


@ -1076,7 +1076,7 @@ fn detectAbiAndDynamicLinker(io: Io, cpu: Target.Cpu, os: Target.Os, query: Targ
const path_maybe_args = mem.trimEnd(u8, trimmed_line, "\n"); const path_maybe_args = mem.trimEnd(u8, trimmed_line, "\n");
// Separate path and args. // Separate path and args.
const path_end = mem.indexOfAny(u8, path_maybe_args, &.{ ' ', '\t', 0 }) orelse path_maybe_args.len; const path_end = mem.findAny(u8, path_maybe_args, &.{ ' ', '\t', 0 }) orelse path_maybe_args.len;
const unvalidated_path = path_maybe_args[0..path_end]; const unvalidated_path = path_maybe_args[0..path_end];
file_name = if (fs.path.isAbsolute(unvalidated_path)) unvalidated_path else return error.RelativeShebang; file_name = if (fs.path.isAbsolute(unvalidated_path)) unvalidated_path else return error.RelativeShebang;
continue; continue;


@ -35,7 +35,7 @@ const SparcCpuinfoImpl = struct {
fn line_hook(self: *SparcCpuinfoImpl, key: []const u8, value: []const u8) !bool { fn line_hook(self: *SparcCpuinfoImpl, key: []const u8, value: []const u8) !bool {
if (mem.eql(u8, key, "cpu")) { if (mem.eql(u8, key, "cpu")) {
inline for (cpu_names) |pair| { inline for (cpu_names) |pair| {
if (mem.indexOfPos(u8, value, 0, pair[0]) != null) { if (mem.findPos(u8, value, 0, pair[0]) != null) {
self.model = pair[1]; self.model = pair[1];
break; break;
} }
@ -147,7 +147,7 @@ const PowerpcCpuinfoImpl = struct {
// The model name is often followed by a comma or space and extra // The model name is often followed by a comma or space and extra
// info. // info.
inline for (cpu_names) |pair| { inline for (cpu_names) |pair| {
const end_index = mem.indexOfAny(u8, value, ", ") orelse value.len; const end_index = mem.findAny(u8, value, ", ") orelse value.len;
if (mem.eql(u8, value[0..end_index], pair[0])) { if (mem.eql(u8, value[0..end_index], pair[0])) {
self.model = pair[1]; self.model = pair[1];
break; break;
@ -318,7 +318,7 @@ const ArmCpuinfoImpl = struct {
self.have_fields += 1; self.have_fields += 1;
} else if (mem.eql(u8, key, "model name")) { } else if (mem.eql(u8, key, "model name")) {
// ARMv6 cores report "CPU architecture" equal to 7. // ARMv6 cores report "CPU architecture" equal to 7.
if (mem.indexOf(u8, value, "(v6l)")) |_| { if (mem.find(u8, value, "(v6l)")) |_| {
info.is_really_v6 = true; info.is_really_v6 = true;
} }
} else if (mem.eql(u8, key, "CPU revision")) { } else if (mem.eql(u8, key, "CPU revision")) {
@ -427,7 +427,7 @@ fn CpuinfoParser(comptime impl: anytype) type {
fn parse(arch: Target.Cpu.Arch, reader: *Io.Reader) !?Target.Cpu { fn parse(arch: Target.Cpu.Arch, reader: *Io.Reader) !?Target.Cpu {
var obj: impl = .{}; var obj: impl = .{};
while (try reader.takeDelimiter('\n')) |line| { while (try reader.takeDelimiter('\n')) |line| {
const colon_pos = mem.indexOfScalar(u8, line, ':') orelse continue; const colon_pos = mem.findScalar(u8, line, ':') orelse continue;
const key = mem.trimEnd(u8, line[0..colon_pos], " \t"); const key = mem.trimEnd(u8, line[0..colon_pos], " \t");
const value = mem.trimStart(u8, line[colon_pos + 1 ..], " \t"); const value = mem.trimStart(u8, line[colon_pos + 1 ..], " \t");
if (!try obj.line_hook(key, value)) break; if (!try obj.line_hook(key, value)) break;
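The cpuinfo parser splits each line on its first ':' and trims around it; a sketch of that line handling with the renamed `findScalar` (sample line invented):

```zig
const std = @import("std");

test "split a cpuinfo-style line on the first ':'" {
    const line = "model name\t: ARMv7 Processor rev 4 (v7l)";
    const colon_pos = std.mem.findScalar(u8, line, ':').?;
    const key = std.mem.trimEnd(u8, line[0..colon_pos], " \t");
    const value = std.mem.trimStart(u8, line[colon_pos + 1 ..], " \t");
    try std.testing.expectEqualStrings("model name", key);
    try std.testing.expectEqualStrings("ARMv7 Processor rev 4 (v7l)", value);
}
```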


@ -539,7 +539,7 @@ pub const Iterator = struct {
if (options.allow_backslashes) { if (options.allow_backslashes) {
std.mem.replaceScalar(u8, filename, '\\', '/'); std.mem.replaceScalar(u8, filename, '\\', '/');
} else { } else {
if (std.mem.indexOfScalar(u8, filename, '\\')) |_| if (std.mem.findScalar(u8, filename, '\\')) |_|
return error.ZipFilenameHasBackslash; return error.ZipFilenameHasBackslash;
} }
@ -626,7 +626,7 @@ pub const Diagnostics = struct {
if (!self.saw_first_file) { if (!self.saw_first_file) {
self.saw_first_file = true; self.saw_first_file = true;
std.debug.assert(self.root_dir.len == 0); std.debug.assert(self.root_dir.len == 0);
const root_len = std.mem.indexOfScalar(u8, name, '/') orelse return; const root_len = std.mem.findScalar(u8, name, '/') orelse return;
std.debug.assert(root_len > 0); std.debug.assert(root_len > 0);
self.root_dir = try self.allocator.dupe(u8, name[0..root_len]); self.root_dir = try self.allocator.dupe(u8, name[0..root_len]);
} else if (self.root_dir.len > 0) { } else if (self.root_dir.len > 0) {
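The zip `Diagnostics` code takes everything before the first '/' as the archive's root directory; a sketch of that slice arithmetic (entry name invented):

```zig
const std = @import("std");

test "take the top-level directory of an archive entry" {
    const name = "project-1.2.3/src/main.zig";
    // Everything before the first '/' is the root dir, as above.
    const root_len = std.mem.findScalar(u8, name, '/') orelse name.len;
    try std.testing.expectEqualStrings("project-1.2.3", name[0..root_len]);
}
```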


@ -1,6 +1,6 @@
//! This script updates the .c, .h, .s, and .S files that make up the start //! This script updates the .c, .h, .s, and .S files that make up the start
//! files such as crt1.o. Not to be confused with //! files such as crt1.o. Not to be confused with
//! https://github.com/ziglang/glibc-abi-tool/ which updates the `abilists` //! https://codeberg.org/ziglang/libc-abi-tools which updates the `abilists`
//! file. //! file.
//! //!
//! Example usage: //! Example usage: