Before, this code constructed an arena allocator and then used it when
handling redirects.
You know what's better than having threads fight over an allocator?
Avoiding dynamic memory allocation in the first place.
This commit reuses the HTTP headers static buffer for handling
redirects. The new location is copied to the beginning of the static
header buffer, and the subsequent request uses a subslice of that
buffer.
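A minimal sketch of the idea, assuming an illustrative 16 KiB buffer (names and sizes here are not the actual std.http internals):

```zig
const std = @import("std");

test "reuse the static header buffer for a redirect" {
    var header_buffer: [16 * 1024]u8 = undefined;
    // Pretend this Location value was parsed out of the previous
    // response, somewhere inside header_buffer.
    const location = "https://example.com/moved";
    // Copy the redirect target to the front of the static buffer...
    std.mem.copyForwards(u8, header_buffer[0..location.len], location);
    const new_url = header_buffer[0..location.len];
    // ...and let the follow-up request parse its headers into the rest.
    const rest = header_buffer[location.len..];
    try std.testing.expectEqualStrings(location, new_url);
    try std.testing.expect(rest.len == header_buffer.len - location.len);
}
```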
* "storage" is a better name than "strategy".
* The most flexible memory-based storage API is appending to an
ArrayList.
* HTTP method should default to POST if there is a payload (see the
  sketch after this list).
* Avoid storing unnecessary data in the FetchResult
* Avoid the need for a deinit() method in the FetchResult
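A minimal sketch of the defaulting rule from the list above; `defaultMethod` is an illustrative helper, not the actual std.http code:

```zig
const std = @import("std");

// If the caller did not specify a method, the presence of a payload
// selects POST; otherwise GET.
fn defaultMethod(method: ?std.http.Method, payload: ?[]const u8) std.http.Method {
    return method orelse if (payload != null) .POST else .GET;
}

test "method defaults to POST when a payload is present" {
    try std.testing.expectEqual(std.http.Method.POST, defaultMethod(null, "data"));
    try std.testing.expectEqual(std.http.Method.GET, defaultMethod(null, null));
    try std.testing.expectEqual(std.http.Method.PUT, defaultMethod(.PUT, "data"));
}
```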
The decisions that this logic made about how to handle files are beyond
repair:
- fails to use sendfile() on a plain connection
- redundant stat
- does not handle arbitrary streams
So, file-based response storage is no longer supported. Users should use
the lower-level open() API, which allows avoiding these pitfalls.
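A hedged sketch of streaming a body to a file with the lower-level API, as I recall the Zig 0.12-era surface; exact signatures (notably `send()`) have varied between versions, so treat the call shapes as assumptions:

```zig
const std = @import("std");

fn downloadToFile(client: *std.http.Client, uri: std.Uri, path: []const u8) !void {
    var buf: [16 * 1024]u8 = undefined;
    var req = try client.open(.GET, uri, .{ .server_header_buffer = &buf });
    defer req.deinit();
    try req.send();
    try req.wait();
    // The caller decides how bytes reach the file; no hidden stat, and
    // the transport can pick an appropriate copy strategy.
    const file = try std.fs.cwd().createFile(path, .{});
    defer file.close();
    var fifo = std.fifo.LinearFifo(u8, .{ .Static = 4096 }).init();
    try fifo.pump(req.reader(), file.writer());
}
```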
* add API for iterating over custom HTTP headers (see the sketch after
  this list)
* remove `trailing` flag from std.http.Client.parse. Instead, simply
don't call parse() for trailers.
* fix the logic inside that parse() function. It was using the wrong
  std.mem functions, ignoring malformed data, and returning errors on
  dead branches.
* simplify logic inside wait()
* fix HeadersParser not dropping the two bytes of \r\n read after a
  chunked transfer
* move the trailers test to be a std lib unit test and make it pass
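A hedged sketch of the header iteration mentioned in the first item above; `iterateHeaders` is recalled from the post-change API and should be treated as an assumption:

```zig
const std = @import("std");

// Walk every header the peer sent, including custom ones; `response`
// is whatever carries the parsed head (hence anytype here).
fn dumpHeaders(response: anytype) void {
    var it = response.iterateHeaders();
    while (it.next()) |header| {
        std.debug.print("{s}: {s}\n", .{ header.name, header.value });
    }
}
```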
This reverts commit 42be972a72c86b32ad8403d082ab42763c6facec.
Using a bit to distinguish between headers and trailers is fine. It was
just named and documented poorly.
I originally removed these in 402f967ed5.
I allowed them to be added back in #15299 because they were smuggled in
alongside a bug fix; however, I wasn't kidding when I said that I wanted
to take the design of std.http in a different direction than using this
data structure.
Instead, some headers are provided via explicit field names populated
while parsing the HTTP request/response, and some are provided via
new fields that support passing extra, arbitrary headers.
This resulted in simpler logic and eliminated the possibility of
failure in many places. There is less deinitialization code happening
now.
Furthermore, it made it no longer necessary to clone the headers data
structure in order to handle redirects.
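A hedged sketch of the resulting request options; the field names (`server_header_buffer`, `headers`, `extra_headers`) are recalled from the Zig 0.12-era API and may differ:

```zig
const std = @import("std");

fn openWithHeaders(client: *std.http.Client, uri: std.Uri, buf: []u8) !std.http.Client.Request {
    return client.open(.GET, uri, .{
        .server_header_buffer = buf,
        // Well-known headers get explicit fields...
        .headers = .{ .user_agent = .{ .override = "my-tool/1.0" } },
        // ...anything else rides in extra_headers.
        .extra_headers = &.{
            .{ .name = "x-trace-id", .value = "abc123" },
        },
    });
}
```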
http_proxy and https_proxy fields are now pointers since it is common
for them to be unpopulated.
loadDefaultProxies is changed into initDefaultProxies to communicate
that it does not actually load anything from disk or from the network.
The function is now leaky; the API user must pass an already
instantiated arena allocator. This removes the need to deinitialize
proxies.
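A minimal sketch of the leaky initialization, assuming the `initDefaultProxies` call as I recall it from this era of the API:

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    // The caller owns the arena; dropping it frees everything the
    // proxy setup allocated, so there is no proxy deinit step.
    var arena_state = std.heap.ArenaAllocator.init(gpa.allocator());
    defer arena_state.deinit();

    var client = std.http.Client{ .allocator = gpa.allocator() };
    defer client.deinit();
    try client.initDefaultProxies(arena_state.allocator());
}
```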
Before, proxies stored arbitrary sets of headers. Now they only store
the authorization value.
Removed the duplicated code between https_proxy and http_proxy. Finally,
parsing failures of the environment variables result in errors being
emitted rather than silently ignoring the proxy.
error.CompressionNotSupported is renamed to
error.CompressionUnsupported, matching the naming convention from all
the other errors in the same set.
Removed documentation comments that were redundant with field and type
names.
Disabling zstd decompression in the server for now; see #18937.
I found some apparently dead code in src/Package/Fetch/git.zig. I want
to check with Ian about this.
I discovered that test/standalone/http.zig is dead code, it is only
being compiled but not being run. Furthermore, it hangs at the end if
you run it manually. The previous commits in this branch were written
under the assumption that this test was being run with
`zig build test-standalone`.
This is a state machine that already has a `state` field. No need to
additionally store "done" - it just makes things unnecessarily
complicated and buggy.
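A minimal sketch of the design point, with illustrative state names:

```zig
const std = @import("std");

// Completion is just another state; a separate `done: bool` can only
// fall out of sync with it.
const State = enum { ready, receiving_head, received_head, finished };

fn isDone(state: State) bool {
    return state == .finished;
}

test "done is derived from state, not stored alongside it" {
    try std.testing.expect(!isDone(.ready));
    try std.testing.expect(isDone(.finished));
}
```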
The buffer for HTTP headers is now always provided via a static buffer.
As a consequence, OutOfMemory is no longer a member of the read() error
set, and the API and implementation of Client and Server are simplified.
error.HttpHeadersExceededSizeLimit is renamed to
error.HttpHeadersOversize.
Running the fuzzing tar test with [zig std lib
fuzzing](https://github.com/squeek502/zig-std-lib-fuzzing) reached an
assert in the tar implementation. An assert in the std lib should not be
reachable by external input, so I'm fixing this to return an error
instead.
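A sketch of the shape of the fix; the condition and error name are illustrative, not the exact std.tar code:

```zig
const std = @import("std");

fn validateNameLen(name_len: usize, max: usize) !void {
    // before: std.debug.assert(name_len <= max);
    // Malformed archives must surface as errors, never trip an assert.
    if (name_len > max) return error.TarMalformedHeader;
}

test "malformed input returns an error instead of asserting" {
    try std.testing.expectError(error.TarMalformedHeader, validateNameLen(300, 256));
}
```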
If an adapted string key with embedded nulls was put into a hash map
using `std.hash_map.StringIndexAdapter`, an incorrect hash would be
entered for that entry: the hash covered the full key, but stored keys
compare equal only up to the first null. A later lookup for the prefix
of the original key up to the first null would then sometimes match
this entry (via hash collision) and sometimes not (after a grow +
rehash), so the same key could end up existing under two different
indices. That breaks every string equality comparison built on those
indices; for example, a container type could be reported as missing a
field because the field name string in the struct and the string
representing the identifier to look up are equal strings with different
string indices.

This could perhaps be fixed by changing
`std.hash_map.StringIndexAdapter.hash` to only hash up to the first
null, ensuring that the entry's hash is correct and that all future
lookups are consistent, but I don't trust anything, so instead I assert
that there are no embedded nulls.
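A sketch mirroring that assert; `hashAdaptedKey` is an illustrative stand-in for `std.hash_map.StringIndexAdapter.hash`:

```zig
const std = @import("std");
const assert = std.debug.assert;

// Stored keys compare only up to the first null, so hashing must never
// see an embedded null in the adapted key.
fn hashAdaptedKey(adapted_key: []const u8) u64 {
    assert(std.mem.indexOfScalar(u8, adapted_key, 0) == null);
    return std.hash_map.hashString(adapted_key);
}

test "keys without embedded nulls hash consistently" {
    try std.testing.expectEqual(hashAdaptedKey("field_name"), hashAdaptedKey("field_name"));
}
```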
BufferedTee provides a reader interface to the consumer. Data read by
the consumer is also written to the output. The output is held
lookahead_size bytes behind the consumer, allowing the consumer to put
back some bytes to be read again. On flush, all consumed bytes are
flushed to the output.

input -> tee -> consumer
          |
       output

input - the underlying unbuffered reader
output - a writer which receives the data read by the consumer
consumer - uses the provided reader interface

If lookahead_size is zero, the output always has the same bytes as the
consumer.
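A minimal self-contained sketch of the tee idea; the lookahead window and put-back are omitted for brevity, and `teeRead` is illustrative rather than the proposed API:

```zig
const std = @import("std");

// Every byte handed to the consumer is also written to the output.
fn teeRead(child: anytype, out: anytype, dest: []u8) !usize {
    const n = try child.read(dest);
    try out.writeAll(dest[0..n]);
    return n;
}

test "output receives what the consumer reads" {
    var input = std.io.fixedBufferStream("hello");
    var sink = std.ArrayList(u8).init(std.testing.allocator);
    defer sink.deinit();

    var buf: [5]u8 = undefined;
    const n = try teeRead(input.reader(), sink.writer(), &buf);
    try std.testing.expectEqualStrings("hello", buf[0..n]);
    try std.testing.expectEqualStrings("hello", sink.items);
}
```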