Conversation

@jagerman (Member) commented Oct 11, 2025

This adds a QUIC API allowing for streaming retrieval and delivery of uploads/downloads, for future use with Lokinet.

The server implements file upload and download via QUIC streams, allowing parallel uploads/downloads as well as more efficient storage: streamed data can be sent piece-by-piece as it loads from disk, rather than requiring large all-at-once reads and writes.

This implementation makes heavy use of io_uring for efficiency, and as such will never work outside Linux.

Each upload or download uses exactly one stream (skipping the first stream): an upload is prefixed with a bt-encoded file metadata block (containing the "PUT" request type, size, and optional ttl), while a download request consists of the same sort of block but containing "GET" and the file id.
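As a sketch of that request framing, here is a minimal bt (bencode) encoder in Python; the field names below ("type", "size", "ttl", "id") are illustrative assumptions, not necessarily the exact keys the server uses:

```python
def bt_encode(val) -> bytes:
    """Minimal bencode ("bt") encoder: ints, strings, lists, and dicts."""
    if isinstance(val, int):
        return b"i%de" % val
    if isinstance(val, str):
        val = val.encode()
    if isinstance(val, (bytes, bytearray)):
        return b"%d:%s" % (len(val), bytes(val))
    if isinstance(val, list):
        return b"l" + b"".join(bt_encode(v) for v in val) + b"e"
    if isinstance(val, dict):  # bencode requires dict keys in sorted order
        return b"d" + b"".join(
            bt_encode(k) + bt_encode(v) for k, v in sorted(val.items())
        ) + b"e"
    raise TypeError(f"cannot bt-encode {type(val)}")

# Hypothetical request blocks (real key names may differ):
put_req = bt_encode({"type": "PUT", "size": 1024, "ttl": 3600})
get_req = bt_encode({"type": "GET", "id": "abc123"})
```

For a PUT, the file bytes would follow `put_req` on the same stream; for a GET, the stream is FINed right after `get_req`.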

For PUT, all data on the stream after the initial block is the file data, terminated by a FIN on the stream. For GET, the FIN is sent immediately after the request block.

For responses, PUT consists of a metadata block containing the uploaded id, expiry, and upload info metadata. GET consists of a metadata block (containing size+expiry+upload info), which is immediately followed by the stream data and FIN. (Thus PUT/GET responses are analogous to the GET/PUT request data in the opposite direction).
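On the receiving side, a GET response can be split into its metadata block and the trailing file bytes by decoding the bencoded prefix and noting where it ends. A minimal sketch (the "expiry"/"size" field names here are assumptions):

```python
def bt_decode(buf: bytes, i: int = 0):
    """Minimal bencode decoder; returns (value, index just past the value)."""
    c = buf[i:i + 1]
    if c == b"i":  # integer: i<digits>e
        j = buf.index(b"e", i)
        return int(buf[i + 1:j]), j + 1
    if c == b"d":  # dict: d<key><value>...e
        d, i = {}, i + 1
        while buf[i:i + 1] != b"e":
            k, i = bt_decode(buf, i)
            v, i = bt_decode(buf, i)
            d[k.decode()] = v
        return d, i + 1
    if c == b"l":  # list: l<value>...e
        lst, i = [], i + 1
        while buf[i:i + 1] != b"e":
            v, i = bt_decode(buf, i)
            lst.append(v)
        return lst, i + 1
    j = buf.index(b":", i)  # byte string: <length>:<bytes>
    n = int(buf[i:j])
    return buf[j + 1:j + 1 + n], j + 1 + n

# A hypothetical GET response: metadata block immediately followed by file data.
resp = b"d6:expiryi1700000000e4:sizei11ee" + b"hello world"
meta, end = bt_decode(resp)
file_data = resp[end:]  # everything after the metadata block, up to the FIN
```

In practice the file data would be consumed incrementally as stream chunks arrive rather than buffered whole, but the framing boundary is found the same way.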

This initial commit supports upload, download, querying, and extend. The non-file-related ancillary endpoints (session release queries and token info) are still a work in progress.

@jagerman jagerman changed the title [IP] QUIC server [WIP] QUIC server Oct 11, 2025
@jagerman jagerman marked this pull request as draft October 11, 2025 02:13
This adds a QUIC-based implementation of file upload and download that
uploads and downloads files on streams, thus allowing both parallel
downloads/uploads, as well as allowing more efficient storage as
streamed data can be sent piece-by-piece as it loads from disk, rather
than needing large all-at-once reads and writes.

This implementation makes heavy use of io_uring for efficiency, and as
such will never work outside Linux.

Each upload or download uses exactly one stream (skipping the first
stream): an upload is prefixed with a bt-encoded file metadata block
(containing the "PUT" request type, size, and optional ttl), while a
download request consists of the same sort of block but containing "GET"
and the file id.

For PUT, all data on the stream after the initial block is the file
data, terminated by a FIN on the stream.  For GET, the FIN is sent
immediately after the request block.

For responses, PUT consists of a metadata block containing the uploaded
id, expiry, and upload info metadata.  GET consists of a metadata block
(containing size+expiry+upload info), which is immediately followed by
the stream data and FIN.  (Thus PUT/GET responses are analogous to the
GET/PUT request data in the opposite direction).

This initial commit supports read, write, and automatic expiry from
disk; the following commits will add support for the non-transfer
endpoints (such as updating expiries, querying info, and session version
retrieval), which will all happen via the initial (id=0) bt request
stream.
