Quick start
Three primitives. Io owns the shard, poll() drives one tick, next_event() drains the queue.
```rust
let mut io = Io::new(Config::default())?;
io.quic_listen("0.0.0.0:4433".parse()?, &cert, &key)?;
loop {
    io.poll(Duration::from_millis(10))?;
    while let Some(ev) = io.next_event() {
        handle(ev);
    }
}
```
Feature gates
Compile only what you use. Defaults: quic + tcp. Everything else is opt-in.
| Feature | Enables | Implies |
|---|---|---|
| quic | QUIC listen/connect, datagrams, streams | — |
| tcp | TCP listen/connect, WriteBufferPool | — |
| websocket | WebSocket listen/connect, masking, ping/pong | tcp |
| websocket-deflate | permessage-deflate compression | websocket |
| http | HTTP/1.1 + HTTP/2 server | tcp |
| http-client | HTTP client + HttpPool | http |
| webtransport | H3 CONNECT + WT sessions | quic |
| tls-ktls | kernel TLS offload | tcp / http |
| tower | tower::Service<HttpOwnedRequest> impls | — |
| tower-http-compat | HttpOwnedRequest → http::Request adapter | tower |
| linux-af-xdp | AF_XDP backend (tier 2) | — |
| linux-userspace-tcp | FreeBSD userspace TCP stack on AF_XDP (tier 3, experimental) | — |
| socks5 / dns / ntp / mdns | opt-in feature gates per protocol | — |
Io
The shard handle. Owns sockets, the io_uring ring, the payload pool, and the connection table. Single-threaded; the handle is not Send.
| Method | Purpose |
|---|---|
| fn new(config: Config) -> io::Result<Self> | Construct a single-shard Io. Allocates pools, opens io_uring. |
| fn poll(&mut self, timeout: Duration) -> io::Result<()> | One tick: drain CQEs → process_dirty → flush TX → fire timers. |
| fn next_event(&mut self) -> Option<Event<'_>> | Drain the per-tick event queue. Borrows &mut self. |
| fn detach_event_data(&mut self) -> Option<OwnedSlot> | Promote the current event's payload to an owned, Send + Sync slot. |
| fn detach_http_request(&mut self) -> Option<HttpOwnedRequest> | HTTP-only. Detach the current request as a 96–112 B owned struct. |
| fn send_buffer(&mut self, min: usize) -> io::Result<SendBuffer> | Check out a writable pool slot for zero-copy TX. |
| fn close(&mut self, conn: ConnId) -> io::Result<()> | Immediate close. |
| fn close_graceful(&mut self, conn: ConnId, timeout: Duration) | Drain in-flight data, then close. |
| fn pool_pressure(&self) -> Option<PoolPressureInfo> | Snapshot of pool utilization for back-pressure checks. |
| fn pool_stats(&self) -> PoolStats | Current / peak / capacity per pool. |
| fn conn_stats(&self, conn) -> io::Result<ConnStats> | RTT, cwnd, bytes, packets-lost per connection. |
| fn handle(&self) -> IoHandle | Cross-thread send-side handle. Cheaply cloneable. |
Event<'poll>
Borrowed from the current poll and invalidated by the next poll(). Process events synchronously, or call detach_event_data() to keep a payload across threads.
| Variant | Carries |
|---|---|
| UdpRecv | endpoint, from, data: &'poll [u8] |
| Connected | conn, peer, protocol: Protocol |
| Disconnected | conn, error_code: u64, reason: &'poll [u8] |
| Datagram | conn, data: &'poll [u8] |
| StreamFrame | conn, stream, kind: MessageKind, data: &'poll [u8] |
| StreamReset · StopSending | conn, stream, error_code |
| SessionReady | conn (WebTransport CONNECT 200 accepted) |
| PathMigration | conn, old_peer, new_peer |
| HttpRequest · HttpBodyChunk · HttpResponse | HTTP feature only |
| PoolPressure · DnsResolved · MqttEvent | Per-feature |
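The borrow-then-detach lifecycle can be modeled without the library. A minimal sketch under assumptions (PollQueue and its methods are hypothetical stand-ins, not the crate's types): events yield slices borrowed from per-poll storage, and detaching copies the payload into a refcounted, Send-able allocation.

```rust
use std::sync::Arc;

/// Toy model of the per-poll event queue (hypothetical, illustrative only).
/// Payloads live in a buffer refilled each tick; events borrow from it.
struct PollQueue {
    payloads: Vec<Vec<u8>>, // refilled on every poll() tick
    cursor: usize,
}

impl PollQueue {
    /// Yield the next payload, borrowed: invalid once the next tick refills.
    fn next_event(&mut self) -> Option<&[u8]> {
        let p = self.payloads.get(self.cursor)?;
        self.cursor += 1;
        Some(p.as_slice())
    }

    /// Promote the most recent event's payload to an owned, shareable slot,
    /// mirroring what detach_event_data() does for cross-thread use.
    fn detach_event_data(&self) -> Option<Arc<[u8]>> {
        let idx = self.cursor.checked_sub(1)?;
        Some(Arc::from(self.payloads[idx].as_slice()))
    }
}
```

The detached Arc outlives the poll that produced it, which is the whole point: handlers that must hop threads detach first, everything else processes the borrow in place.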
MessageKind
Typed message discriminator on StreamFrame and ConnDatagram. Replaces the old opaque msg_type: u8.
Binary · WsText · WsBinary · MqttPacket · GrpcFrame · FixText · Sbe · User(u8)
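As a sketch of why the typed discriminator beats an opaque byte: the variant list below mirrors the one above, while the is_text helper is hypothetical, added only to show exhaustive matching.

```rust
/// Mirrors the documented variant set; User(u8) keeps an escape hatch
/// for application-defined kinds.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum MessageKind {
    Binary,
    WsText,
    WsBinary,
    MqttPacket,
    GrpcFrame,
    FixText,
    Sbe,
    User(u8),
}

impl MessageKind {
    /// Hypothetical helper: which kinds carry UTF-8 text. With an opaque
    /// msg_type: u8 this would be a magic-number comparison.
    fn is_text(self) -> bool {
        matches!(self, MessageKind::WsText | MessageKind::FixText)
    }
}
```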
Config
Per-shard knobs. The struct is #[non_exhaustive], so fields can be added without breaking semver.
| Field | Default |
|---|---|
| pool_slot_count: usize | 4096 |
| pool_slot_size: usize | 2048 (≥ 1200, RFC 9000) |
| huge_pages: Toggle | Auto |
| max_connections: usize | 1024 |
| max_events_per_poll: usize | 256 |
| pool_pressure_pct: u8 | 80 |
| compression_threshold: usize | 128 B |
| uring: UringConfig (Linux) | auto-tuned |
| debug: DebugConfig | disabled |
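The defaults above can be sketched as a plain struct with a Default impl. This is an abbreviated illustration, not the crate's definition: the field subset is ours, and the real Config carries more fields behind #[non_exhaustive].

```rust
/// Abbreviated sketch of the documented defaults (illustrative only).
struct Config {
    pool_slot_count: usize,
    pool_slot_size: usize, // must stay >= 1200, the RFC 9000 minimum datagram size
    max_connections: usize,
    max_events_per_poll: usize,
    pool_pressure_pct: u8,
    compression_threshold: usize,
}

impl Default for Config {
    fn default() -> Self {
        Config {
            pool_slot_count: 4096,
            pool_slot_size: 2048,
            max_connections: 1024,
            max_events_per_poll: 256,
            pool_pressure_pct: 80,
            compression_threshold: 128,
        }
    }
}
```

Because the real struct is #[non_exhaustive], downstream code constructs it with Config::default() and sets fields by mutation rather than with a struct literal.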
IoHandle
Send-side handle obtained via io.handle(). Send + Sync + Clone. Cross-thread paths funnel through this — workers send, the shard wakes and writes.
```rust
fn send_datagram(&self, conn, data: &[u8]) -> io::Result<()>
fn stream_write(&self, conn, stream, data: &[u8]) -> io::Result<()>
fn send_datagram_buffer(&self, conn, buf: SendBuffer) -> io::Result<()>
fn stream_write_buffer(&self, conn, stream, buf: SendBuffer) -> io::Result<()>
fn http_respond(&self, conn, request_id, response: ZeroResponse)
fn close(&self, conn)
fn close_graceful(&self, conn, timeout)
```
OwnedSlot · SendBuffer
OwnedSlot is a payload pulled out of the per-poll lifetime and made Send + Sync. Internally an Arc over a pool slot — refcounted, recycled on drop.
SendBuffer is a writable pool slot for zero-copy TX. Acquire via io.send_buffer(n), write into as_mut_slice(), hand to send_datagram_buffer / stream_write_buffer.
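The checkout-write-recycle flow can be modeled with a toy pool. Everything below is illustrative (the real pool is slot-indexed inside the shard, not a Vec of Vecs); the point is the shape: a buffer is removed from a free list, filled in place, and returned on send with no allocation or free on the hot path.

```rust
/// Toy fixed-slot pool modeling the SendBuffer checkout flow
/// (illustrative only, not the crate's implementation).
struct Pool {
    slots: Vec<Vec<u8>>, // free slots, each pre-sized
}

struct SendBuffer {
    slot: Vec<u8>,
    len: usize,
}

impl Pool {
    fn new(count: usize, slot_size: usize) -> Self {
        Pool { slots: (0..count).map(|_| vec![0u8; slot_size]).collect() }
    }

    /// Check out a slot with at least `min` writable bytes.
    fn send_buffer(&mut self, min: usize) -> Option<SendBuffer> {
        let idx = self.slots.iter().position(|s| s.len() >= min)?;
        Some(SendBuffer { slot: self.slots.swap_remove(idx), len: 0 })
    }

    /// "Transmit" the filled prefix, then recycle the slot into the free list.
    fn send(&mut self, buf: SendBuffer) -> usize {
        let sent = buf.len;
        self.slots.push(buf.slot); // recycled, no dealloc
        sent
    }
}

impl SendBuffer {
    /// Append into the writable region, tracking the filled length.
    fn write(&mut self, data: &[u8]) {
        self.slot[self.len..self.len + data.len()].copy_from_slice(data);
        self.len += data.len();
    }
}
```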
IoCluster · multi-shard
Production entry point for > 1 shard. Owns N reuseport sockets (or one shared socket + DCID dispatcher) and exposes the same listen / connect surface, fanned out across shards.
| Item | Purpose |
|---|---|
| ClusterConfig | shard_count, routing, cpu_affinity, expected_protocols |
| RoutingStrategy | ReusePortCbpf · ReusePortEbpf · DcidDispatch |
| ScidGenerator | 14 bits encode (server_id, shard_id) jointly in QUIC SCIDs: 16 384 cluster slots partitioned across the two fields. SERVER_ID_MASK = 0x3FFF; shard count is a power of 2 within each server. |
| ShardIo | Per-shard handle; identical surface to Io. |
Io for tests, single-core deploys, tools. IoCluster for production servers. Don't roll your own N Io instances — you'll miss the routing.
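The 14-bit packing arithmetic can be sketched on its own. The split point (shard_bits, the log2 of the per-server shard count) is our assumption about how the two fields are partitioned; only the joint 0x3FFF mask and the power-of-two shard count come from the table above.

```rust
/// The crate's documented mask over the joint 14-bit field.
const SERVER_ID_MASK: u16 = 0x3FFF;

/// Pack (server_id, shard_id) into one 14-bit slot. `shard_bits` is
/// a hypothetical parameter: log2 of the per-server shard count.
fn pack(server_id: u16, shard_id: u16, shard_bits: u32) -> u16 {
    ((server_id << shard_bits) | shard_id) & SERVER_ID_MASK
}

/// Recover (server_id, shard_id) from a packed slot.
fn unpack(slot: u16, shard_bits: u32) -> (u16, u16) {
    let shard_mask = (1u16 << shard_bits) - 1;
    ((slot & SERVER_ID_MASK) >> shard_bits, slot & shard_mask)
}
```

With shard_bits = 4 (16 shards per server), 10 bits remain for the server id, and the slot never exceeds the 16 384-entry cluster space.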
UDP
```rust
fn udp_bind(&mut self, addr: SocketAddr) -> io::Result<EndpointId>
fn udp_bind_with(&mut self, config: UdpEndpointConfig) -> io::Result<EndpointId>
fn udp_send(&mut self, endpoint, to, buf: SendBuffer) -> io::Result<()>
fn udp_send_bytes(&mut self, endpoint, to, data: &[u8]) -> io::Result<()>  // convenience, 1 memcpy
fn multicast_join(group: MulticastGroup)
fn multicast_leave(group: MulticastGroup)
```
MulticastGroup is typed: AnySource (mDNS, RFC 1112) or SourceSpecific (CME / Eurex feeds, RFC 4607).
QUIC
```rust
fn quic_listen(&mut self, addr, cert, key) -> io::Result<EndpointId>
fn quic_listen_with(&mut self, config: QuicListenConfig)
fn quic_connect(&mut self, addr) -> io::Result<ConnId>
fn quic_connect_with(&mut self, config: QuicConnectConfig)  // Happy Eyeballs (RFC 8305) when HostOrAddr::Host
fn send_datagram(&mut self, conn, data: &[u8])
fn stream_write(&mut self, conn, stream, data) -> io::Result<usize>
fn stream_read(&mut self, conn, stream, &mut [u8])  // QUIC / WT only; returns StreamNotPullable on TCP/WS
fn early_data_send · session_ticket · set_session_ticket  // 0-RTT
```
QuicListenConfig covers idle timeout, stream / data limits, congestion (Reno · Cubic · BBRv2), DPLPMTUD, retry tokens, ECN, allowed origins.
TCP · Unix Domain Sockets
```rust
fn tcp_listen · tcp_listen_with
fn tcp_connect · tcp_connect_with  // Happy Eyeballs supported
fn uds_listen(&mut self, path: &str)
fn uds_connect(&mut self, path: &str)
```
TCP RX is push-only. Data arrives via Event::StreamFrame { kind: MessageKind::Binary }. There is no tcp_stream_read; calling stream_read on a TCP ConnId returns IoError::StreamNotPullable.
WebSocket
```rust
fn ws_listen · ws_listen_tls · ws_listen_with
fn ws_connect · ws_connect_with
fn ws_send(&mut self, conn, data: &[u8], text: bool)
fn ws_send_buffer(&mut self, conn, buf: SendBuffer, text: bool)
fn ws_close(&mut self, conn, code: u16, reason: &str)
```
Frames arrive as Event::StreamFrame { kind: WsText | WsBinary }. Ping/pong handled internally.
HTTP · HTTP/2
```rust
fn http_listen · http_listen_tls · http_listen_with(HttpListenConfig)
fn http_respond(&mut self, conn, request_id, response: ZeroResponse)
fn http_request(&mut self, …) -> io::Result<RequestId>  // client
```
HttpListenConfig: max_header_count, max_header_size, max_body_inline, request_timeout_ms, H/2 streams / window / frame / header-list, compression threshold.
WebTransport
```rust
fn wt_connect(&mut self, addr, path: &str)
fn wt_connect_with(WtConnectConfig)
```

Server side: quic_listen_with(QuicListenConfig { enable_webtransport: true, allowed_origins, … }), then Event::SessionReady fires once the H3 CONNECT is accepted with a 200.
One session per connection. Datagrams + streams over the H3 CONNECT.
TLS · STARTTLS · hot-reload
```rust
fn tls_upgrade(&mut self, conn, config: TlsClientConfig)         // client STARTTLS
fn tls_accept_upgrade(&mut self, conn, config: TlsServerConfig)  // server STARTTLS
fn enable_cert_hot_reload(&mut self, endpoint) -> CertReloadHandle
CertReloadHandle::reload_from_pem · reload_from_bytes · reload_quic_from_pem
// Send + Sync + Clone, atomic swap via arc-swap
```
Auto-attempts kTLS after handshake if available (Linux ≥ 6.7). Falls back to rustls in-process if not.
Multicast · DNS · NTP · mDNS · SOCKS5
| Method | Purpose |
|---|---|
| fn dns_init · dns_resolve · dns_result | Async DNS via UDP, optional TCP fallback, optional DoT/DoH. |
| fn ntp_init(NtpConfig) · ntp_offset_us · ntp_now_us | SNTP / NTP, multi-server, KoD. |
| fn mdns_init · mdns_register(MdnsService) · mdns_discover · mdns_resolve | RFC 6762 / 6763, ASM 224.0.0.251. |
| fn tcp_connect_socks5(proxy, dest, auth) | RFC 1928. Universal. |
| fn quic_connect_socks5(proxy, dest, auth) | UDP ASSOCIATE. Best-effort; server allowlist required, MTU auto-adjusted, migration disabled. |
ZeroRuntime · async bridge
Wraps Io with a Tokio-friendly driver. Each of the four modes picks a different point on the latency / ergonomics curve.
| Method | Allocs | Best for |
|---|---|---|
| fn run_sync<H: SyncHandler>(self, handler) -> io::Result<ShutdownHandle> | 0 B | CPU-bound inline handlers |
| fn run_async<H: AsyncHandler + Clone>(self, handler) | ~64 B | DB queries, slow handlers |
| fn run_tower<S: tower::Service<HttpOwnedRequest>>(self, svc) | ~64 B | Tower middleware, generic tower services |
| fn run_per_core<H: AsyncHandler + Clone>(self, cluster, handler) | 0 / 64 B | Per-shard tokio runtime, mixed inline + streaming |
HttpOwnedRequest · BodyStream
~96–112 B owned struct. Send + Sync. Path / headers / body offsets stored as a 12-byte table inside the pool slot. Zero-copy accessors return &str slices into the slot.
```rust
fn path(&self) -> &str
fn header(&self, name: &str) -> Option<&str>
fn method(&self) -> HttpMethod
fn body(&self) -> &[u8]                          // inline body
fn body_stream(&mut self) -> Option<BodyStream>  // streaming uploads, 8-slot SPSC ring per request
```
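The offset-table idea can be modeled in a few lines. Field layout here is illustrative (our toy uses two (offset, len) pairs, not the crate's actual 12-byte table): the owned struct carries the raw slot plus offsets, and accessors return slices into it with no copies.

```rust
/// Toy model of the offset-table layout (illustrative only):
/// the slot holds raw request bytes; accessors slice into it.
struct OwnedRequest {
    slot: Box<[u8]>,
    path: (u16, u16), // (offset, len) into slot
    body: (u16, u16),
}

impl OwnedRequest {
    /// Zero-copy: a &str view into the slot, empty on invalid UTF-8.
    fn path(&self) -> &str {
        let (o, l) = (self.path.0 as usize, self.path.1 as usize);
        std::str::from_utf8(&self.slot[o..o + l]).unwrap_or("")
    }

    /// Zero-copy view of the inline body.
    fn body(&self) -> &[u8] {
        let (o, l) = (self.body.0 as usize, self.body.1 as usize);
        &self.slot[o..o + l]
    }
}
```

Because the struct owns the slot, it is trivially Send + Sync while the accessors keep the zero-copy property of the borrowed event.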
ZeroResponse mirrors this on the response side. Builders: ZeroResponse::ok().json(&value), ZeroResponse::not_found(), etc.
Native middleware (zero alloc)
| Layer | tower-http equivalent |
|---|---|
ZeroCorsLayer | tower_http::cors::CorsLayer |
ZeroAuthLayer | tower_http::auth::ValidateRequestHeader |
ZeroTraceLayer | tower_http::trace::TraceLayer |
ZeroCompressionLayer | tower_http::compression::CompressionLayer |
ZeroRequestIdLayer | tower_http::request_id::SetRequestIdLayer |
ZeroNormalizePathLayer | tower_http::normalize_path::NormalizePathLayer |
ZeroSensitiveHeadersLayer | tower_http::sensitive_headers::SetSensitiveHeadersLayer |
For anything outside this list, use tower-http-compat at a 640 B / req cost.
zero-io-axum
fn serve(io: Io, app: axum::Router) -> io::Result<ShutdownHandle>
Two-line migration from axum::serve. Cost: ~200 B steady-state with the default header-map-pool feature (which reclaims axum's HeaderMap per request); 640 B without the pool. The HeaderMap itself is structural — axum's signature requires it.
REST · gRPC
Higher-level crates building on the HTTP base.
zero-rest: Router, RestRequest, RestResponse, PathParams, optional CacheMiddleware.
zero-grpc: GrpcService, ServerStream, ClientStream, BidiStream, Code, Status. Code generated from .proto via zero-grpc-build.
MQTT · Redis
zero-mqtt: MqttClient, MqttBroker, QoS 0/1/2, MQTT 3.1.1 + 5, trie-based topic matching.
zero-redis: RedisClient, RedisPipeline, RESP2 / RESP3, pub/sub.
FIX · SBE
zero-fix: zero-copy text FIX 4.4 parser/builder, session FSM. Persistence in SessionWal (PLAN-STEP193b): append-only WAL per session, CRC32C, atomic checkpoint.
zero-sbe: flyweight SBE decoder for CME MDP 3.0 / Eurex T7. Multicast feed handler with explicit gap-recovery FSM (T1..T14 transitions, I1..I5 invariants).
SMTP · FTP
zero-smtp: SMTP client + server, STARTTLS, AUTH PLAIN / LOGIN / XOAUTH2, MIME, DKIM (Ed25519 / RSA), pipelining.
zero-ftp: FTP client + server, AUTH TLS (FTPS), passive / EPSV, splice / mmap for transfers.
Ops CLI
charting-status binary. UDS at /run/charting-server/ops.sock (mode 0600). Mandatory HMAC. Two-phase commit for destructive actions. Profile-based allowlist (dev / staging / prod). Sealed-token for prod-restricted operations.
charting-status snapshot · healthz · readyz: read-only, no auth needed beyond peer credentials.
charting-status drain-tx · conn-kill · bpf reload-ports · cert-reload · reset-peaks: privileged, audit-logged.
IoError
Structured errors. The enum is #[non_exhaustive]. Diagnostic strings are actionable: they name the syscall, the cause, and the fix.
```rust
KernelTooOld { required: KernelVersion, found: KernelVersion }
StreamNotPullable { protocol: Protocol }        // use Event::StreamFrame instead
NotSupportedOnPlatform { platform }
NotSupportedOnBackend { backend, feature }
PoolExhausted · DnsError · ConnectTimeout · TlsHandshakeFailed
HmacMismatch · NonceReplay · ProfileForbidden   // ops API
```
Backpressure cascade
Seven pools (FreeStack · FillRing · RxRing · TxRing · CompletionRing · ScratchPool · PerConn) feed one cascade state: Healthy → Warning → Critical → Drain. Each transition has a budget partition (60% RX / 10% TX critical / 25% TX bulk / 5% scratch) and a drop policy. Live snapshot via Io::pool_pressure() or the ops endpoint.
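A sketch of the cascade as code. The Warning threshold comes from the documented pool_pressure_pct default of 80; the Critical and Drain thresholds below are assumptions chosen for illustration, and the budget split is the 60 / 10 / 25 / 5 partition stated above.

```rust
/// The four documented cascade states.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Cascade {
    Healthy,
    Warning,
    Critical,
    Drain,
}

/// Map pool utilization to a cascade state. Only the 80% Warning
/// boundary is documented; 90% and 98% are assumed for the sketch.
fn cascade_state(utilization_pct: u8) -> Cascade {
    match utilization_pct {
        0..=79 => Cascade::Healthy,
        80..=89 => Cascade::Warning,  // pool_pressure_pct default
        90..=97 => Cascade::Critical, // assumed threshold
        _ => Cascade::Drain,          // assumed threshold
    }
}

/// The documented budget partition:
/// 60% RX, 10% TX critical, 25% TX bulk, 5% scratch.
fn partition(budget: usize) -> (usize, usize, usize, usize) {
    (
        budget * 60 / 100,
        budget * 10 / 100,
        budget * 25 / 100,
        budget * 5 / 100,
    )
}
```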