Preamble

Rust with Tokio is the second implementation of the workload defined in A Language-Agnostic Concurrent Workload for 2025 Comparisons. The shape is intentionally dull: bounded mpsc (or similar) channels, worker tasks that run the same deterministic compute plus a tokio::time::sleep I/O simulation, and aggregator tasks that collect completion metrics. Dull harnesses produce comparable numbers; clever harnesses produce blog posts.


Channels, backpressure, and bounds

Rust makes backpressure explicit at the type level when you use bounded channels: send awaits capacity. That aligns with the requirement in A Language-Agnostic Concurrent Workload for 2025 Comparisons that producers cannot pretend infinite buffering exists. Channel capacities are kept identical to the BEAM mailbox bounds from Gleam and the BEAM Scheduler Under Load unless a run explicitly targets a different policy, and any deviation is documented.


Runtime configuration

When scaling behavior is under test, the Runtime is constructed with an explicit worker-thread count rather than the default. Tokio is not magic; CPU-bound work still needs spawn_blocking or a dedicated thread pool, otherwise you starve the executor and blame "async" for what is really a cooperative-blocking mistake.


Safety: Send across task boundaries

The compiler enforces Send (and sometimes Sync) when spawning tasks. That friction is design feedback: shared mutable state must be synchronized or owned by a single task with a message API. Send, Sync, and Fearless Concurrency in Rust revisits Send/Sync in depth; this post’s benchmark is where those bounds first bite aggregators.


Measurement parity

The same throughput, p95/p99 completion, error counts, and recovery interval after injected failures are recorded as in March. Without parity, May becomes storytelling.


Code: bounded channel + worker task (mirrors the Erlang loop in Gleam on the BEAM: Actors, Types, and OTP Primitives)

use tokio::sync::mpsc;
use std::time::Duration;

type JobId = u64;

#[derive(Clone)]
struct Job {
    id: JobId,
    work_units: u32,
    sleep_ms: u64,
}

#[tokio::main]
async fn main() {
    let queue_cap = 256_usize; // same as BEAM mailbox policy in January
    let (tx, mut rx) = mpsc::channel::<Job>(queue_cap);

    let worker = tokio::spawn(async move {
        while let Some(job) = rx.recv().await {
            for _ in 0..job.work_units {
                std::hint::black_box(job.id ^ job.work_units as u64);
            }
            tokio::time::sleep(Duration::from_millis(job.sleep_ms)).await;
            // emit JSONL metric: job_done
        }
    });

    // producers: tx.send(...).await respects backpressure (bounded channel)
    // inject crashes: std::panic::catch_unwind on isolated threads or worker-supervisor pattern (June)

    drop(tx); // close the channel so the worker's recv loop sees None and exits

    let _ = worker.await;
}

Parity notes: queue_cap must match the BEAM run’s credit limit; work_units and sleep_ms distributions must use the same seed as March. black_box stops LLVM from deleting the “compute” loop during optimization—same spirit as Erlang busy/1.

CPU-heavy work: if work_units is large, move the loop to tokio::task::spawn_blocking or a dedicated pool so you do not starve the async executor; the NIF analogy from BEAM Scheduler Internals: A Practitioner's View applies to blocking Tokio's worker threads.


Conclusion

This post shows the workload ports cleanly to async Rust. Rust versus Gleam on the Same Bench: What the Numbers Suggest compares BEAM and Tokio results on equal footing; Supervision Trees and Rust Task Hierarchies contrasts OTP supervision with Rust’s panic and join ergonomics.