Preamble
Gleam compiles to Erlang on the BEAM and adds a static type system and syntax that feel closer to ML-family languages than to Erlang’s Prolog-flavored surface. For someone coming from Python or Java services, it is a friendly on-ramp to actors, message passing, and OTP—with the compiler nagging you about impossible cases before production does.
This post is the first implementation pass against the concurrent workload spec from A Language-Agnostic Concurrent Workload for 2025 Comparisons: processes instead of shared mutable heaps, mailboxes instead of mutex soup.
Processes and messages
BEAM processes are cheap. You spawn many of them, send messages asynchronously, and avoid sharing memory by default. That isolation is the platform’s central bet: failure is localized unless you explicitly link processes.
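A minimal sketch of that model in plain Erlang, the same semantics Gleam inherits: spawn an isolated process, send it a message asynchronously, and wait for its reply. The module and message names here are illustrative.

```erlang
-module(ping).
-export([demo/0]).

%% Spawn a process that echoes one message back to its parent.
demo() ->
    Parent = self(),
    Pid = spawn(fun() ->
        receive
            {Parent, Msg} -> Parent ! {self(), Msg}
        end
    end),
    Pid ! {Parent, hello},
    receive
        {Pid, Reply} -> Reply
    end.
```

Nothing is shared: the spawned process has its own heap, and if it crashes, the parent is unaffected unless the two are linked.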
Gleam’s FFI to Erlang/OTP lets you call into supervisors, gen_server-shaped patterns, and battle-tested libraries while keeping Gleam modules at the edge for typed orchestration.
OTP primitives in the mental model
Supervisors encode restart strategies—which children restart, how often, and whether siblings come along for the ride. Monitors and links express “tell me if this dies” versus “die with this.” Those primitives are not boilerplate trivia; they are how you implement recovery when the workload from A Language-Agnostic Concurrent Workload for 2025 Comparisons injects crashes.
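The monitor half of that distinction can be sketched in a few lines of Erlang: the watcher receives a 'DOWN' message when the watched process dies, rather than dying with it.

```erlang
-module(watch).
-export([monitor_demo/0]).

%% Monitor: "tell me if this dies" — the watcher gets a 'DOWN'
%% message instead of crashing alongside the watched process.
monitor_demo() ->
    {Pid, Ref} = spawn_monitor(fun() -> exit(boom) end),
    receive
        {'DOWN', Ref, process, Pid, Reason} -> Reason
    end.
%% With spawn_link/1 the same exit(boom) would, by default,
%% take the watcher down too unless it traps exits.
```

Supervisors are built on exactly these primitives: they trap exits from linked children and apply a restart policy.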
Mapping the workload
Workers are modeled as process families with explicit mailboxes. Producers respect bounded mailboxes or explicit backpressure policies from the spec. Collectors may be a dedicated process receiving tallies—message passing keeps hot paths easier to reason about than ad hoc shared counters, though you can still bottleneck if the design is careless.
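A collector in this shape is a single receive loop that owns the tally; a minimal sketch, assuming workers send {done, JobId} messages and the module name is illustrative:

```erlang
-module(collector).
-export([start/1]).

%% Spawn and register a collector that counts Expected completions,
%% then reports the total back to the caller.
start(Expected) ->
    Parent = self(),
    Pid = spawn(fun() -> loop(Expected, 0, Parent) end),
    register(collector, Pid),
    Pid.

loop(Expected, Done, Parent) when Done =:= Expected ->
    Parent ! {all_done, Done};
loop(Expected, Done, Parent) ->
    receive
        {done, _JobId} -> loop(Expected, Done + 1, Parent)
    end.
```

Because only this process touches the counter, there is no lock and no shared heap—serialization through its mailbox is the synchronization.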
Debugging early
Structured logs keyed by process id (and correlation IDs if HTTP sits upstream) pay dividends immediately. The BEAM’s introspection culture—observer, remote shells, tracing—rewards operable code. “It works on my laptop” is insufficient; traces should survive load tests.
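Erlang’s logger supports per-process metadata, so every log line from a worker can carry its pid and correlation keys without threading them through call sites; a small sketch, with the metadata keys being illustrative:

```erlang
%% Attach process-scoped metadata once, near worker startup:
logger:set_process_metadata(#{role => worker}),
%% then individual log calls add per-event metadata:
logger:info("job done", #{pid => self(), job_id => JobId}).
```

The same metadata survives into tracing and remote-shell sessions, which is what makes “traces should survive load tests” practical.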
Code shape: typed jobs in Gleam, processes on the BEAM
Gleam gives you algebraic data types for the workload job envelope from A Language-Agnostic Concurrent Workload for 2025 Comparisons; the runtime still delivers messages the Erlang way. A minimal type you can share across modules:
pub type Job {
  Job(id: Int, tag: String, work_units: Int, sleep_ms: Int)
}
The worker loop is still “receive, compute, reply”—here in Erlang so you can run it in erl today; Gleam on the BEAM compiles to Erlang modules and uses the same mailbox and scheduler semantics:
-module(worker).
-export([start/0, loop/0]).

start() -> spawn(fun loop/0).

loop() ->
    receive
        {job, JobId, Work, SleepMs} ->
            busy(Work),
            timer:sleep(SleepMs),
            %% assumes a process registered as collector
            collector ! {done, JobId},
            loop()
    end.

%% Burn CPU by counting down N work units.
busy(0) -> ok;
busy(N) -> busy(N - 1).
Backpressure pattern: use a blocking send to a bounded mailbox (or a credit/token process) so producers cannot allocate without bound—mirroring the “bounded queue” rule in A Language-Agnostic Concurrent Workload for 2025 Comparisons. In Gleam, that is often a small OTP-style process that owns the queue depth counter; the types keep the message variants honest even when the VM is dynamically typed underneath.
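One way to sketch that credit/token process in Erlang (the module and message names are illustrative, not from the spec): producers must acquire a credit before sending, and credits are returned on completion, so the number of in-flight jobs is bounded.

```erlang
-module(credits).
-export([start/1, acquire/1, release/1]).

%% A token process owning N credits; acquire/1 blocks when none remain.
start(N) -> spawn(fun() -> loop(N) end).

acquire(Pid) ->
    Pid ! {acquire, self()},
    receive granted -> ok end.   %% producer blocks here: backpressure

release(Pid) -> Pid ! release.

%% No credits left: only match release; pending acquire requests
%% simply wait in the mailbox (selective receive) until one returns.
loop(0) ->
    receive release -> loop(1) end;
loop(N) ->
    receive
        {acquire, From} -> From ! granted, loop(N - 1);
        release -> loop(N + 1)
    end.
```

Selective receive does the queueing for free: blocked producers sit in the credit process’s mailbox in arrival order and are granted as credits come back.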
Supervision hook: workers are started under a one_for_one supervisor with MaxR/Within tuned so injected crashes from that spec become measurable recovery intervals, not infinite restart storms—Supervision Trees and Rust Task Hierarchies goes deeper on intensity windows.
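A sketch of that supervisor, assuming the worker module grows a worker:start_link/0 wrapper (a hypothetical name; the worker shown earlier only exports start/0). The intensity and period fields are the MaxR/Within knobs:

```erlang
-module(worker_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% Allow at most 5 restarts within 10 seconds before the supervisor
    %% itself gives up—injected crashes become bounded recovery
    %% intervals instead of infinite restart storms.
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    Child = #{id => worker,
              start => {worker, start_link, []},
              restart => permanent},
    {ok, {SupFlags, [Child]}}.
```

With one_for_one, a crashing worker is restarted alone; its siblings keep draining their mailboxes.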
Conclusion
Gleam does not replace Erlang’s runtime; it rides it with types and ergonomics. Gleam and the BEAM Scheduler Under Load runs the benchmark harness hard and watches scheduler fairness; Rust and Tokio: The Same Concurrent Workload in Type-Safe Threads ports the same workload to Tokio for a typed async comparison.