A tail-call interpreter in (nightly) Rust

Last week, I wrote a tail-call interpreter using the become keyword, which was recently added to nightly Rust (seven months ago is recent, right?). It was a surprisingly pleasant experience, and the resulting VM outperforms both my previous Rust implementation and my hand-coded ARM64 assembly. Tail-call-based techniques have been all the rage recently (see this overview); consider this my trip report from implementing a simple but non-trivial system.

For those keeping track at home, this is the latest in my exploration of high-performance emulation of the Uxn CPU, which runs a bunch of applications in the Hundred Rabbits ecosystem. If you want to read the whole saga, here's the list:

Experimenting with LLMs proved controversial, which wasn't a surprise; I'm pleased to declare that all of the tail-call code is human-written, and the new backend can be used as a substitute for the x86 assembly backend at a minor performance penalty. (This blog post is also entirely human-written, per my personal standards.)

The next few sections summarize previous work, so feel free to skim them if you've done the reading and jump straight to tail calls in Rust.

Uxn is a simple stack machine with 256 instructions. The whole CPU has just over 64K of space, split between a few memories. The simplest emulator reads a byte from RAM at the program counter, then calls into an instruction (which may update the program counter). There are 256 instructions, many of which are parameterized with flags; INC, which increments the top byte on the stack, will be our running example. The memories, the dispatch loop, and INC itself are all sketched below.
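As a concrete (if illustrative) sketch, here's one plausible shape for that state; the names and types here are mine rather than Raven's actual definitions, and assume the usual Uxn layout of 64 KiB of main RAM plus a few small fixed-size memories:

```rust
/// Illustrative CPU state: 64 KiB of RAM plus a few small memories,
/// adding up to "just over 64K of space".
pub struct Uxn {
    pub ram: Box<[u8; 65536]>, // main memory, addressed by the program counter
    pub stack: Stack,          // working (data) stack
    pub ret: Stack,            // return stack
    pub dev: [u8; 256],        // device (I/O) memory
    pub pc: u16,               // program counter
}

/// A 256-byte stack with a one-byte index pointing at the top element.
pub struct Stack {
    pub data: [u8; 256],
    pub index: u8,
}
```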
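The dispatch loop is then a one-byte fetch followed by a 256-way branch. Again, a sketch rather than the real implementation, assuming BRK (opcode zero) halts execution:

```rust
impl Uxn {
    /// The simplest emulator: read a byte from RAM at the program
    /// counter, bump the counter, then call into the instruction
    /// (which may update the program counter again, e.g. for jumps).
    pub fn run(&mut self) {
        loop {
            let op = self.ram[self.pc as usize];
            self.pc = self.pc.wrapping_add(1);
            match op {
                0x00 => break,      // BRK: halt the interpreter
                0x01 => self.inc(), // INC, shown below
                // ... 254 more opcodes, many of them flag variants ...
                _ => unimplemented!(),
            }
        }
    }
}
```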
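And INC itself, with the mode flags elided for brevity:

```rust
impl Uxn {
    /// INC: increment the top byte on the data stack. The real
    /// implementation is also parameterized over the mode flags.
    pub fn inc(&mut self) {
        let i = self.stack.index as usize;
        self.stack.data[i] = self.stack.data[i].wrapping_add(1);
    }
}
```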
All of the opcode implementations are inlined into the main op function, but there's room for improvement: some values are stored in memory rather than registers, and the main op selection branch is unpredictable. In our assembly implementation, we can instead use threaded code (specifically token threading): we store all of the CPU state in registers, then end each instruction with a jump to the subsequent instruction. This distributes the dispatch operation across every opcode, making it easier for the branch predictor to learn sequences of opcodes in the program. Overall speedups were significant: 40-50% faster on ARM64, and about 2× faster on x86-64.

Unfortunately, this approach requires maintaining about 2000 lines of code, and is incredibly unsafe. In my x86 port, I introduced an out-of-bounds write, which stomped on a few bytes outside of device RAM; the only symptom was that the fuzzer would segfault when exiting after running a very particular program. So, what's to be done?

Tail calls in Rust

We'd like to get the same behavior as our assembly implementation – VM state stored in registers, dispatch at the end of each opcode – without hand-writing every instruction in assembly. Fortunately, there is hope! The core idea has almost certainly been reinvented a bunch of times, but I first encountered the idea of tail-call interpreters in the Massey Meta Machine writeup, which was a mind-expanding read. There are two pieces: the VM state travels through function arguments (so the compiler can keep it in registers), and each opcode implementation ends by calling the next one.

We could write this today in Rust. To reuse our existing Uxn opcode implementations, our inc function reconstructs the core Uxn object at the beginning of the function, calls its inc method, then deconstructs it again when calling the next operation. There's a lot of boilerplate, and it's tempting to just pass a &mut Uxn argument, but that removes the "state is stored in registers" benefit; we'll remove the boilerplate with a macro later on.

Unfortunately, there's a problem with this implementation: even in a release build, the compiler does not optimize out the stack. As we execute more and more operations, the stack gets deeper and deeper until it inevitably overflows. We need to tell the compiler to generate a br (branch to register) instead of a bl (branch-and-link) instruction, and – more importantly – not to allocate any persistent space on the stack. In other words, we need a tail call. In nightly Rust, this is a one-word fix:
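That word is become, placed before the final call. Here's a sketch of the resulting function; the names and signatures are illustrative rather than Raven's actual ones (the real implementation passes more state and, as discussed below, declares handlers with a different calling convention), but the shape is as described above: reconstruct, execute, deconstruct, then tail-call the next opcode through the jump table.

```rust
#![feature(explicit_tail_calls)] // nightly feature gate for `become`

pub struct Stack {
    pub data: [u8; 256],
    pub index: u8,
}

/// Every opcode handler shares one signature, with the VM state passed
/// as plain arguments so that it can stay in registers.
type Handler = fn(u16, &mut [u8; 65536], &mut Stack, &mut Stack);

/// Jump table with one handler per opcode; every slot is `inc` here
/// only to keep the sketch short and compilable.
static TABLE: [Handler; 256] = [inc; 256];

/// Borrowed view of the interpreter state, standing in for the core
/// `Uxn` type so its existing opcode implementations can be reused.
struct Uxn<'a> {
    pc: u16,
    ram: &'a mut [u8; 65536],
    stack: &'a mut Stack,
    ret: &'a mut Stack,
}

impl Uxn<'_> {
    fn inc(&mut self) {
        let i = self.stack.index as usize;
        self.stack.data[i] = self.stack.data[i].wrapping_add(1);
    }
}

fn inc(pc: u16, ram: &mut [u8; 65536], stack: &mut Stack, ret: &mut Stack) {
    // Reconstruct the core object and reuse its implementation...
    let mut vm = Uxn { pc, ram, stack, ret };
    vm.inc();
    // ...then deconstruct it again, putting the state back into
    // plain values that can live in registers.
    let Uxn { pc, ram, stack, ret } = vm;

    // Fetch the next opcode and tail-call its handler. With a plain
    // call, every instruction would push a new stack frame; `become`
    // guarantees the current frame is replaced instead.
    let op = ram[pc as usize];
    become TABLE[op as usize](pc.wrapping_add(1), ram, stack, ret);
}
```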
With this change in place, the Rust compiler makes a guarantee: "When tail calling a function, instead of its stack frame being added to the stack, the stack frame of the caller is directly replaced with the callee's." That's it, everything works! End of writeup!

Okay, okay, I've got a little more to say. First, I promised a macro to eliminate the boilerplate. As always, it's a horrifying thing to behold; it comes from the actual implementation, so some types are slightly different than the simplified code earlier in this writeup. The macro is very awkward, but it lets us declare all three kinds of functions. You don't need to spend much time puzzling over the macro; we're firmly in "if it compiles, it works" territory here. It's also worth noting that this is still 100% safe Rust: our #![forbid(unsafe_code)] attribute remains untriggered.

The compiler does a good job of inlining and stripping functions down to their essential operations; the boilerplate of constructing and deconstructing the UxnCore is fully optimized out. I see two main differences from our hand-written implementation; one of them is how the jump table is addressed, which we could fix by also threading the table through the argument list. That change improves performance on x86 (but doesn't seem to matter on ARM64).

Speaking of performance, how does it do? I've got two main benchmarks: the Fibonacci microbenchmark and the Mandelbrot example. On my laptop (M1 Macbook), I'm pleased to report that I'm no longer beating the compiler: the tail-call interpreter handily beats my hand-written assembly on both benchmarks.

Now, let's take a big sip of seasonally-inappropriate tea and test on x86... oh no. It's outperforming the VM, but is still losing to the assembly backend by a noticeable amount (especially in the Fibonacci microbenchmark). What's going on here?

Let's start by looking at the generated code for INC, our simplest opcode. This implementation looks broadly fine: it's doing the minimal number of reads and writes, and is basically what I'd expect. Indeed, incrementing the byte by address may be more efficient than my assembly, which loads and stores that byte. (Also, I didn't harp on this before, but declaring these functions as extern "rust-preserve-none" is very important for the x86 implementation. The default calling convention doesn't use enough registers for all of our arguments, which adds tremendous amounts of overhead.)

So, INC is inoffensive. Let's now look at ADD2, which adds the top two 16-bit values on the stack. The hand-tuned implementation from the assembly backend is 79 bytes, and does the bare minimum number of reads and writes: 4 byte reads + 2 byte writes to the data stack, one byte read from RAM (to get the next opcode), and one qword read from the jump table (to get the jump target). In contrast, the compiled tail-call implementation is 121 bytes and – more concerningly – spills and restores two full 64-bit registers to the stack. This looks like (to use a technical term) real bad codegen. On one hand, this is an unfinished nightly feature in rustc, so it's understandable; on the other hand, I'm surprised this isn't well optimized by the LLVM backend, even if the rustc side is immature. This blog post is getting unwieldy, so I won't speculate too much further.

WebAssembly also supports tail calls, and Raven supports compilation to WebAssembly! I wonder how the tail-call interpreter will fare, compared to both native performance and the simple VM interpreter? I can only benchmark the Mandelbrot example; the Fibonacci program is too fast (given limitations on web timer resolution). Surprise, it's terrible: 1.2× slower on Firefox, 3.7× slower on Chrome, and 4.6× slower in wasmtime. I guess patterns which generate good assembly don't map well to the WASM stack machine, and the JITs aren't smart enough to lower it to optimal machine code. wasmtime did manage impressive performance on the traditional VM implementation, though, coming within a few percentage points of the native Rust build! (All of these tests were on my M1 Max laptop; wasmtime was built from e9e1665c5, with Firefox 149.0 and Chrome 146.0.7680.178.)

The tail-call interpreter PR is merged, and has been deployed in the 0.3.0 release. When enabled, it's the default on ARM64 systems, and the second choice on x86-64 systems (if the native feature is not enabled). I'd be very curious to get tips on improving x86 and WASM performance; ping me via email or on social media.
