Does rustc perform better on a ramdisk? (no)

So, Rust is slow to compile (see https://vfoley.xyz/rust-compile-speed-tips/ for some background), and I thought it’d be interesting to see whether disk I/O made a big difference on a moderately sized project. Would writing to a RAM disk speed things up?

Quick answer – nope. Not at all.

The approach was to run this clean build and time it:

rm -rf target/* && time cargo build

For the disk runs, nothing clever – just building as normal, with `target` as a standard directory on the SSD.

For the RAM disk runs, I wrote a little script to prepare a RAM disk – see this gist – and created a 6 GB RAM disk to serve as the `target` dir like so:

cd "$HOME/src/my-rust-project/workspaces"
mv target target.disk
create-ram-disk 6 "$HOME/src/my-rust-project/workspaces/target"
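
The gist itself isn’t reproduced here, but a create-ram-disk helper along these lines would do the job on Linux (a minimal sketch assuming a tmpfs mount; the real gist may differ):

#!/usr/bin/env bash
# create-ram-disk SIZE_GB MOUNT_POINT -- hypothetical sketch, not the original gist
set -euo pipefail
size_gb="$1"
mount_point="$2"
mkdir -p "$mount_point"
# tmpfs is backed by RAM (and swap); 'size' caps how large it can grow
sudo mount -t tmpfs -o "size=${size_gb}G" tmpfs "$mount_point"

Afterwards, sudo umount target && rmdir target && mv target.disk target puts everything back on disk.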

The summary – it’s no different. At all.

# Results
DISK1: real   2m28.139s | user   22m20.473s | sys   1m49.867s
DISK2: real   2m35.500s | user   23m7.738s  | sys   1m55.345s
RAM1:  real   2m37.218s | user   22m43.319s | sys   1m53.874s
RAM2:  real   2m27.837s | user   22m58.143s | sys   1m56.239s

Well, maybe that saves you an afternoon of mucking about, or gives you enough info to waste some of your own time 🙂

3 thoughts on “Does rustc perform better on a ramdisk? (no)”

  1. Hi Steve,
    Could the filesystem cache be making these results similar? Maybe if none of the source were in the cache, the disk version would be slower?

    • Very possibly!

      In other languages I’ve used, particularly C#, the compiler copied loads of interim files. For example, a typical situation was to have a C# solution with, say, 25 projects. Each project would reference other projects – a `common` project linked to a `datalayer` project, which linked to a `service` project, which linked to a `webapp` project, say. Whenever you compiled something, it would take a copy of all the projects it depended on – so you’d build `common`, and then when you built `datalayer`, it’d copy `common.dll` into `datalayer/bin`. Then when you compiled `service`, it’d copy both `datalayer.dll` and `common.dll` into `service/bin` … and so on. More than half of the ‘compile’ time was actually disk I/O.

      It got to the point on that project where my boss and I realised we’d save around £50,000 of developer time a year by buying devs SSDs rather than spinning rust. So, that project left a scar! And I guess I was hoping something similar would happen here. But it seems like either there’s far less I/O going on here thanks to a better compiler design, or the SSD and whatever disk/fs caching is occurring is essentially negating any issues in the compiler.

      I would have hoped for a _bit_ of speedup!
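
      One way to rule the page cache in or out would be to drop it before the on-disk run and re-time the build; a rough sketch, assuming Linux and sudo access:

      sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
      rm -rf target/* && time cargo build

      If the on-disk build slows right down after that, the cache is doing the heavy lifting; if it doesn’t, the compiler genuinely isn’t I/O-bound on this project.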

      • The £50,000 was on a team of fewer than ten devs, IIRC. Taking compile times from 20m to 5m, say, where developers compile tens of times a day, basically gives you your day back (saving 15 minutes a build, twenty-odd times a day, is roughly five hours per dev). Rust’s incremental compiler is much better for devs in many cases, though.
