Note: this comment is long because it matters: the idea that “systemd is always better, no matter the situation” is absolutely dangerous for the entire FOSS ecosystem, where both diversity and rationality are essential.
“Systemd can get more efficient than running hundreds of poorly integrated scripts”
In theory, yes. In practice, systemd is a huge, monolithic single point of failure, with several bottlenecks and reinvented wheels galore. And OpenRC is a far cry from “hundreds of poorly integrated scripts”.
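For context, here is roughly what an OpenRC service looks like (a minimal sketch; the daemon name, binary path and options below are hypothetical): a short, declarative file handled by openrc-run, not an ad-hoc shell script.

```
#!/sbin/openrc-run
# Hypothetical service definition, for illustration only.
# openrc-run supplies the start/stop/status logic; the script just
# declares what to run and what it depends on.

command="/usr/sbin/mydaemon"
command_args="--config /etc/mydaemon.conf"
command_background="yes"
pidfile="/run/mydaemon.pid"

depend() {
	need net
	use logger
}
```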
I think it is crucial that we stop having dogmatic “arguments” built on argumentum ad populum or appeals to authority, or we will end up recreating a Microsoft-like environment in free software.
Let’s stop trying to shoehorn popular solutions into ill-suited use cases just because they are used elsewhere, under different constraints.
Systemd might make sense for most people on desktop targets (CPUs with several cores and several GB of RAM) because of the convenience and comfort it offers (which systemd excels at, let’s be honest), but as we approach “embedded” targets, simpler and smaller is always better.
And no matter how much optimisation you cram into the bigger software, it will just not perform like the simpler software, especially with limited resources.
Now, I take OpenRC as an example here because it is AFAIR the default in Devuan, but Devuan also supports runit, sinit, s6 and shepherd.
And with s6 in the picture, you just can’t say “systemd is flat-out better in all cases”; that would simply be stupid.
And Docker’s official images initially used Ubuntu; they explicitly switched to Alpine in 2016 for performance, to minimise overhead.