Autoserver is a GNU/Linux system that I made to support servers on various embedded systems and single-board computers. The project is designed with a few main goals in mind:
- A read-only, compressed filesystem (squashfs) to save space and reduce wear from excessive writes to micro SD cards. (The actual root filesystem structure is slightly more involved; see "Root filesystem structure" below.)
- Extensive use of Linux containers to provide flexibility when the existing root filesystem is insufficient for a particular need. This requires building a custom kernel with container support, since the kernel is shared between all containers (in general, though, any kernel that claims to support Docker in userns-remap mode will also work with ctrtool).
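To illustrate the first goal, a compressed read-only image can be produced with squashfs-tools. This is only a sketch: the staging directory, image name, and compression options below are illustrative, not Autoserver's actual build commands.

```shell
#!/bin/sh
# Sketch: build a compressed, read-only squashfs image from a staged
# root directory. Requires squashfs-tools; all names are illustrative.
command -v mksquashfs >/dev/null 2>&1 || { echo "squashfs-tools not installed"; exit 0; }

stage=$(mktemp -d)
mkdir -p "$stage/bin" "$stage/etc"
echo "hello from squashfs" > "$stage/etc/motd"

# -comp xz trades CPU time for a smaller image; -noappend overwrites any
# existing image instead of appending to it.
mksquashfs "$stage" system.img -comp xz -noappend

# The image would then be mounted read-only, e.g.:
#   mount -t squashfs system.img /rofs_root
ls -l system.img
```

Because squashfs is generated offline and mounted read-only, nothing on the SD card is written at runtime, which is the wear-reduction point above.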
Autoserver and its container modules have been used on server systems, routers, embedded devices, virtual machines, and even simple client systems.
Source code for the Autoserver build system can be found at https://git2.peterjin.org/autoserver. The LICENSE file in the repository applies only to the scripts that make up the build system (i.e. the files in the repo); the finished Autoserver product naturally includes components under other licenses (such as the Linux kernel, which is GPL-licensed).
Autoserver currently only fully supports x86-64; work is in progress to support ARM and ARM64 devices as well.
Root filesystem structure
Most GNU/Linux systems mount an actual disk partition (like /dev/sda1) as the root (/) filesystem. Some systems (like certain OpenWRT images) instead use a squashfs as the root, which makes the whole thing read-only; others combine such a squashfs with an overlay filesystem to make it writable again.
Autoserver, on the other hand, does none of that. Instead, it uses a tmpfs as the root filesystem. To keep programs stored on disk, symlinks in the tmpfs point into a squashfs mount containing all of the programs. The squashfs mount is itself a subdirectory of the tmpfs root, so the filesystem containing the programs can easily be unmounted and replaced if needed (e.g. when upgrading the system on the fly).
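As a rough illustration of this layout, the sketch below assembles a tmpfs-style root whose top-level directories are symlinks into a squashfs mount point. The path `/rofs_root` and the list of symlinked directories are assumptions for the sketch, not Autoserver's actual names, and the real tmpfs/squashfs mounts are shown only as comments so that it runs unprivileged.

```shell
#!/bin/sh
# Sketch of a tmpfs root with symlinks into a squashfs mount point.
# In a real early-boot environment this would instead start with:
#   mount -t tmpfs tmpfs "$root"
#   mount -t squashfs /path/to/system.img "$root/rofs_root"
# Here plain directories stand in for both mounts.
root=$(mktemp -d)
mkdir "$root/rofs_root"

# Pretend the squashfs provides the usual top-level directories.
for d in bin sbin lib usr etc; do
    mkdir "$root/rofs_root/$d"
done

# The root itself contains only relative symlinks into the squashfs mount,
# so unmounting/replacing /rofs_root swaps out all programs at once.
for d in bin sbin lib usr etc; do
    ln -s "rofs_root/$d" "$root/$d"
done

ls -l "$root"
```

Using relative symlink targets (`rofs_root/bin` rather than `/rofs_root/bin`) keeps the layout valid even when the root is inspected from outside, e.g. from a rescue system.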
Upgrading the system
Upgrading the system on-the-fly by replacing the squashfs image with a new one is currently a work in progress. The envisioned approach (completely untested) is:
- Create an empty directory called _upgrade on all versions of the root filesystem.
- When upgrading the system, the following actions are performed:
    mount -t squashfs new_root_img.img /rofs_root/_upgrade
    chroot /rofs_root pivot_root _upgrade _upgrade/_upgrade
    # Note that the actual operation, when implemented, will only involve a
    # single command rather than separate pivot_root and chroot commands.
    umount -l /rofs_root/_upgrade
    # At this point, we can restart all programs and services one by one.
In doing so, we atomically swap the old root filesystem for the new one using the pivot_root operation, even though the swap happens on a subdirectory of the tmpfs rather than on the actual /.
Because the old root filesystem is swapped out and replaced wholesale, systemd cannot be used as the init system: it depends on libraries in the old root filesystem, and there are too many of them to copy and cache in RAM. To work around this, we wrote our own minimal init system that depends only on the C library (libc.so.6 is small enough to be copied into the tmpfs); alternatively, Busybox init could be used.
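To make the constraint concrete: a dynamically linked program only survives the swap if every shared object it depends on has been copied into the tmpfs beforehand. `ldd` shows that list; for a libc-only init it is essentially just libc and the dynamic loader, whereas systemd pulls in far more. (`/bin/sh` below is just a stand-in binary; the exact library paths depend on your distribution.)

```shell
#!/bin/sh
# List the shared-library dependencies that would have to live inside the
# tmpfs for a program to keep running across the root filesystem swap.
# A libc-only init needs just libc plus the dynamic loader from this list.
ldd /bin/sh
```

Anything `ldd` prints here that resolves into the squashfs would dangle the moment the old image is unmounted, which is exactly why the init's dependency list must stay tiny.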
If you actually need to use systemd, you can run it in a container.
To do list
- Fix high RAM usage when building the system.img squashfs.
- Make the Autoserver build more reproducible, for example by running all of the scripts in an Ubuntu or Debian live CD/DVD environment.
- Move vmlinuz and initrd.xz to /as_boot.
- "Docker-in-ctrtool" may require the rootless variant from upstream Docker (to be packaged in container module #2) rather than the one packaged by Ubuntu (currently used by container module #1's "generic"), possibly because of the "devices" cgroup: changing its allow/deny lists requires CAP_SYS_ADMIN in the initial user namespace.
- ARM32/ARM64 builds for container module #2.
- Container module #1 currently does not support building x86/ARM/ARM64 at the same time (i.e. each CPU architecture has to be built separately).