myOS is an opinionated BootC image ecosystem organized around a shared core substrate, a feature-complete full core composition, clear per-distro layers, and workstation image families that stay easy to extend.
Both Alma 9 and 10 now follow one core tier plus workstation image families:
- `core-full-alma9` / `core-full-alma10`
- `core-full-alma9-nvidia-open` / `core-full-alma10-nvidia-open`
- `core-full-alma9-nvidia-legacy`
- `gnome-alma9` / `gnome-alma10`
- `gnome-alma9-nvidia-open` / `gnome-alma10-nvidia-open`
- `gnome-alma9-nvidia-legacy`
- `cosmic-alma9` / `cosmic-alma10`
- `cosmic-alma9-nvidia-open` / `cosmic-alma10-nvidia-open`
- `cosmic-alma9-nvidia-legacy`
GNOME remains the default documented workstation path in the README examples and VM/ISO examples, while COSMIC is now a parallel supported workstation family.
NVIDIA streams are explicit:
- `-nvidia-open` uses the open kernel module path and is the default fit for newer supported GPUs.
- Alma 9 `-nvidia-legacy` is the older-GPU AI host lane. It pins the proprietary `nvidia-driver:580` stream and expects kernel-version-specific prebuilt `kmod-nvidia-*` packages on EL9 for Maxwell-, Pascal-, and similar legacy-supported GPUs.
- Alma 10 support in myOS is `-nvidia-open`.
For the legacy AI lane, keep the host driver and the AI userspace stack as separate decisions. The host stays on the pinned R580 proprietary branch, while older-GPU AI workloads should start from CUDA 12.6-class userspace stacks, such as PyTorch cu126, instead of assuming newer CUDA 13-era examples are the safest default for Maxwell/Pascal-class hardware.
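A minimal sketch of pinning the userspace stack to the CUDA 12.6 class on a legacy host. The wheel index URL is the standard PyTorch per-CUDA index; the virtualenv path is illustrative:

```shell
# Create an isolated environment for the AI userspace stack
python3 -m venv ~/venvs/cu126
source ~/venvs/cu126/bin/activate

# Install PyTorch built against CUDA 12.6 userspace (cu126 wheels)
# instead of defaulting to newer CUDA 13-era builds
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126
```

Keeping the userspace pin in a venv means the host's R580 driver branch can be updated (or not) independently of the workload stack.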
core-full-* is the single full base tier. It includes tenant commands, persistent-user enrollment tooling, Cockpit admin services, shared ROCm userspace, the Kubernetes CLI, and OpenClaw platform-host scaffolding for both distros. CUDA repo/toolkit content is layered through the NVIDIA image paths rather than the plain non-NVIDIA core-full-* images.
Alma 10 also carries the packaged RamaLama CLI in its distro-specific core layer as an operator utility for local model testing and artifact generation. It is not part of the tenant, openclaw-host, or per-user openquad runtime contracts.
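As an operator utility, RamaLama can pull and run a model locally without touching the tenant or openquad contracts. A hedged sketch using upstream RamaLama subcommands; the model reference is illustrative:

```shell
# Pull a small model and run it in RamaLama's rootless container runtime
ramalama pull ollama://tinyllama
ramalama run ollama://tinyllama

# List locally cached models
ramalama list
```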
For per-user OpenClaw on core-full-*, gnome-*, and cosmic-* images, the host contract is explicit:
- `openquad` manages the rootless per-user `openclaw.service` Quadlet runtime.
- The upstream `openclaw` CLI runs inside the container rather than through a separate host wrapper.
- Use `openquad exec -- ...` for one-shot commands or `openquad shell` to enter the runtime container interactively.
Typical flow:
```shell
openquad start
openquad exec -- openclaw onboard
openquad exec -- openclaw chat
openquad status
openquad doctor
```

The first `openquad start` renders the shipped per-user `openclaw.container` into `~/.config/containers/systemd/` and starts the generated user service for that user only. Lingering across logout still requires `myos persistent-user-enroll --user <name>`. The runtime config file itself lives at `~/.local/share/openclaw/openclaw.json`.

On first-time setups, `openquad start` now leaves the runtime up in a safe unconfigured mode so `openquad exec -- openclaw onboard` can finish the initial setup without a restart loop.
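After the first start, the rendered unit and the generated service can be inspected with standard systemd user tooling. A sketch using the unit and file names from the contract above:

```shell
# The rendered Quadlet unit for this user
ls ~/.config/containers/systemd/openclaw.container

# The user service systemd generated from it
systemctl --user status openclaw.service

# The per-user runtime configuration
cat ~/.local/share/openclaw/openclaw.json
```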
```
recipes/
  images/
    core/
    gnome/
    cosmic/
  layers/
    shared/
    alma9/
    alma10/
    features/
files/
  base/
  workstation/
  gnome/
  cosmic/
  agent/
dnf/
justfiles/
```
- `recipes/images` contains only buildable images.
- `recipes/layers/shared` contains the shared composition layers: `core`, `full`, `workstation-common`, `workstation-gnome`, `workstation-cosmic`, `nvidia-common`, `nvidia-cuda`, `nvidia-open`, and `nvidia-workstation`. `gnome-base` and `nvidia-gnome` remain as thin compatibility wrappers.
- `recipes/layers/alma9` and `recipes/layers/alma10` keep version differences explicit without spreading them across lots of tiny files.
- `modules/os-release-meta` runs first from `core`, before branding, and is the shared EL metadata source of truth.
- `core.yml` carries the shared EL Tailscale setup, base system, and common runtime defaults, while `full.yml` adds the platform-host scaffolding, shared AI/infrastructure tooling, and Kubernetes-ready operator surface used by the published `core-full-*` images.
- `workstation-common.yml` carries the shared workstation baseline plus boot-time display-manager reconciliation, `alma9/workstation.yml` and `alma10/workstation.yml` keep shared workstation drift explicit, and `workstation-gnome.yml` / `workstation-cosmic.yml` own the DE-specific session layers and desktop markers.
- `files/base`, `files/workstation`, `files/gnome`, `files/cosmic`, and `files/agent` mirror those concerns in the payloads.
Kubernetes is now included in the published core-full-* image line via recipes/layers/features/kubernetes-cli.yml, so the full core/workstation stack is ready to talk to Terraform and Kubernetes out of the box.
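A quick smoke check that the operator surface is present, assuming the packaged Kubernetes CLI is the standard `kubectl` (no cluster access required):

```shell
kubectl version --client
```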
The full architecture and migration notes live in docs/image-architecture.md. The rootless persistence model lives in docs/rootless-persistence.md. Runtime ownership and path contracts live in docs/runtime-contracts.md. Validation guidance lives in docs/validation.md. Current Alma 9 versus Alma 10 intentional drift is tracked in docs/alma-drift.md.
The workflow now publishes full core images before GNOME and COSMIC workstation builds run. Workstation recipes use ghcr.io/myos-dev/core-full-* as their base image, so the published workstation tags reflect the latest published full core tags.
myOS keeps two rootless planes:
- persistent or background rootless services for dedicated tenant accounts and explicitly enrolled login users with lingering
- desktop or session rootless services for workstation helpers that are separate from the per-user OpenClaw runtime, with the current session-bound helper surface still living in the GNOME layer
See docs/rootless-persistence.md for the full model, including owner-only versus per-user baseline units and the supported self-service Quadlet path.
myOS disables the stock bootc-fetch-apply-updates.service and bootc-fetch-apply-updates.timer so hosts do not surprise-reboot on their own. Use myos update-system or myos rebase, then reboot on your own schedule or during a maintenance window. Workstation images now re-apply the selected desktop environment's display-manager ownership on boot, so GNOME/COSMIC rebases do not require manual display-manager.service cleanup. For per-user runtime maintenance, myos update-user runs openquad update alongside the user-space refresh steps.
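With the stock units disabled, the update cadence is operator-driven. A minimal maintenance-window sketch, using the unit name above and the `myos` subcommands as documented here:

```shell
# Confirm the stock auto-update timer is not scheduled
systemctl list-timers bootc-fetch-apply-updates.timer --no-pager

# Stage the new image, then reboot when it suits you
sudo myos update-system
sudo systemctl reboot

# Per-user runtime maintenance (runs openquad update as part of the refresh)
myos update-user
```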
The supported OpenClaw operator workflow is exposed through the myos just wrapper instead of hand-editing tenant files under /srv/tenants. That flow remains the dedicated service-account path for persistent background OpenClaw hosting.
Supported tenant operator surface:
- `tenant-create`
- `tenant-configure`, `tenant-config`
- `tenant-secret-set`, `tenant-secret-import`
- `tenant-start`, `tenant-stop`, `tenant-restart`
- `tenant-status`, `tenant-validate`, `tenant-dashboard`
- `tenant-tailscale`
- `tenant-backup`, `tenant-restore`, `tenant-remove`
- advanced tenant-context entry points: `tenant-models`, `tenant-openclaw`
The important tenant storage split is runtime state vs durable workspace vs
host-managed secrets. The current zone-c/* path names are legacy labels, not
an active multi-zone product concept.
Internal implementation helpers under /usr/local/libexec/myos/tenant-* still exist for reconciliation, rendering, and scaffolding, but they are not intended as the stable day-to-day operator surface.
Common tenant flows:
```shell
myos tenant-create --tenant demo
myos tenant-secret-set --tenant demo --key OPENROUTER_API_KEY
myos tenant-configure --tenant demo --model openrouter/anthropic/claude-sonnet-4-5
myos tenant-start --tenant demo
myos tenant-status --tenant demo
```

For persistent login users, use the separate enrollment flow:
```shell
myos persistent-user-enroll --user alice
myos persistent-user-set-owner --user alice
myos persistent-user-install-quadlet --file ./my-api.container --enable
```

For an owner-friendly hosted OpenClaw service that stays on the dedicated tenant path and is easy to expose over Tailscale, use the wrapper surface:
```shell
myos openclaw-host enable --user alice --model openrouter/anthropic/claude-sonnet-4-5
myos openclaw-host secret-set --key OPENROUTER_API_KEY
myos openclaw-host start
myos openclaw-host tailscale enable
myos openclaw-host qr
```

That wrapper assigns the persistent-user owner role to the selected login user, but it keeps the remotely exposed service on the dedicated owner-openclaw tenant path instead of repurposing the per-user openquad runtime.
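Whether a user reaches persistence through enrollment or the owner wrapper, the lingering state it depends on can be checked with standard systemd tooling (the username is illustrative; enrollment remains the supported way to enable it):

```shell
# After persistent-user enrollment, expect "Linger=yes"
loginctl show-user alice --property=Linger
```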
Generic myos commands live in the shared core. Tenant commands, persistent-user commands, and the owner-host wrapper are layered into the core-full-* and workstation images for both distros.
- Start from `core-full-*`, which is now the single published core tier.
- Use `core-full-*` when the image needs tenant or OpenClaw host features.
- Treat RamaLama as an Alma 10 full-core add-on until Alma 9 has a supported package source.
- Use `nvidia-open.yml` for shared core/server NVIDIA support, `alma9/nvidia-legacy.yml` only for the EL9 proprietary R580 older-GPU AI path, and `nvidia-workstation.yml` for workstation-display NVIDIA extras.
- Build workstation images from `workstation-common` plus an explicit DE layer such as `workstation-gnome` or `workstation-cosmic`.
- Keep optional capabilities in `recipes/layers/features/` instead of silently growing every image.
```shell
TMP=$(mktemp) && \
curl -fsSL https://raw.githubusercontent.com/myos-dev/myOS/stable/image.toml -o "$TMP" && \
sudo podman pull ghcr.io/myos-dev/gnome-alma10:latest && \
sudo podman pull quay.io/centos-bootc/bootc-image-builder:latest && \
sudo podman run --rm -it --privileged --pull=newer \
  --security-opt label=type:unconfined_t \
  --network=host \
  -v /var/lib/containers/storage:/var/lib/containers/storage \
  -v "$(pwd)/output:/output" \
  -v "$TMP:/config.toml:ro" \
  quay.io/centos-bootc/bootc-image-builder:latest \
  --type qcow2 \
  --progress verbose \
  --use-librepo=false \
  --config /config.toml \
  ghcr.io/myos-dev/gnome-alma10:latest
rm -f "$TMP"
```

The old single-stream `*-nvidia` image names were replaced by explicit `*-nvidia-open` images, with Alma 9 also retaining `*-nvidia-legacy` for the supported proprietary older-GPU AI path.
The VM and ISO examples intentionally keep gnome-alma10 as the default documented workstation path. Swap in cosmic-alma10 if you want the COSMIC workstation image instead.
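For an already-installed host, `myos rebase` (see the update section) is the supported way to move between families; the underlying bootc operation it corresponds to (an assumption about what the wrapper performs) looks like:

```shell
# Switch an installed system from the GNOME family to COSMIC
sudo bootc switch ghcr.io/myos-dev/cosmic-alma10:latest

# Reboot into the new image on your own schedule
sudo systemctl reboot
```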
```shell
TMP=$(mktemp) && \
curl -fsSL https://raw.githubusercontent.com/myos-dev/myOS/stable/iso.toml -o "$TMP" && \
sudo podman pull ghcr.io/myos-dev/gnome-alma10:latest && \
sudo podman pull quay.io/centos-bootc/bootc-image-builder:latest && \
sudo podman run --rm -it --privileged --pull=newer \
  --security-opt label=type:unconfined_t \
  --network=host \
  -v /var/lib/containers/storage:/var/lib/containers/storage \
  -v "$(pwd)/output:/output" \
  -v "$TMP:/config.toml:ro" \
  quay.io/centos-bootc/bootc-image-builder:latest \
  --type iso \
  --progress verbose \
  --use-librepo=false \
  --config /config.toml \
  ghcr.io/myos-dev/gnome-alma10:latest
rm -f "$TMP"
```