Can I Run Docker Containers On S390x Machines?

2025-09-03 04:02:24

2 Answers

Gavin
2025-09-07 12:26:42
Curious and a bit geeky, I’d say yes — s390x can absolutely run containers — but you need to match the pieces.

If your box runs Linux for s390x (SUSE, RHEL, Ubuntu variants), you can install a container runtime like Docker or Podman from your distro repo. The trickiest part is images: containers are architecture-specific. Many official images already include s390x builds behind a multi-arch manifest, so pulling normally often gets the right binary. If not, you either build the image for s390x yourself or use QEMU emulation (qemu-user-static + binfmt_misc) with docker buildx to cross-build s390x images on an amd64 host; I’ve used that when no native image existed.
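As a concrete sanity check, this is the sort of thing I'd run on the s390x box once a runtime is installed; alpine:3.19 is just an example of an official multi-arch image, so substitute whatever you actually need:

docker pull alpine:3.19
docker image inspect --format '{{.Os}}/{{.Architecture}}' alpine:3.19   # expect linux/s390x
docker run --rm alpine:3.19 uname -m                                    # expect s390x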

A couple of quick checks: verify overlayfs and cgroups support in the kernel, inspect docker info to see the detected architecture, and prefer distro packages or Podman if Docker binaries aren’t provided. Performance is best with native s390x images; emulation is great for testing but slower. If you want to experiment, try pulling an image with --platform linux/s390x or set up a buildx builder to create and push multi-arch images — it’s very satisfying when it all lines up.
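If you go the buildx route, a rough sketch looks like this; the builder name and the registry/image tag are placeholders, and on a non-s390x host you'd need qemu/binfmt registered first:

docker buildx create --name zbuilder --use
docker buildx build --platform linux/amd64,linux/s390x \
  -t registry.example.com/myapp:latest --push .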
Charlotte
2025-09-08 07:52:38
Oh, yes — you can run containers on s390x machines, but there are some practical things to keep in mind before you dive in.

I've run Linux on big iron and toyed with containers there enough to know the main checklist: the machine needs a Linux distro built for s390x (think SLES, RHEL, Ubuntu on IBM Z or LinuxONE), and the container runtime must be available for that architecture. Many modern distros provide Docker or Podman packages for s390x directly through their repositories. I usually reach for Podman these days on enterprise Linux because it’s packaged well for s390x and works rootless, but plain Docker Engine is also possible — just install the distro-specific package rather than expecting Docker Desktop binaries.

A technical caveat that trips people up is image architecture. Containers are not magically architecture-agnostic: if you pull an image built for amd64 it won’t run natively on s390x. The good news is many official images are multi-arch (manifest lists) and include an s390x variant; you can do things like docker pull --platform linux/s390x image:tag or let Docker/Podman pick the right one automatically. If an s390x build doesn't exist, you can either build an s390x image yourself or use emulation with qemu-user-static and buildx. Emulation works (I’ve used qemu via buildx to cross-build and test), but expect a performance hit compared to native s390x images.
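For example, forcing the platform explicitly and then checking what you actually got (ubuntu:22.04 is just a stand-in for any multi-arch image):

docker pull --platform linux/s390x ubuntu:22.04
docker image inspect --format '{{.Architecture}}' ubuntu:22.04   # expect s390x
# Podman equivalent
podman pull --arch s390x docker.io/library/ubuntu:22.04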

Other practical tips: ensure the kernel supports required container features (cgroups and overlayfs usually), check docker info to confirm the architecture, and if you plan to build multi-arch images, set up buildx and register qemu with binfmt_misc (multiarch/qemu-user-static is handy). Also, don’t assume Docker Desktop workflows will apply — you’ll be working with CLI tooling on a server. Running containers on IBM Z is surprisingly smooth once images are available; it’s a powerful way to get modern workloads on mainframes and LinuxONE hardware, and it can feel oddly satisfying spinning up a tiny container on such a massive machine.
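For the setup side, this is roughly what I run; multiarch/qemu-user-static is the image mentioned above, and the last two lines are the kernel-feature sanity checks:

# one-off qemu/binfmt registration on a non-s390x build host
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# prove emulation works: this prints s390x even on an amd64 machine
docker run --rm --platform linux/s390x alpine:3.19 uname -m
# kernel sanity checks on the target host
grep overlay /proc/filesystems      # overlayfs present?
stat -fc %T /sys/fs/cgroup          # cgroup2fs means cgroups v2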

Related Questions

Why Do CI Pipelines Fail For S390x Builds?

3 Answers · 2025-09-03 23:13:31
This one always feels like peeling an onion of tiny architecture quirks — s390x builds fail in CI for a handful of recurring, predictable reasons, and I usually see several stacked at once. First, classic hardware and emulator gaps: there simply aren’t as many native runners for IBM Z, so teams rely on QEMU user/system emulation or cross-compilation. Emulation is slower and more fragile — long test runtimes hit CI timeouts, and subtle qemu version mismatches (or broken binfmt_misc registration) can cause weird exec failures.

Then there’s the big-endian twist: s390x is big-endian, so any code or tests that assume little-endian byte order (serialization, hashing, bit-twiddling, network code) will misbehave. Low-level code also trips people up — use of architecture-specific assembly, atomic ops, or CPU features (SIMD/AVX assumptions from x86 land) will fail at build or runtime.

Beyond that, package and toolchain availability matters. Docker images and prebuilt dependencies for s390x are less common, so CI jobs often break because a required binary or library isn’t available for that arch. Language runtimes sometimes need special flags: Rust/C/C++ cross toolchains must be set up correctly, Go needs GOARCH=s390x and matching C toolchains for cgo, and Java JITs may produce different behavior.

Finally, flaky tests and insufficient logging make diagnosis slow — you can get a “build failed” with little actionable output, especially under emulation. If I’m triaging this on a project I’ll prioritize getting a minimal reproduction on real hardware or a well-configured qemu runner, add arch-specific CI stages, and audit endian- and platform-specific assumptions in code and tests so failures become understandable rather than magical.
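When I'm triaging one of these, the first thing I try is reproducing the failure off-CI under emulation; a rough sketch (multiarch/qemu-user-static handles binfmt registration, and golang:1.22 stands in for whatever toolchain image your pipeline really uses):

# register qemu handlers on the amd64 host (one-time per boot)
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# re-run the failing step inside an s390x userland, with a generous timeout
docker run --rm --platform linux/s390x -v "$PWD":/src -w /src golang:1.22 \
  go test -timeout 60m ./...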

Which Cloud Providers Offer S390x Virtual Instances?

3 Answers · 2025-09-03 15:26:25
I've spent a lot of late nights tinkering with odd architectures, and the short story is: if you want true s390x (IBM Z / LinuxONE) hardware in the cloud, IBM is the real, production-ready option. IBM Cloud exposes LinuxONE and z Systems resources—both bare-metal and virtualized offerings that run on s390x silicon. There's also the 'LinuxONE Community Cloud', which is great if you're experimenting or teaching, because it gives developers time on real mainframe hardware without the full enterprise procurement dance. Outside of IBM's own public cloud, you'll find a handful of specialized managed service providers and system integrators (think the folks who historically supported mainframes) who will host s390x guests or provide z/VM access on dedicated hardware. Names change thanks to mergers and spinoffs, but searching for managed LinuxONE or z/VM hosting usually surfaces options like Kyndryl partners or regional IBM partners who do rent time on mainframe systems. If you don't strictly need physical s390x hardware, a practical alternative is emulation: you can run s390x under QEMU on ordinary x86 VMs from AWS, GCP, or Azure for development and CI. It’s slower but surprisingly workable for builds and tests, and a lot of open-source projects publish multi-arch s390x images on Docker Hub. So for production-grade s390x VMs, go IBM Cloud or a mainframe hosting partner; for dev, consider 'LinuxONE Community Cloud' or QEMU emulation on common clouds.

How Do I Cross-Compile Go Binaries For S390x?

3 Answers · 2025-09-03 10:17:32
Building Go for s390x is way easier than I used to expect — once you know the tiny set of knobs to flip. I’ve cross-compiled a couple of small services for IBM Z boxes and the trickiest part was simply remembering to disable cgo unless I had a proper cross-GCC toolchain. Practically, for a pure Go program the canonical command I use is very simple: set GOOS=linux and GOARCH=s390x, and turn off CGO so the build doesn’t try to invoke a C compiler. For example: GOOS=linux GOARCH=s390x CGO_ENABLED=0 go build -o myprog_s390x ./... That produces an s390x ELF binary you can check with file myprog_s390x. If you need smaller binaries I usually add ldflags like -ldflags='-s -w'. If your project uses cgo (native libs), you’ll need a cross-compiler for s390x and to set CC appropriately (e.g. CC=s390x-linux-gnu-gcc), but the package names and toolchain installation vary by distro. When I couldn’t access hardware I tested with qemu (qemu-system-s390x for full systems, or register qemu-user in binfmt_misc) to sanity-check startup. I also sometimes use Docker buildx or CI (GitHub Actions) to cross-build images, but for pure Go binaries the env-variable approach is the fastest way to get a working s390x binary on an x86 machine. If you run into weird syscalls or platform-specific bugs, running the binary on a real s390x VM or CI runner usually tells you what to fix.
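Spelled out as commands (myprog_s390x and the cross-compiler name are examples, and the cgo toolchain package varies by distro):

# pure Go: no C toolchain needed
GOOS=linux GOARCH=s390x CGO_ENABLED=0 go build -ldflags='-s -w' -o myprog_s390x .
file myprog_s390x        # expect: ELF 64-bit MSB executable, IBM S/390
# cgo: point CC at an s390x cross compiler
CC=s390x-linux-gnu-gcc CGO_ENABLED=1 GOOS=linux GOARCH=s390x go build -o myprog_s390x .
# quick smoke test under user-mode emulation (a static binary only needs qemu-s390x)
qemu-s390x ./myprog_s390x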

How Does S390x Performance Compare To X86_64?

2 Answers · 2025-09-03 16:48:12
I’m often torn between geeky delight and pragmatic analysis when comparing s390x to x86_64, and honestly the differences read like two different design philosophies trying to solve the same problems. On paper, s390x (the IBM Z 64-bit architecture) is built for massive, predictable throughput, top-tier reliability, and hardware-assisted services: think built-in crypto, compression, and I/O plumbing that shine in transaction-heavy environments. That pays off in real-world workloads like large-scale OLTP, mainframe-hosted JVM applications, and legacy enterprise stacks where consistent latency, hardware offloads (zIIP-like processors), and crazy dense virtualization are the priorities. Benchmarks you hear about often favor s390x for throughput-per-chassis and for workloads that leverage those special features and the mainframe’s I/O subsystem; it’s also built to keep the lights on with near-zero interruptions, which changes how you measure “performance” compared to raw speed.

By contrast, x86_64 CPUs from Intel and AMD are the everyman champions: higher clock speeds, aggressive single-thread boosts, and a monstrous software ecosystem tuned for them. For single-threaded tasks, developer tooling, desktop-like responsiveness, and the vast majority of open-source binaries, x86_64 usually feels faster and is far easier to optimize for. The compilers, libraries, and prebuilt packages are more mature and more frequently tuned for these chips, which translates to better out-of-the-box performance for many workloads. If you’re running microservices, cloud-native stacks, or latency-insensitive batch jobs, x86_64 gives you flexibility, cheaper entry costs, and a huge talent pool. Power efficiency per core and raw FLOPS at consumer prices also often lean in x86_64’s favor, especially at smaller scales.

When I’m actually tuning systems, I think about practical trade-offs: if I need predictable 24/7 transaction processing with hardware crypto and great virtualization density, I’ll favor s390x; if I need rapid scaling, a broad toolchain, and cheap instances, x86_64 wins. Porting code to s390x means paying attention to endianness, recompiling with architecture flags, and sometimes rethinking assumptions about atomic operations or third-party binaries. On the flip side, s390x’s specialty engines and massive memory bandwidth can make it surprisingly efficient per transaction, even if its per-thread peak may not match the highest-clocked x86 cores.

Honestly, the best choice often comes down to workload characteristics, ecosystem needs, and cost model — not a simple “better-or-worse” verdict — so I tend to prototype both where possible and measure real transactions rather than relying on synthetic numbers. I’ve had projects where a JVM app moved to s390x and suddenly cryptographic-heavy endpoints got cheaper and faster thanks to on-chip crypto, and I’ve also seen microservice farms on x86_64 scale out at way lower upfront cost. If you’re curious, try running your critical path on each architecture in a constrained test and look at latency distributions, throughput under contention, and operational overhead — that’s where the truth lives.

What Linux Distros Officially Support S390x Today?

3 Answers · 2025-09-03 10:53:11
Honestly, if you're digging into s390x support today, the landscape is surprisingly tidy compared to other niche architectures. In plain terms: the big mainstream distributions offer official support, because IBM Z and LinuxONE are widely used in enterprise settings. The names you should know: Debian (official s390x port with regular images and repos), Fedora (s390x is an official Fedora architecture with regular composes), openSUSE/Leap and Tumbleweed (plus SUSE Linux Enterprise which is the commercial offering) and Red Hat Enterprise Linux (RHEL) all provide official builds for s390x. Canonical also ships Ubuntu images for IBM Z (s390x) for supported releases. Gentoo has maintained s390x support too, though its workflow is source-based rather than binary-focused. These are the ones you can reasonably point to as officially supported by their projects or vendors. Beyond that, some distributions provide community or experimental s390x images — Alpine and certain RHEL rebuilds or downstreams may have builds contributed by their communities, and projects like Rocky or AlmaLinux occasionally have community efforts, but their s390x coverage is more hit-or-miss and varies by release. If you need production stability, stick with Debian, Fedora, SUSE/SLES, Ubuntu, RHEL, or Gentoo depending on your preferred model (binary vs source). For getting started, look for images labeled 's390x' on each distro's download or cloud image pages, and check release notes for kernel and z/VM compatibility. I'm always tickled by how resilient these platforms are on mainframe iron — it's a different vibe from desktop Linux, but super solid if you need uptime.

Is QEMU Emulation Reliable For S390x Development?

3 Answers · 2025-09-03 19:01:19
I've been using QEMU for s390x work for years, and honestly, for most development tasks it's impressively dependable. For bringing up kernels, testing initramfs changes, and iterating on system services, QEMU will save you endless time: fast cycles with snapshots, easy serial logs, and straightforward debugging with gdb. The system emulation supports the common channel-attached (CCW) devices and block/network backends well enough to boot mainstream distributions, run systemd, and validate functionality without needing iron in the room. That said, reliability depends on what you mean by "reliable." If you need functional correctness—does the kernel boot, do filesystems mount, do userspace services run—QEMU is solid. If you need hardware-accurate behavior, cycle-exact timing, or access to specialized on-chip accelerators (cryptographic units, proprietary telemetry, or mainframe-specific features), QEMU's TCG emulation will fall short. KVM on real IBM Z hardware is the path for performance parity and hardware feature exposure, but of course that requires access to real machines. My usual workflow is to iterate fast in QEMU, use lightweight reproducible images, write tests that run in that environment, then smoke-test on actual hardware before merging big changes. For everyday development it's a huge productivity boost, but I always treat the emulator as the first step, not the final authority.
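For reference, a full-system invocation I'd start from looks something like this; the qcow2 filename is a placeholder for whatever s390x guest image you have, and -enable-kvm only applies on real IBM Z hardware:

qemu-system-s390x -machine s390-ccw-virtio -cpu max -smp 2 -m 4096 -nographic \
  -drive file=debian-12-s390x.qcow2,format=qcow2,if=virtio \
  -netdev user,id=net0,hostfwd=tcp::2222-:22 -device virtio-net-ccw,netdev=net0
# on an actual s390x host with KVM, add -enable-kvm for near-native performance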

What Kernel Versions Best Support S390x Features?

3 Answers · 2025-09-03 18:48:05
When I dive into s390x support, I tend to look at two things: how mature a feature is in upstream mainline, and what enterprise distributions have backported. Historically, s390x has been part of the kernel for a long time (the s390/s390x tree matured through the 2.6 and 3.x eras), but the real message is that modern LTS kernels are where you'll find the best, most polished support for contemporary mainframe features. If you want concrete guidance: pick a modern long-term-stable kernel — think 5.10, 5.15, or 6.1 — or newer 6.x kernels if you need bleeding-edge fixes. Those LTS lines collect important fixes for KVM on s390x, DASD/CCW improvements, zfcp (Fibre Channel) robustness, zcrypt and CPACF crypto support, and paravirtual I/O enhancements. Enterprise distros (RHEL, SLES, Ubuntu LTS) often backport features into their kernel trees, so a distribution-provided LTS kernel can be the safest route for production while still giving you modern hardware support. Practically, if I’m deploying to a z15/z16 or running heavy KVM workloads, I’ll test on the latest upstream stable or a 6.x kernel to catch recently merged performance and crypto improvements, then switch to the distribution LTS that includes those backports for production. Also check kernel config options (look for s390, CCW, DASD, zcrypt-related flags) and read the s390-specific changelogs in the kernel git to verify feature flags you rely on.
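To check what a given kernel actually shipped with, I grep its config for the s390-specific options mentioned above (paths vary by distro, and /proc/config.gz only exists if the kernel exposes its own config):

grep -E 'CONFIG_(DASD|ZCRYPT|ZFCP)' /boot/config-"$(uname -r)"
# or, if the running kernel exposes its config:
zcat /proc/config.gz | grep -E 'CONFIG_(DASD|ZCRYPT|ZFCP)'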

Where Can I Find S390x Docker Images On Docker Hub?

3 Answers · 2025-09-03 08:06:24
Whenever I need s390x images I treat Docker Hub like a little scavenger hunt — it’s oddly satisfying when you find exactly the manifest you need. I’ll usually start at hub.docker.com, search for the image name (for example 'ubuntu', 'alpine', or whatever project you care about) and open the Tags view. Click a tag and look for the 'Supported architectures' section: if the repository publishes a manifest list, Docker Hub will show whether 's390x' (aka IBM Z) is included. That visual check saves a lot of time before attempting a pull. If I want to be 100% sure from the command line I run a few quick checks: docker pull --platform=linux/s390x IMAGE:TAG to try pulling the s390x variant (Docker will error if it doesn’t exist), or docker manifest inspect IMAGE:TAG | jq '.' to inspect the manifest list and see which platforms are present. For more advanced work I use docker buildx imagetools inspect IMAGE:TAG or skopeo inspect docker://IMAGE:TAG — those return the manifest and platform info reliably. If an image doesn’t include s390x you’ll either need to find a different image, look for a vendor that publishes s390x builds, or build one yourself with buildx and qemu emulation. A few practical tips from my experiments: official images like 'ubuntu', 'debian', 'alpine' and many OpenJDK variants frequently include s390x builds, but not every tag/version will. Some community or vendor images explicitly add a '-s390x' suffix in their tag names, though relying on manifest lists is safer. If you’re running on non‑Z hardware and testing, remember to enable qemu (multiarch/qemu-user-static) or use a CI with actual s390x runners. Happy hunting — once you get the hang of manifest inspection it becomes second nature and saves many wasted pulls.
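The command-line checks described above look like this in practice (ubuntu:22.04 is just an example tag, and skopeo needs --raw to show the whole manifest list):

docker manifest inspect ubuntu:22.04 | jq '.manifests[].platform'
docker buildx imagetools inspect ubuntu:22.04
skopeo inspect --raw docker://docker.io/library/ubuntu:22.04 | jq '.manifests[].platform.architecture'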