Build Your Private Image Generator: Docker Model Runner & Open WebUI Step-by-Step
Overview
We've all been there: you need a few quick images for a project, so you fire up an AI image service—and suddenly you're wondering where your prompts go, how many credits you have left, or why that "safe content" filter rejected your perfectly reasonable request for a dragon wearing a business suit. What if you could skip all that and run the whole thing on your own machine, with a slick chat interface on top?

That's exactly what Docker Model Runner now makes possible. With just a couple of commands, you can pull an image-generation model, connect it to Open WebUI, and start generating images right from a chat interface—fully local, fully private, fully yours.
Let's build it. Your own private DALL-E, no cloud subscription required.
What You'll Need
Before diving in, make sure you have the following:
- Docker Desktop (macOS) or Docker Engine (Linux) installed and running.
- At least 8 GB of free RAM for a small model (more RAM is better).
- A GPU is optional but highly recommended: NVIDIA (CUDA), Apple Silicon (MPS), or CPU fallback.
- Confirm your setup by running docker model version. If the command returns without errors, you're good to go.
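If you'd rather script this preflight check (for example, in a setup script for your team), here is a small sketch. It only assumes that the docker CLI with the model subcommand is on your PATH, as described above:

```python
import subprocess

def cli_ok(cmd):
    """Return True if the given command exits successfully, False otherwise
    (including when the binary is missing entirely)."""
    try:
        return subprocess.run(cmd, capture_output=True, check=False).returncode == 0
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    # Assumes Docker Desktop / Engine with the model-runner plugin installed.
    if cli_ok(["docker", "model", "version"]):
        print("Docker Model Runner is ready.")
    else:
        print("Docker Model Runner not found; check your Docker installation.")
```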
Step-by-Step Setup
Step 1: Pull an Image Generation Model
Docker Model Runner uses a compact packaging format called DDUF (Diffusers Unified Format) to distribute image generation models through Docker Hub—just like any other OCI artifact.
Pull a model to get started:
docker model pull stable-diffusion
You can confirm it's ready by inspecting the model:
docker model inspect stable-diffusion
{
  "id": "sha256:5f60862074a4c585126288d08555e5ad9ef65044bf490ff3a64855fc84d06823",
  "tags": [
    "docker.io/ai/stable-diffusion:latest"
  ],
  "created": 1768470632,
  "config": {
    "format": "diffusers",
    "architecture": "diffusers",
    "size": "6.94GB",
    "diffusers": {
      "dduf_file": "stable-diffusion-xl-base-1.0-FP16.dduf",
      "layout": "dduf"
    }
  }
}
What's happening under the hood? The model is stored locally as a DDUF file—a single-file format that bundles all the components of a diffusion model (text encoder, VAE, UNet/DiT, scheduler config) into one portable artifact. Docker Model Runner knows how to unpack it at runtime.
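DDUF is a ZIP-based container, so if you're curious, you can peek inside an archive with nothing but the standard library. A sketch, assuming you have a .dduf file on disk (the exact location of Docker Model Runner's local model store is not assumed here):

```python
import json
import zipfile

def list_dduf_components(path):
    """Return the entry names bundled in a DDUF archive.

    DDUF is a ZIP-based container, so zipfile can read it directly.
    """
    with zipfile.ZipFile(path) as zf:
        return zf.namelist()

def read_model_index(path):
    """Read model_index.json, the manifest that maps out the pipeline
    components (text encoder, VAE, UNet, scheduler, ...)."""
    with zipfile.ZipFile(path) as zf:
        with zf.open("model_index.json") as f:
            return json.load(f)
```

Pointing list_dduf_components at the stable-diffusion-xl-base-1.0-FP16.dduf file from the inspect output above would list the bundled component folders.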
Step 2: Launch Open WebUI
Here comes the magic: Docker Model Runner has a built-in launch command that knows exactly how to wire up Open WebUI against your local inference endpoint.
docker model launch openwebui
That's it. Behind the scenes, this command:
- Starts a local inference server using the pulled model (with GPU acceleration if available).
- Exposes a 100% OpenAI-compatible API, including the POST /v1/images/generations endpoint.
- Launches Open WebUI pre-configured to talk to that endpoint, so you can start generating images instantly from the chat interface.
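Because the API is OpenAI-compatible, you can also drive it from scripts instead of the UI. Here is a minimal standard-library sketch; the base URL and port are assumptions (check the output of docker model launch for the actual endpoint), and the response shape follows the OpenAI images API convention of base64-encoded images under data[].b64_json:

```python
import base64
import json
import urllib.request

# Assumed endpoint; confirm the host/port that `docker model launch` reports.
BASE_URL = "http://localhost:8080/v1"

def generate_image(prompt, model="stable-diffusion"):
    """POST a prompt to the OpenAI-compatible /v1/images/generations endpoint."""
    req = urllib.request.Request(
        f"{BASE_URL}/images/generations",
        data=json.dumps({"model": model, "prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def save_images(response, prefix="image"):
    """Decode each base64-encoded image in the response and write it to disk."""
    paths = []
    for i, item in enumerate(response.get("data", [])):
        path = f"{prefix}_{i}.png"
        with open(path, "wb") as f:
            f.write(base64.b64decode(item["b64_json"]))
        paths.append(path)
    return paths

if __name__ == "__main__":
    result = generate_image(
        "A dragon wearing a business suit, sitting at a boardroom table"
    )
    print(save_images(result))
```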
Step 3: Generate Your First Image
Once Open WebUI is running, open your browser to the provided URL (usually http://localhost:8080). You'll see a familiar chat interface. Type a prompt like:

"A dragon wearing a business suit, sitting at a boardroom table, photorealistic, cinematic lighting"
Press enter, and within seconds your image appears—no credits, filters, or privacy concerns. You can iterate freely, refine prompts, or generate variations. All data stays on your machine.
Common Pitfalls and How to Avoid Them
- Not enough RAM: Docker Model Runner requires at least 8 GB for small models like Stable Diffusion XL. If you see out-of-memory errors or crashes, close other applications or increase your Docker memory limit in settings.
- GPU not detected: If your GPU isn't used, check that Docker Desktop has access to it (e.g., enable NVIDIA Container Toolkit or set Docker to use Apple Silicon). Fall back to CPU but expect slower generation.
- Model not pulled: Make sure you ran docker model pull stable-diffusion before launching Open WebUI. The launch command expects the model to be present.
- Port conflict: If port 8080 is already in use, you can change it by setting environment variables or modifying the launch command (check the documentation).
- Outdated Docker version: Docker Model Runner is a relatively new feature. Ensure you're running Docker Desktop 4.27+ or Docker Engine 24+ with the model-runner plugin installed.
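For the port-conflict pitfall in particular, you can check whether anything is already listening on 8080 before launching. A small self-contained sketch:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on a successful connection, i.e. the port is taken.
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    if port_in_use(8080):
        print("Port 8080 is taken; pick another port before launching Open WebUI.")
    else:
        print("Port 8080 looks free.")
```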
Summary
You now have a fully private, local image generation setup running on your own machine—no internet required, no data leaving your computer, and no recurring fees. With Docker Model Runner handling the heavy lifting and Open WebUI providing a clean chat interface, you can create images as easily as sending a message. This approach puts the power of modern diffusion models in your hands, with complete control and privacy. Now go ahead—your dragon in a business suit awaits.