Stacker CLI v0.2.6 — The single-file deployment configuration for containerised applications.
stacker.yml is the only file you need to add to your project. Stacker reads it to auto-generate Dockerfiles, docker-compose definitions, and deploy your application locally or to cloud infrastructure.
- Quick Start
- Minimal Example
- Full Example
- Top-Level Fields
- name · version · organization
- app — Application Source
- type · path · dockerfile · image · build · ports · volumes · environment
- services — Sidecar Containers
- proxy — Reverse Proxy
- type · auto_detect · domains · config
- deploy — Deployment Target
- target · compose_file · cloud · server · registry
- ai — AI Assistant
- monitoring — Health & Metrics
- hooks — Lifecycle Scripts
- env / env_file — Environment Variables
- Environment Variable Interpolation
- Auto-Detection
- Generated Dockerfiles
- Validation Rules
- CLI Commands Reference
- Recipes
- FAQ
## Quick Start

# 1. Install stacker
curl -fsSL https://stacker.try.direct/install.sh | bash
# 2. Initialize in your project directory
cd my-project
stacker init
# 3. Review the generated config
cat stacker.yml
# 4. Deploy locally
stacker deploy --target local
# 5. Check status
stacker status

## Minimal Example

The smallest valid stacker.yml:
name: my-app
app:
  type: static
  path: ./public
deploy:
  target: local

This tells Stacker to:
- Generate an nginx-based Dockerfile serving static files from ./public
- Create a docker-compose.yml with the app service
- Deploy locally via docker compose up
## Full Example

A production-ready configuration using all available sections:
name: my-saas-app
version: "2.0"
organization: acme-corp

app:
  type: node
  path: ./src
  ports:
    - "8080:3000"
  environment:
    NODE_ENV: production
  build:
    context: .
    args:
      NODE_ENV: production

services:
  - name: postgres
    image: postgres:16
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
  - name: redis
    image: redis:7-alpine
    ports:
      - "6379:6379"
  - name: worker
    image: myapp-worker:latest
    depends_on:
      - postgres
      - redis
    environment:
      REDIS_URL: redis://redis:6379

proxy:
  type: nginx
  auto_detect: true
  domains:
    - domain: app.example.com
      ssl: auto
      upstream: app:3000
    - domain: api.example.com
      ssl: auto
      upstream: app:3000

deploy:
  target: cloud
  cloud:
    provider: hetzner
    region: fsn1
    size: cpx21
    ssh_key: ~/.ssh/id_ed25519

ai:
  enabled: true
  provider: ollama
  model: llama3
  endpoint: http://localhost:11434
  timeout: 600
  tasks:
    - dockerfile
    - troubleshoot

monitoring:
  status_panel: true
  healthcheck:
    endpoint: /health
    interval: 30s
  metrics:
    enabled: true
    telegraf: true

hooks:
  pre_build: ./scripts/pre-build.sh
  post_deploy: ./scripts/post-deploy.sh
  on_failure: ./scripts/notify-failure.sh

env_file: .env

env:
  APP_PORT: "3000"
  LOG_LEVEL: info
  NODE_ENV: production

## Top-Level Fields

### name

Required · string · Max 128 characters
The project name. Used as the docker-compose project name, container name prefix, and displayed in status output.
name: my-awesome-app

### version

Optional · string · Default: none
A version label for the configuration. Informational only — does not affect behaviour.
version: "1.0"Optional · string · Default: none
Organisation slug. Used for scoping cloud deployments and linking to your TryDirect account.
organization: acme-corp

## app — Application Source

Application source configuration. Tells Stacker what kind of app you're building and where the source code lives.
### type

Optional · enum · Default: static
The application framework/runtime. Determines which Dockerfile template is generated.
| Value | Description | Default Base Image | Default Port |
|---|---|---|---|
| static | Static HTML/CSS/JS site | nginx:alpine | 80 |
| node | Node.js application | node:20-alpine | 3000 |
| python | Python application | python:3.12-slim | 8000 |
| rust | Rust application | rust:1.77-alpine | 8080 |
| go | Go application | golang:1.22-alpine | 8080 |
| php | PHP application | php:8.3-fpm-alpine | 9000 |
| custom | User-provided Dockerfile | — | — |
app:
  type: node

Tip: If you omit type, Stacker auto-detects it from your project files. See Auto-Detection.
### path

Optional · string (path) · Default: .
Path to the application source directory, relative to the stacker.yml location.
app:
  path: ./src

### dockerfile

Optional · string (path) · Default: none
Path to a custom Dockerfile. When set, Stacker uses your Dockerfile instead of generating one. Typically paired with type: custom; when set alongside any other type, it overrides the generated template.
app:
  type: custom
  dockerfile: ./docker/Dockerfile.prod

### image

Optional · string · Default: none
Use a pre-built Docker image instead of building from source. Mutually exclusive with dockerfile and auto-generation.
app:
  type: custom
  image: ghcr.io/myorg/myapp:latest

### build

Optional · object · Default: none
Docker build configuration. Controls the build context and build arguments passed to docker build.
| Field | Type | Default | Description |
|---|---|---|---|
| context | string | . | Build context directory |
| args | map<string, string> | {} | Build arguments (--build-arg) |
app:
  type: node
  build:
    context: .
    args:
      NODE_ENV: production
      API_URL: https://api.example.com

### ports

Optional · string[] · Default: [] (auto-derived from type)
Explicit port mappings for the main app container in "host:container" format. When omitted, Stacker derives a default port from app.type (e.g. node → 3000, python → 8000).
app:
  type: node
  ports:
    - "8080:3000"
    - "9229:9229" # Node debugger

### volumes

Optional · string[] · Default: []
Volume mounts for the main app container. Supports bind mounts (./host:/container) and named volumes (name:/path).
app:
  type: node
  volumes:
    - "./uploads:/app/uploads"
    - "app_cache:/app/.cache"

### environment

Optional · map<string, string> · Default: {}
Per-app environment variables. Merged with the top-level env: section — app-level values take precedence on conflict. Supports ${VAR} interpolation.
app:
  type: node
  environment:
    NODE_ENV: production
    DATABASE_URL: postgres://app:${DB_PASSWORD}@postgres:5432/myapp

## services — Sidecar Containers

Optional · array · Default: []
Additional containers deployed alongside your main application — databases, caches, message queues, workers, etc. Each entry maps directly to a service in the generated docker-compose.yml.
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| name | string | yes | — | Service name (used as container/hostname) |
| image | string | yes | — | Docker image reference |
| ports | string[] | no | [] | Port mappings ("host:container") |
| environment | map<string, string> | no | {} | Environment variables |
| volumes | string[] | no | [] | Volume mounts ("name:/path" or "./host:/container") |
| depends_on | string[] | no | [] | Services this depends on (started first) |
services:
  - name: postgres
    image: postgres:16
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: myapp
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
  - name: redis
    image: redis:7-alpine
    ports:
      - "6379:6379"
  - name: minio
    image: minio/minio:latest
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: ${MINIO_PASSWORD}
    volumes:
      - minio-data:/data

Note: Stacker detects port conflicts across services during validation. If two services bind the same host port, you'll get a warning (W001).
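The W001 check boils down to spotting duplicate host ports across all services. A rough sketch of that logic in shell (illustrative only, not Stacker's actual implementation):

```shell
# find_port_conflicts: reads lines of "<service> <host:container>" on stdin
# and prints a W001-style warning for every duplicated host port.
# Hypothetical helper for illustration; Stacker's real check may differ.
find_port_conflicts() {
  awk '{
    split($2, p, ":")                 # p[1] = host side of the mapping
    if (seen[p[1]]) print "W001 port " p[1] ": " seen[p[1]] " and " $1
    else seen[p[1]] = $1
  }'
}
```

For example, `printf 'api 8080:3000\nworker 8080:9000\n' | find_port_conflicts` flags port 8080 as bound by both api and worker.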
## proxy — Reverse Proxy

Optional · object · Default: type: none, auto_detect: true
Reverse proxy configuration. Stacker can auto-detect a running proxy or generate configuration for one.
### type

Optional · enum · Default: none
| Value | Description |
|---|---|
| nginx | Standard Nginx reverse proxy |
| nginx-proxy-manager | Nginx Proxy Manager (NPM) with web UI |
| traefik | Traefik reverse proxy with auto-discovery |
| none | No proxy configured |
proxy:
  type: nginx

### auto_detect

Optional · bool · Default: true
When enabled, Stacker scans running Docker containers for an existing reverse proxy before deploying. If found, it connects your app to the existing proxy instead of creating a new one.
Detection checks for these container images (in priority order):
- jc21/nginx-proxy-manager or nginx-proxy-manager → nginx-proxy-manager
- traefik → traefik
- nginx → nginx
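The priority order amounts to a first-match classification over the images of running containers. A hypothetical sketch (the image list would come from something like docker ps --format '{{.Image}}'; this is not Stacker's actual code):

```shell
# classify_proxy: given a newline-separated list of container images,
# echo the detected proxy type using the documented priority order.
classify_proxy() {
  case "$1" in
    *nginx-proxy-manager*) echo nginx-proxy-manager ;;  # also matches jc21/nginx-proxy-manager
    *traefik*)             echo traefik ;;
    *nginx*)               echo nginx ;;
    *)                     echo none ;;
  esac
}
```

Note the ordering matters: nginx-proxy-manager images also contain the substring "nginx", so the more specific pattern is checked first.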
proxy:
  auto_detect: false # Don't look for existing proxies

### domains

Optional · array · Default: []
Domain routing rules. Each entry generates a proxy virtual host configuration.
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| domain | string | yes | — | Domain name (e.g. app.example.com) |
| upstream | string | yes | — | Backend address (e.g. app:3000, http://web:8080) |
| ssl | enum | no | off | SSL certificate mode |
SSL modes:
| Value | Description |
|---|---|
| auto | Automatic certificate provisioning (Let's Encrypt) |
| manual | Use manually provided certificates |
| off | No SSL (HTTP only) |
proxy:
  type: nginx
  domains:
    - domain: app.example.com
      ssl: auto
      upstream: app:3000
    - domain: api.example.com
      ssl: auto
      upstream: app:3000
    - domain: staging.example.com
      ssl: off
      upstream: app:3000

### config

Optional · string (path) · Default: none
Path to a custom proxy configuration file. When set, Stacker uses your config instead of generating one.
proxy:
  type: nginx
  config: ./nginx/custom.conf

## deploy — Deployment Target

Deployment target configuration. Controls where and how your stack is deployed.
### target

Optional · enum · Default: local
| Value | Description |
|---|---|
| local | Deploy on the local machine via docker compose |
| cloud | Provision cloud infrastructure and deploy (requires deploy.cloud) |
| server | Deploy to an existing remote server via SSH (requires deploy.server) |
deploy:
  target: local

### compose_file

Optional · string (path) · Default: none
Use a custom docker-compose file instead of the auto-generated one. Stacker will skip generation and use this file directly.
deploy:
  target: local
  compose_file: ./docker-compose.prod.yml

### cloud

Required when target: cloud · object
Cloud infrastructure provisioning settings. Stacker uses Terraform/Ansible under the hood to create servers and deploy your stack.
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| provider | enum | yes | — | Cloud provider |
| region | string | no | Provider default | Data center region |
| size | string | no | Provider default | Server size/type |
| ssh_key | string (path) | no | none | Path to SSH private key |
Supported cloud providers:
| Value | Provider | Example Regions | Example Sizes |
|---|---|---|---|
| hetzner | Hetzner Cloud | fsn1, nbg1, hel1 | cpx21, cpx31, cpx41 |
| digitalocean | DigitalOcean | nyc1, sfo3, ams3 | s-1vcpu-1gb, s-2vcpu-4gb |
| aws | Amazon Web Services | us-east-1, eu-west-1 | t3.micro, t3.small |
| linode | Linode (Akamai) | us-east, eu-west | g6-nanode-1, g6-standard-2 |
| vultr | Vultr | ewr, lhr, fra | vc2-1c-1gb, vc2-2c-4gb |
deploy:
  target: cloud
  cloud:
    provider: hetzner
    region: fsn1
    size: cpx21
    ssh_key: ~/.ssh/id_ed25519

Important: Cloud deployment requires authentication. Run stacker login first to store your TryDirect credentials.
### server

Required when target: server · object
Remote server settings for deploying to an existing machine via SSH.
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| host | string | yes | — | Server hostname or IP address |
| user | string | no | root | SSH username |
| ssh_key | string (path) | no | none | Path to SSH private key |
| port | integer | no | 22 | SSH port |
deploy:
  target: server
  server:
    host: 203.0.113.42
    user: deploy
    ssh_key: ~/.ssh/deploy_key
    port: 22

### registry

Optional · object
Docker registry credentials for pulling private images during cloud/server deployment. When provided, docker login is executed on the target server before docker compose pull.
Credentials can be specified in stacker.yml or via environment variables. Environment variables take precedence.
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| username | string | yes | — | Registry username |
| password | string | yes | — | Registry password or access token |
| server | string | no | Docker Hub | Registry server URL |
Environment variables (override stacker.yml values):
| Variable | Fallback | Description |
|---|---|---|
| STACKER_DOCKER_USERNAME | DOCKER_USERNAME | Registry username |
| STACKER_DOCKER_PASSWORD | DOCKER_PASSWORD | Registry password |
| STACKER_DOCKER_REGISTRY | DOCKER_REGISTRY | Registry server URL |
deploy:
  target: cloud
  cloud:
    provider: hetzner
    region: fsn1
    size: cpx21
  registry:
    username: "${DOCKER_USERNAME}"
    password: "${DOCKER_PASSWORD}"
    # server: "https://index.docker.io/v1/" # Docker Hub (default)

Security tip: Use environment variables or ${VAR} syntax to keep credentials out of version control.
## ai — AI Assistant

Optional · object · Default: enabled: false
AI/LLM assistant configuration. When enabled, stacker ai ask uses the configured provider to answer questions about your Dockerfile, docker-compose, and deployment.
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| enabled | bool | no | false | Enable AI features |
| provider | enum | no | openai | LLM provider |
| model | string | no | Provider default | Model name |
| api_key | string | no* | none | API key (supports ${VAR} syntax) |
| endpoint | string | no | Provider default | Custom API endpoint URL |
| timeout | integer | no | 300 | Request timeout in seconds (increase for slow models / weak hardware) |
| tasks | string[] | no | [] | Allowed AI task types |
Supported providers:
| Value | Provider | Default Endpoint | Requires API Key |
|---|---|---|---|
| openai | OpenAI | https://api.openai.com/v1 | Yes |
| anthropic | Anthropic | https://api.anthropic.com/v1 | Yes |
| ollama | Ollama (local) | http://localhost:11434 | No |
| custom | Any OpenAI-compatible API | Must specify endpoint | Varies |
Task types (used for prompt specialisation):
- dockerfile — Dockerfile optimisation and generation
- troubleshoot — Debugging deployment issues
- compose — docker-compose configuration help
- security — Security review and hardening
# Using OpenAI
ai:
  enabled: true
  provider: openai
  model: gpt-4
  api_key: ${OPENAI_API_KEY}
  tasks:
    - dockerfile
    - troubleshoot

# Using local Ollama
ai:
  enabled: true
  provider: ollama
  model: llama3
  endpoint: http://localhost:11434
  timeout: 600 # 10 minutes for large models on slower hardware

# Using a custom OpenAI-compatible API (e.g. Groq, Together AI)
ai:
  enabled: true
  provider: custom
  model: mixtral-8x7b-32768
  api_key: ${GROQ_API_KEY}
  endpoint: https://api.groq.com/openai/v1

## monitoring — Health & Metrics

Optional · object · Default: status_panel: false
Monitoring and health check configuration.
### status_panel

Optional · bool · Default: false
Enable the Stacker status panel — a web UI showing container health, resource usage, and deployment status.
monitoring:
  status_panel: true

### healthcheck

Optional · object · Default: none
Application health check settings.
| Field | Type | Default | Description |
|---|---|---|---|
| endpoint | string | /health | HTTP path to probe |
| interval | string | 30s | Time between checks |
monitoring:
  healthcheck:
    endpoint: /api/health
    interval: 15s

### metrics

Optional · object · Default: none
Metrics collection settings.
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | false | Enable metrics collection |
| telegraf | bool | false | Deploy Telegraf agent for metrics |
monitoring:
  metrics:
    enabled: true
    telegraf: true

## hooks — Lifecycle Scripts

Optional · object · Default: none
Lifecycle hook scripts. Stacker runs these at specific points during the build and deploy process.
| Field | Type | Description | When it runs |
|---|---|---|---|
| pre_build | string (path) | Script to run before Docker build | Before docker build |
| post_deploy | string (path) | Script to run after successful deployment | After docker compose up succeeds |
| on_failure | string (path) | Script to run on deployment failure | When any deploy step fails |
hooks:
  pre_build: ./scripts/pre-build.sh
  post_deploy: ./scripts/seed-database.sh
  on_failure: ./scripts/alert-team.sh

Hook scripts must be executable (chmod +x).
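Hooks are plain scripts, so they can gate the pipeline with ordinary shell logic. A minimal pre_build hook might look like this (the prepared paths are illustrative, not Stacker conventions):

```shell
#!/usr/bin/env sh
# scripts/pre-build.sh — hypothetical example: ensure directories the
# build expects exist, and record a build timestamp for traceability.
set -eu
mkdir -p ./uploads                          # directory the app bind-mounts
date -u +%Y-%m-%dT%H:%M:%SZ > ./uploads/.build-started
echo "pre-build: ok"
```

Because hooks run with their exit status checked, a non-zero exit from pre_build aborts the build before docker build starts.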
## env / env_file — Environment Variables

### env

Optional · map<string, string> · Default: {}
Inline environment variables passed to all containers. Supports ${VAR} interpolation.
env:
  APP_PORT: "3000"
  LOG_LEVEL: info
  DATABASE_URL: postgres://app:${DB_PASSWORD}@postgres:5432/myapp

### env_file

Optional · string (path) · Default: none

Path to a .env file. Loaded before the config is parsed, so variables defined here can be referenced with ${VAR} syntax anywhere in stacker.yml.

env_file: .env

Example .env:
DB_PASSWORD=s3cret
MINIO_PASSWORD=admin123
OPENAI_API_KEY=sk-...
## Environment Variable Interpolation

Any value in stacker.yml can reference environment variables using ${VAR_NAME} syntax. Variables are resolved from the process environment at parse time.
name: ${PROJECT_NAME}
app:
  type: node
services:
  - name: postgres
    image: postgres:${PG_VERSION}
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
deploy:
  target: cloud
  cloud:
    provider: ${CLOUD_PROVIDER}
ai:
  api_key: ${OPENAI_API_KEY}

Rules:
- Syntax: ${VARIABLE_NAME} (curly braces required)
- Undefined variables cause a parse error (fail-fast, no silent empty strings)
- Interpolation happens before YAML parsing
- Works in all string values including paths, URLs, and map values
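The fail-fast rule can be approximated outside Stacker, for instance to lint a config in CI before deploying. A hypothetical sketch (Stacker performs this check itself at parse time):

```shell
# check_interpolation: list every ${VAR} referenced in a file and fail
# if any referenced variable is unset in the current environment.
# Illustrative only; Stacker's own resolver may behave differently.
check_interpolation() {
  grep -o '\${[A-Za-z_][A-Za-z0-9_]*}' "$1" | sort -u | {
    status=0
    while read -r ref; do
      name=${ref#??}; name=${name%?}   # strip leading "${" and trailing "}"
      if ! printenv "$name" >/dev/null; then
        echo "parse error: undefined variable $name" >&2
        status=1
      fi
    done
    exit "$status"
  }
}
```

For example, `check_interpolation stacker.yml` exits non-zero and names each missing variable, mirroring the fail-fast behaviour described above.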
## Auto-Detection

When you run stacker init without specifying --app-type, Stacker scans your project directory and looks for these marker files:
| Files Found | Detected Type |
|---|---|
| package.json | node |
| requirements.txt, Pipfile, pyproject.toml, setup.py | python |
| Cargo.toml | rust |
| go.mod | go |
| composer.json | php |
| index.html, *.html | static |
Detection priority is top-to-bottom. If none of these files are found, it defaults to static.
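The top-to-bottom priority is equivalent to a first-match scan over the marker files. A hypothetical sketch of the same logic (not Stacker's actual implementation):

```shell
# detect_app_type: echo the app type implied by marker files in the
# current directory, using the documented top-to-bottom priority.
detect_app_type() {
  if   [ -f package.json ]; then echo node
  elif [ -f requirements.txt ] || [ -f Pipfile ] || [ -f pyproject.toml ] || [ -f setup.py ]; then echo python
  elif [ -f Cargo.toml ]; then echo rust
  elif [ -f go.mod ]; then echo go
  elif [ -f composer.json ]; then echo php
  else echo static                  # fallback; covers the index.html / *.html case
  fi
}
```

So a project containing both package.json and Cargo.toml is detected as node, because package.json sits higher in the priority list.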
## Generated Dockerfiles

When you run stacker deploy, Stacker generates a Dockerfile in .stacker/Dockerfile based on app.type. Here's what each template produces:
### static

FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 80

### node

FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

### python

FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

### rust

FROM rust:1.77-alpine
WORKDIR /app
RUN apk add --no-cache musl-dev
COPY . .
RUN cargo build --release
EXPOSE 8080
CMD ["./target/release/app"]

### go

FROM golang:1.22-alpine
WORKDIR /app
COPY go.mod ./
COPY go.sum ./
RUN go mod download
COPY . .
RUN go build -o /app/server .
EXPOSE 8080
CMD ["/app/server"]

### php

FROM php:8.3-fpm-alpine
WORKDIR /var/www/html
RUN docker-php-ext-install pdo pdo_mysql
COPY . .
EXPOSE 9000

### custom

No Dockerfile is generated. You must provide either app.dockerfile or app.image.
Customisation: To modify the generated Dockerfile, deploy once with --dry-run, edit .stacker/Dockerfile, then deploy again with --force-rebuild.
## Validation Rules

Stacker validates your configuration both syntactically (YAML structure) and semantically (cross-field logic). Run stacker config validate to check.
Errors:

| Code | Rule | Field |
|---|---|---|
| E001 | Cloud deployment requires deploy.cloud.provider | deploy.cloud.provider |
| E002 | Server deployment requires deploy.server.host | deploy.server.host |
| E003 | Custom app type requires app.image or app.dockerfile | app |
Warnings:

| Code | Rule | Field |
|---|---|---|
| W001 | Port conflict — multiple services bind the same host port | services.ports |
$ stacker config validate
Configuration issues:
- [E001] Cloud provider configuration is required for cloud deployment (deploy.cloud.provider)
- [W001] Port 8080 is used by multiple services: api, worker (services.ports)
## CLI Commands Reference

| Command | Description |
|---|---|
| stacker init | Initialize a new project — generates stacker.yml and .stacker/ directory (Dockerfile + docker-compose.yml) |
| stacker deploy | Build and deploy the stack (reuses existing .stacker/ artifacts if present) |
| stacker status | Show container status |
| stacker logs | Show container logs |
| stacker destroy | Tear down the stack |
| stacker config validate | Validate stacker.yml |
| stacker config show | Display resolved configuration |
| stacker config fix | Interactively fix missing required config fields |
| stacker login | Authenticate with TryDirect |
| stacker ai ask | Ask the AI assistant a question |
| stacker proxy add | Add a reverse-proxy domain entry |
| stacker proxy detect | Detect running reverse proxies |
| stacker ssh-key generate | Generate a Vault-backed SSH key pair for a server |
| stacker ssh-key show | Display the public SSH key for a server |
| stacker ssh-key upload | Upload an existing SSH key pair for a server |
| stacker service add | Add a service from the template catalog to stacker.yml |
| stacker service list | List available service templates (20+ built-in) |
| stacker agent health | Check Status Panel agent connectivity and health |
| stacker agent status | Display agent snapshot — containers, versions, uptime |
| stacker agent logs <app> | Retrieve container logs from the remote agent |
| stacker agent restart <app> | Restart a container via the agent |
| stacker agent deploy-app | Deploy or update an app container on the target server |
| stacker agent remove-app | Remove an app container (optional volume/image cleanup) |
| stacker agent configure-proxy | Configure Nginx Proxy Manager via the agent |
| stacker agent history | Show recent agent command execution history |
| stacker agent exec | Execute a raw agent command with JSON parameters |
| stacker update | Check for CLI updates |
stacker init flags:

| Flag | Description |
|---|---|
| --app-type <TYPE> | Application type: static, node, python, rust, go, php, custom |
| --with-proxy | Include reverse-proxy (nginx) configuration |
| --with-ai | Use AI to scan the project and generate a tailored stacker.yml |
| --ai-provider <PROVIDER> | AI provider: openai, anthropic, ollama, custom (default: ollama) |
| --ai-model <MODEL> | AI model name (e.g. gpt-4o, claude-sonnet-4-20250514, qwen2.5-coder, deepseek-r1) |
| --ai-api-key <KEY> | AI API key (or set OPENAI_API_KEY / ANTHROPIC_API_KEY env var) |
stacker init generates:
- stacker.yml — project configuration
- .stacker/Dockerfile — generated Dockerfile (skipped if app.image or app.dockerfile is set)
- .stacker/docker-compose.yml — generated compose definition (skipped if deploy.compose_file is set)
# Init
stacker init # Auto-detect project type
stacker init --app-type node --with-proxy # Explicit type + proxy
stacker init --with-ai # AI-powered generation (Ollama default)
stacker init --with-ai --ai-model qwen2.5-coder # Specify Ollama model
stacker init --with-ai --ai-provider ollama --ai-model deepseek-r1
stacker init --with-ai --ai-provider openai --ai-api-key sk-...
stacker init --with-ai --ai-provider anthropic --ai-model claude-sonnet-4-20250514
# AI init environment variables (override CLI defaults)
# STACKER_AI_PROVIDER — AI provider (openai, anthropic, ollama, custom)
# STACKER_AI_MODEL — Model name
# STACKER_AI_API_KEY — API key (generic, provider-specific vars also supported)
# STACKER_AI_ENDPOINT — Custom endpoint URL
# STACKER_AI_TIMEOUT — Request timeout in seconds (default: 300)
# OPENAI_API_KEY — OpenAI API key (used when provider is openai)
# ANTHROPIC_API_KEY — Anthropic API key (used when provider is anthropic)
STACKER_AI_TIMEOUT=900 stacker init --with-ai # 15 min timeout for slow models

# Deploy
stacker deploy --target local # Deploy locally
stacker deploy --target cloud # Deploy to cloud
stacker deploy --target local --dry-run # Generate files without deploying
stacker deploy --file custom.yml # Use a custom config file
stacker deploy --force-rebuild # Force regenerate .stacker/ artifacts
Troubleshooting: On deploy build/runtime failures, Stacker attempts AI-assisted diagnosis using your configured AI provider. If AI is unavailable, it prints fallback fix suggestions. Note: deploy reuses existing .stacker/Dockerfile and .stacker/docker-compose.yml if present (e.g. from stacker init). Use --force-rebuild to regenerate them.
# Logs
stacker logs # All services
stacker logs --service postgres # Specific service
stacker logs --follow # Stream logs
stacker logs --tail 100 # Last 100 lines
stacker logs --since 1h # Logs from the last hour
# Status
stacker status # Table format
stacker status --json # JSON output
stacker status --watch # Auto-refresh
# Destroy
stacker destroy --confirm # Required flag (safety guard)
stacker destroy --confirm --volumes # Also remove volumes
# Config
stacker config validate # Check stacker.yml
stacker config validate --file prod.yml
stacker config show # Display resolved config
# AI
stacker ai ask "How can I optimise this Dockerfile?"
stacker ai ask "Why is my container crashing?" --context ./logs.txt
# Proxy
stacker proxy add example.com --upstream http://app:3000 --ssl auto
stacker proxy detect
# Update
stacker update # Check stable channel
stacker update --channel beta # Check beta channel
# Config
stacker config fix # Interactively fix missing fields
stacker config fix --file prod.yml # Fix a specific config file

Manage Vault-backed SSH keys for your deployed servers. Keys are stored securely in HashiCorp Vault.
# Generate a new SSH key pair for a server
stacker ssh-key generate --server-id 42
# Generate and save the private key locally
stacker ssh-key generate --server-id 42 --save-to ~/.ssh/my-server.pem
# Show the public SSH key
stacker ssh-key show --server-id 42
stacker ssh-key show --server-id 42 --json # JSON output
# Upload an existing SSH key pair
stacker ssh-key upload --server-id 42 \
--public-key ~/.ssh/id_rsa.pub \
--private-key ~/.ssh/id_rsa

Add services to your stacker.yml from a built-in catalog of 20+ templates. Each template includes a production-ready image, default ports, environment variables, and volumes.
# Add a service (creates backup, checks for duplicates)
stacker service add postgres
stacker service add redis
stacker service add wordpress # auto-adds mysql dependency
# Use aliases
stacker service add wp # → wordpress + mysql
stacker service add pg # → postgres
stacker service add es # → elasticsearch
# Specify a custom stacker.yml path
stacker service add mongodb --file ./configs/stacker.yml
# List all available templates
stacker service list # offline catalog (20+ services)
stacker service list --online # also query marketplace API

Built-in services: postgres, mysql, mariadb, mongodb, redis, memcached, rabbitmq, traefik, nginx, nginx_proxy_manager, wordpress, elasticsearch, kibana, qdrant, telegraf, phpmyadmin, mailhog, minio, portainer

Aliases: wp → wordpress, pg/postgresql → postgres, my → mysql, mongo → mongodb, es → elasticsearch, mq → rabbitmq, pma → phpmyadmin, mh → mailhog, npm → nginx_proxy_manager
Manage the Status Panel agent deployed on your target server. All commands communicate through the Stacker API using a pull-based architecture — the CLI enqueues commands, the agent polls for work, executes locally, and reports results.
Every command supports:
- --json — machine-readable JSON output
- --deployment <HASH> — target a specific deployment (auto-resolved if omitted)
Deployment hash resolution order: --deployment flag → DeploymentLock (from a previous deploy) → stacker.yml project identity → API lookup.
# Health & status
stacker agent health # Check agent connectivity
stacker agent health --app nginx # Health of a specific container
stacker agent status # Agent snapshot: containers, versions, uptime
stacker agent status --json # JSON output
# Logs
stacker agent logs my-app # Fetch container logs
stacker agent logs my-app --lines 200 # Last 200 lines
stacker agent logs my-app --json # JSON output
# Container lifecycle
stacker agent restart my-app # Restart a container
stacker agent deploy-app --app my-app --image myorg/myapp --tag v2.1
stacker agent remove-app --app my-app # Remove container
stacker agent remove-app --app my-app --remove-volumes --remove-images
# Reverse proxy
stacker agent configure-proxy --app my-app --domain app.example.com --ssl
# History & raw commands
stacker agent history # Recent command history
stacker agent exec --command-type health # Raw command
stacker agent exec --command-type stacker.exec --params '{"container":"app","command":"ls -la"}'
# Target a specific deployment
stacker agent status --deployment abc123def

The AI assistant can manage the agent via built-in tools:
# AI agent control in write mode
stacker ai ask --write "check if the agent is healthy"
stacker ai ask --write "show me the logs for the nginx container"
stacker ai ask --write "deploy app my-service with image myorg/myapp:latest"
# Interactive chat
stacker ai --write
> what's the status of the agent?
> restart the postgres container

The AI assistant can also add services via the add_service tool:
# AI adds services using the template catalog
stacker ai ask --write "add wordpress and redis to my stack"
stacker ai ask --write "I need a postgres database with custom port 5433"
# Interactive chat mode
stacker ai --write
> add elasticsearch and kibana for logging

Stacker provides MCP tools for configuring iptables firewall rules on target servers. Rules can be derived from Ansible role port definitions or specified manually.
| Method | Description | When to use |
|---|---|---|
| Status Panel | Commands executed via Status Panel agent | Preferred — runs directly on target |
| SSH | Commands executed via SSH/Ansible | Fallback for servers without Status Panel |
| Type | Source | Use case |
|---|---|---|
| Public | 0.0.0.0/0 (any IP) |
HTTP, HTTPS, public APIs |
| Private | Specific CIDR | Databases, internal services |
configure_firewall — Configure iptables rules on a deployment:
{
  "deployment_hash": "abc123",
  "public_ports": [
    {"port": 80, "protocol": "tcp"},
    {"port": 443, "protocol": "tcp"}
  ],
  "private_ports": [
    {"port": 5432, "protocol": "tcp", "source": "10.0.0.0/8", "comment": "PostgreSQL"}
  ],
  "action": "add",
  "persist": true,
  "execution_method": "status_panel"
}

list_firewall_rules — List current iptables rules:
{
  "deployment_hash": "abc123"
}

configure_firewall_from_role — Auto-configure from Ansible role:
{
  "role_name": "postgres",
  "deployment_hash": "abc123",
  "action": "add",
  "private_network": "10.0.0.0/8"
}

| Action | Description |
|---|---|
| add | Add firewall rules |
| remove | Remove firewall rules |
| list | List current rules |
| flush | Remove all rules |
# Configure firewall via AI
stacker ai ask --write "open ports 80 and 443 publicly"
stacker ai ask --write "allow postgres port 5432 from internal network only"
# Interactive chat
stacker ai --write
> configure firewall to allow HTTP and HTTPS
> add private port 3306 for MySQL from 10.0.0.0/8

## Recipes

### Static site

name: my-website
app:
  type: static
  path: ./dist
deploy:
  target: local

### Node API with PostgreSQL

name: my-api
app:
  type: node
  path: .
services:
  - name: postgres
    image: postgres:16
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: api_db
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
deploy:
  target: local
env:
  DATABASE_URL: postgres://postgres:${DB_PASSWORD}@postgres:5432/api_db

### Django with Celery on Hetzner

name: django-app
app:
  type: python
  path: .
  build:
    args:
      DJANGO_SETTINGS_MODULE: myapp.settings.production
services:
  - name: redis
    image: redis:7-alpine
  - name: celery
    image: django-app:latest
    depends_on:
      - redis
    environment:
      CELERY_BROKER_URL: redis://redis:6379/0
proxy:
  type: nginx
  domains:
    - domain: myapp.example.com
      ssl: auto
      upstream: app:8000
deploy:
  target: cloud
  cloud:
    provider: hetzner
    region: fsn1
    size: cpx21
    ssh_key: ~/.ssh/id_ed25519

### Rust API on an existing server

name: rust-api
app:
  type: rust
  path: .
deploy:
  target: server
  server:
    host: api.example.com
    user: deploy
    ssh_key: ~/.ssh/deploy_key
monitoring:
  status_panel: true
  healthcheck:
    endpoint: /api/health
    interval: 15s

### WordPress with MySQL

name: wordpress-site
app:
  type: custom
  image: wordpress:6-apache
services:
  - name: mysql
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: wordpress
    volumes:
      - db-data:/var/lib/mysql
proxy:
  type: nginx
  domains:
    - domain: blog.example.com
      ssl: auto
      upstream: app:80
deploy:
  target: local

### Environment-driven configuration

name: ${APP_NAME}
version: ${APP_VERSION}
app:
  type: node
  build:
    args:
      NODE_ENV: ${NODE_ENV}
      API_URL: ${API_URL}
services:
  - name: postgres
    image: postgres:${PG_VERSION}
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
deploy:
  target: ${DEPLOY_TARGET}

Run with different environments:
# Development
APP_NAME=myapp APP_VERSION=dev NODE_ENV=development \
API_URL=http://localhost:3000 PG_VERSION=16 \
DB_PASSWORD=devpass DEPLOY_TARGET=local \
stacker deploy
# Production
APP_NAME=myapp APP_VERSION=1.2.3 NODE_ENV=production \
API_URL=https://api.example.com PG_VERSION=16 \
DB_PASSWORD=$PROD_DB_PASSWORD DEPLOY_TARGET=cloud \
stacker deploy

## FAQ

Q: Where are generated files stored?
A: In the .stacker/ directory. This includes Dockerfile, docker-compose.yml, and any proxy configuration. Add .stacker/ to your .gitignore.
Q: Can I edit the generated Dockerfile?
A: Yes. After stacker init (or stacker deploy --dry-run), edit .stacker/Dockerfile, then stacker deploy to build from your modified version. Stacker reuses existing .stacker/ files unless --force-rebuild is passed.
Q: What if I already have a Dockerfile?
A: Set app.type: custom and app.dockerfile: ./Dockerfile. Stacker will use yours instead of generating one.
Q: Do I need Docker installed?
A: Yes. Stacker requires Docker (with Compose v2) for local deployments. For cloud deployments, Docker is provisioned on the remote server automatically.
Q: How do I keep secrets out of stacker.yml?
A: Use environment variable interpolation (${SECRET_VAR}) and store actual values in .env (referenced via env_file: .env). Never commit .env to version control.
Q: Can I use Stacker with an existing docker-compose.yml?
A: Yes. Set deploy.compose_file: ./docker-compose.yml and Stacker will use it directly without generating a new one.
Q: What cloud providers are supported?
A: Hetzner, DigitalOcean, AWS, Linode, and Vultr. You must stacker login first and have the appropriate API keys configured in your TryDirect account.
After stacker init, your project will look like:
my-project/
├── stacker.yml ← Your configuration (you write this)
├── .stacker/ ← Generated artifacts (auto-created)
│ ├── Dockerfile ← Generated Dockerfile
│ └── docker-compose.yml ← Generated compose definition
├── .env ← Secrets (optional, gitignored)
├── src/ ← Your application source
└── scripts/ ← Hook scripts (optional)
├── pre-build.sh
├── post-deploy.sh
└── notify-failure.sh
Stacker CLI is part of the TryDirect platform.