Warning
This repository has been archived. The Helm chart has been moved to the official deco Studio repository.
The chart is now maintained at deploy/helm/ and published to ghcr.io/decocms/chart-deco-studio.
This chart provides a complete and parameterizable solution for deploying the application with support for persistence, authentication, autoscaling, and much more.
- Overview
- Prerequisites
- Quick Start
- Installation
- Configuration
- Chart Structure
- Templates and Functionality
- Configurable Values
- Usage Examples
- Maintenance and Updates
This Helm chart encapsulates all Kubernetes resources necessary to run the application:
- Deployment: Main application with security configurations
- Service: Internal application exposure
- ConfigMap: Non-sensitive configurations
- Secret: Sensitive data (authentication)
- PersistentVolumeClaim: Persistent storage for database
- ServiceAccount: Service account for the pod
- HorizontalPodAutoscaler: Automatic autoscaling (optional)
- ✅ Parameterizable: All configuration via `values.yaml`
- ✅ Reusable: Deploy to multiple environments with different values
- ✅ Flexible: Support for additional volumes, tolerations, affinity
- ✅ Observable: Health checks, standardized labels
- ✅ Scalable: Optional HPA for autoscaling
- Kubernetes 1.32+
- Helm 3.0+
- `kubectl` configured to access the cluster
- StorageClass configured (for PVC)
The simplest way to get the application up and running on k8s:
# 1. Generate a secure secret for authentication
SECRET=$(openssl rand -base64 32)
# 2. Install the chart with the generated secret
helm install deco-studio . \
--namespace deco-studio \
--create-namespace \
--set secret.BETTER_AUTH_SECRET="$SECRET"
# 3. Wait for pods to be ready
kubectl wait --for=condition=ready pod \
-l app.kubernetes.io/instance=deco-studio \
-n deco-studio \
--timeout=300s
# 4. Access via port-forward
kubectl port-forward svc/deco-studio 8080:80 -n deco-studio

The application will be available at http://localhost:8080.
⚠️ Important for Production: This configuration uses SQLite and is suitable only for development/testing. For production environments, configure:
- PostgreSQL as the database engine (`database.engine: postgresql`)
- Autoscaling enabled (`autoscaling.enabled: true`) with appropriate values
- Distributed persistence (`persistence.distributed: true`) or PostgreSQL, to allow multiple replicas

See the Configuration section for more details on production configuration.
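Combining these requirements, a production-oriented values file could look like the following sketch (hostnames, credentials, and thresholds are illustrative):

```yaml
database:
  engine: postgresql
  url: "postgresql://studio_user:studio_password@studio.example.com:5432/studio_db"
autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 6
  targetMemoryUtilizationPercentage: 80
persistence:
  enabled: false # external PostgreSQL removes the need for shared storage
```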
# Preparing necessary parameters
Adjust values.yaml with desired configurations to run in your environment
# Install with default values
helm install deco-studio . --namespace deco-studio --create-namespace
# Install with custom values
helm install deco-studio . -f my-values.yaml -n deco-studio --create-namespace

# View release status
helm status deco-studio -n deco-studio
# View created resources
kubectl get all -l app.kubernetes.io/instance=deco-studio -n deco-studio
# View logs
kubectl logs -l app.kubernetes.io/instance=deco-studio -n deco-studio

To keep sensitive values out of your values.yaml file (useful for GitOps workflows like ArgoCD), you can create Secrets manually and reference them in your values file.
Edit examples/secrets-example.yaml with your actual values and apply it:
# Edit the file with your real values
# Then apply the Secrets
kubectl apply -f examples/secrets-example.yaml -n deco-studio

The Secrets file contains:

- Main Secret (`deco-studio-secrets`): Contains `BETTER_AUTH_SECRET` and `DATABASE_URL`
- Auth Config Secret (`deco-studio-auth-secrets`): Contains OAuth client IDs/secrets and API keys
In your values.yaml or values-custom.yaml, configure:
secret:
# Reference the existing Secret created manually
secretName: "deco-studio-secrets"
# Reference the authConfig Secret
authConfigSecretName: "deco-studio-auth-secrets"
database:
engine: postgresql
# Leave url empty when using Secret - value comes from DATABASE_URL in Secret
url: ""
configMap:
authConfig:
socialProviders:
google:
# Leave empty when using Secret - values come from Secret
clientId: ""
clientSecret: ""
github:
clientId: ""
clientSecret: ""
emailProviders:
- id: "resend-primary"
provider: "resend"
config:
apiKey: "" # Leave empty when using Secret
      fromEmail: "noreply@decocms.com"

# Install with values that reference external Secrets
helm install deco-studio . -f values-custom.yaml -n deco-studio --create-namespace
# Or upgrade existing release
helm upgrade deco-studio . -f values-custom.yaml -n deco-studio

Note: When secret.secretName is defined, the chart will use the existing Secret instead of creating a new one. Values defined in values.yaml take precedence over Secret values (for backward compatibility).
helm uninstall deco-studio -n deco-studio

The main configurable values are in values.yaml.
Main sections:
| Parameter | Description | Default |
|---|---|---|
| `replicaCount` | Number of replicas | `3` |
| `image.repository` | Image repository | `ghcr.io/decocms/studio/studio` |
| `image.tag` | Image tag | `latest` |
| `service.type` | Service type | `ClusterIP` |
| `persistence.enabled` | Enable PVC | `true` |
| `persistence.distributed` | PVC supports ReadWriteMany | `true` |
| `persistence.accessMode` | PVC access mode | `ReadWriteMany` |
| `persistence.storageClass` | PVC StorageClass | `efs` |
| `autoscaling.enabled` | Enable HPA | `false` |
| `database.engine` | Database (sqlite/postgresql) | `sqlite` |
| `database.url` | Database URL when PostgreSQL | `""` |
| `database.caCert` | CA certificate for SSL validation (managed databases) | `""` |
Create a custom-values.yaml file:
replicaCount: 2
image:
tag: "v1.2.3" # Example
service:
type: LoadBalancer
port: 80
database:
engine: postgresql
url: "postgresql://studio_user:studio_password@studio.example.com:5432/studio_db"
caCert: |
-----BEGIN CERTIFICATE-----
aaaaaaaabbbbbbcccccccccddddddd
aaaaaaaabbbbbbcccccccccddddddd
aaaaaaaabbbbbbcccccccccddddddd
aaaaaaaabbbbbbcccccccccddddddd
aaaaaaaabbbbbbcccccccccddddddd
aaaaaaaabbbbbbcccccccccddddddd
aaaaaaaabbbbbbcccccccccddddddd
aaaaaaaabbbbbbcccccccccddddddd
-----END CERTIFICATE-----
resources:
requests:
memory: "300Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
persistence:
size: 10Gi
  storageClass: "gp3"

Install with custom values:
helm install deco-studio . -f custom-values.yaml -n deco-studio --create-namespace

chart-deco-studio/
├── Chart.yaml # Chart metadata
├── values.yaml # Default values
├── templates/ # Kubernetes templates
│ ├── _helpers.tpl # Helper functions
│ ├── deployment.yaml # Application deployment
│ ├── service.yaml # Service
│ ├── configmap.yaml # Main ConfigMap
│ ├── configmap-auth.yaml # Authentication ConfigMap
│ ├── secret.yaml # Secret
│ ├── pvc.yaml # PersistentVolumeClaim
│ ├── serviceaccount.yaml # ServiceAccount
│ ├── hpa.yaml # HorizontalPodAutoscaler
│ └── NOTES.txt # Post-installation messages
└── README.md # This file
This file defines reusable functions used in all templates:
Returns the base chart name:
{{- define "chart-deco-studio.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

- Uses `nameOverride` if defined, otherwise uses `Chart.Name`
- Truncates to 63 characters (Kubernetes limit)
Returns the full resource name:
{{- define "chart-deco-studio.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}

- Important: Uses only `Release.Name`, ignoring the chart name
- If you install with `helm install deco-studio`, all resources will have the name `deco-studio`
Example:
helm install deco-studio . -n deco-studio --create-namespace

- Deployment: `deco-studio`
- Service: `deco-studio`
- ConfigMap: `deco-studio-config`
- Secret: `deco-studio-secrets`
- PVC: `deco-studio-data`
Generates standardized labels:
helm.sh/chart: chart-deco-studio-0.1.0
app.kubernetes.io/name: chart-deco-studio
app.kubernetes.io/instance: deco-studio
app.kubernetes.io/version: latest
app.kubernetes.io/managed-by: Helm

Labels used for selection (selectors):

app.kubernetes.io/name: chart-deco-studio
app.kubernetes.io/instance: deco-studio

{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}

- If HPA is enabled, does not define `replicas` (the HPA controls it)
strategy:
  type: {{ include "chart-deco-studio.deploymentStrategy" . }}

The chart automatically detects the appropriate deployment strategy:
- RollingUpdate: used when PostgreSQL OR distributed storage (ReadWriteMany)
- Recreate: used by default for SQLite with ReadWriteOnce
You can override by explicitly defining strategy.type in values.yaml.
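For example, a rolling update could be forced with a values.yaml fragment like this sketch (the `rollingUpdate` fields are the standard Kubernetes deployment strategy parameters):

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0
```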
env:
- name: NODE_ENV
valueFrom:
configMapKeyRef:
name: {{ include "chart-deco-studio.fullname" . }}-config
key: NODE_ENV
- name: DATABASE_URL
{{- if eq (lower (default "sqlite" .Values.database.engine)) "postgresql" }}
valueFrom:
secretKeyRef:
name: {{ include "chart-deco-studio.fullname" . }}-secrets
key: DATABASE_URL
{{- else }}
valueFrom:
configMapKeyRef:
name: {{ include "chart-deco-studio.fullname" . }}-config
key: DATABASE_URL
{{- end }}

- References the ConfigMap dynamically using `fullname`
- Security: `DATABASE_URL` uses `secretKeyRef` when PostgreSQL (sensitive credentials) or `configMapKeyRef` when SQLite (just a file path)
{{- if .Values.persistence.enabled }}
- name: data
persistentVolumeClaim:
claimName: {{ include "chart-deco-studio.fullname" . }}-data
{{- else }}
- name: data
emptyDir: {}
{{- end }}

- If `persistence.enabled: false`, uses `emptyDir` (temporary data)
{{- with .Values.volumes }}
{{- toYaml . | nindent 8 }}
{{- end }}

- Allows adding custom volumes via `values.yaml`
{{- with .Values.lifecycle }}
lifecycle:
{{- toYaml . | nindent 12 }}
{{- end }}

- Conditionally renders lifecycle hooks (preStop, postStart) if defined in `values.yaml`
- Optional `terminationGracePeriodSeconds` for graceful shutdown (e.g. with PostgreSQL, to avoid connection resets during deploys)
- Useful for graceful shutdowns, cleanup tasks, or initialization scripts
- Supports both `exec` (command execution) and `httpGet` (HTTP requests)
{{- if and .Values.topologySpreadConstraints (gt (len .Values.topologySpreadConstraints) 0) }}
topologySpreadConstraints:
{{- range .Values.topologySpreadConstraints }}
- maxSkew: {{ .maxSkew }}
topologyKey: {{ .topologyKey }}
whenUnsatisfiable: {{ .whenUnsatisfiable }}
labelSelector:
{{- toYaml .labelSelector | nindent 12 }}
{{- end }}
{{- end }}

- Distributes pods evenly across availability zones
- Important: `labelSelector` is required when configured
selector:
  {{- include "chart-deco-studio.selectorLabels" . | nindent 4 }}

- Uses `selectorLabels` to connect to the Deployment
{{- if .Values.service.sessionAffinity }}
sessionAffinity: {{ .Values.service.sessionAffinity }}
{{- end }}

- Renders only if `sessionAffinity` is defined
- By default: no session affinity (requests are distributed among all pods)
- If configured: `sessionAffinity: ClientIP` ensures requests from the same IP are directed to the same pod
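Sticky sessions could therefore be enabled with a values fragment like this sketch:

```yaml
service:
  type: ClusterIP
  port: 80
  sessionAffinity: ClientIP
```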
data:
NODE_ENV: {{ .Values.configMap.meshConfig.NODE_ENV | quote }}
PORT: {{ .Values.configMap.meshConfig.PORT | quote }}
{{- if ne (lower (default "sqlite" .Values.database.engine)) "postgresql" }}
DATABASE_URL: {{ include "chart-deco-studio.databaseUrl" . | trim | quote }}
{{- end }}

- `| quote` ensures values are valid strings in YAML
- Security: `DATABASE_URL` only goes in the ConfigMap when SQLite (a file path, not sensitive)
- When PostgreSQL, `DATABASE_URL` goes in the Secret (contains credentials)
auth-config.json: |
{
"emailAndPassword": {
"enabled": {{ .Values.configMap.authConfig.emailAndPassword.enabled }}
}
}

- Generates JSON from YAML values
- Mounted as a file in the pod
{{- if not .Values.secret.secretName }}
apiVersion: v1
kind: Secret
...
stringData:
BETTER_AUTH_SECRET: {{ .Values.secret.BETTER_AUTH_SECRET | quote }}
{{- end }}

The chart supports two secret management scenarios:

1. Create new Secret (default):
   - If `secret.secretName` is empty or undefined, creates a new Secret
   - Uses `stringData` (Helm automatically encodes to base64)
   - Secret name: `{{ release-name }}-secrets`

2. Use existing Secret:
   - If `secret.secretName` is defined, does not create a new Secret
   - The Deployment references the existing Secret specified in `secretName`
   - Useful for secrets managed by External Secrets Operator, etc.

Logic summary:

- If `secret.secretName` is empty/undefined → creates a new Secret
- If `secret.secretName` is defined → does not create a Secret, only references the existing one
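A Secret created manually for the second scenario might look like this sketch (the name and values are illustrative; only the keys matter to the chart):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-existing-secret
  namespace: deco-studio
type: Opaque
stringData:
  BETTER_AUTH_SECRET: "a-secure-random-string-at-least-32-chars"
  # Required only when database.engine=postgresql
  DATABASE_URL: "postgresql://user:pass@host:5432/db"
```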
{{- if .Values.persistence.enabled -}}
{{- if not .Values.persistence.claimName }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "chart-deco-studio.fullname" . }}-data
spec:
accessModes:
- {{ .Values.persistence.accessMode }}
resources:
requests:
storage: {{ .Values.persistence.size }}
{{- end }}
{{- end }}

The chart supports three persistence scenarios:
1. Create new PVC (default):

   persistence:
     enabled: true
     claimName: "" # or omit

   - Creates a new PVC named `{{ release-name }}-data`
   - Uses the parameters defined in `persistence` (size, storageClass, accessMode)

2. Use existing PVC:

   persistence:
     enabled: true
     claimName: "my-existing-pvc"

   - Does not create a new PVC
   - References the existing PVC specified in `claimName`
   - Useful for reusing data from previous installations or manually created PVCs

3. No persistence:

   persistence:
     enabled: false

   - Does not create a PVC
   - The Deployment uses `emptyDir` (temporary data, lost on pod restart)

Logic summary:

- If `persistence.enabled: false` → no PVC (uses `emptyDir`)
- If `persistence.enabled: true` and `persistence.claimName` is empty/undefined → creates a new PVC
- If `persistence.enabled: true` and `persistence.claimName` is defined → does not create a PVC, only references the existing one
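For the existing-PVC scenario, a manually created claim could look like this sketch (name, size, and StorageClass are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-existing-pvc
  namespace: deco-studio
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs
  resources:
    requests:
      storage: 10Gi
```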
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
...
{{- end }}

- Creates the ServiceAccount only if `serviceAccount.create: true`
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
...
{{- end }}

- Creates the HPA only if `autoscaling.enabled: true`
- When enabled, removes `replicas` from the Deployment
Displays instructions after install/upgrade:
{{- if contains "ClusterIP" .Values.service.type }}
echo "To access the application, run:"
echo " kubectl port-forward svc/$SERVICE_NAME 8080:80"
{{- end }}

- Different messages based on Service type
image:
repository: ghcr.io/decocms/studio/studio
pullPolicy: Always # Always, IfNotPresent, Never
  tag: "latest" # Overrides Chart.AppVersion if defined

replicaCount: 3 # Ignored if autoscaling.enabled: true
strategy:
# type: "" # Leave empty for auto-detection:
# - RollingUpdate: if database.engine=postgresql OR persistence.distributed=true OR accessMode=ReadWriteMany
# - Recreate: if SQLite with ReadWriteOnce (default)
# rollingUpdate:
# maxSurge: 1
# maxUnavailable: 0

Strategy Auto-detection: If strategy.type is empty or undefined, the chart automatically detects the appropriate strategy:

- RollingUpdate: used when `database.engine=postgresql` OR `persistence.distributed=true` OR `accessMode=ReadWriteMany`
- Recreate: used by default for SQLite with ReadWriteOnce (when only one pod can mount the volume)

You can also explicitly set strategy.type: "RollingUpdate" or "Recreate" if you want to override auto-detection.

- `replicaCount > 1` is only allowed when you have distributed storage (`persistence.distributed: true` or `accessMode: ReadWriteMany`) or are using PostgreSQL (`database.engine: postgresql`).
- `autoscaling.enabled: true` requires the same condition (distributed storage or PostgreSQL).
- If these requirements are not met, keep `replicaCount: 1` and adjust capacity via vertical scaling (CPU/RAM).
service:
type: ClusterIP # ClusterIP, NodePort, LoadBalancer
port: 80
targetPort: 3000

resources:
requests:
memory: "300Mi"
cpu: "250m"
limits:
memory: "600Mi"
cpu: "500m"

livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 4

You can configure lifecycle hooks (preStop, postStart) to execute commands during pod lifecycle events. This is useful for graceful shutdowns, cleanup tasks, or initialization scripts.
# Optional lifecycle hooks (preStop, postStart)
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
    - "wget -q -O - http://localhost:9229/prestop-hook || true"

Common use cases:
- preStop: Graceful shutdown, draining connections, cleanup tasks
- postStart: Initialization scripts, warm-up tasks
Important notes:
- The `lifecycle` configuration is optional - if not defined, no lifecycle hooks are added
- The `preStop` hook runs before the container is terminated, allowing for graceful shutdowns
- The `postStart` hook runs immediately after the container starts
- Both hooks support `exec` (command execution) or `httpGet` (HTTP requests)
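Since `httpGet` is also supported, the exec-based example above could equivalently be written as the following sketch (the path and port are illustrative):

```yaml
lifecycle:
  preStop:
    httpGet:
      path: /prestop-hook
      port: 9229
```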
terminationGracePeriodSeconds (optional): Time in seconds the pod has to terminate gracefully after receiving SIGTERM (e.g. during a rolling update). When using PostgreSQL, set this (e.g. 60) together with a preStop hook so the app can drain database connections and avoid "Connection reset by peer" in the DB logs. If not set, the default is 30 seconds.
terminationGracePeriodSeconds: 60

Example with preStop hook for graceful shutdown:
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
    - "wget -q -O - http://localhost:9229/prestop-hook || true"

persistence:
enabled: true
storageClass: "efs" # "" uses default
accessMode: ReadWriteMany
size: 10Gi
claimName: "" # If defined, uses existing PVC
distributed: true # Mark true if PVC offers ReadWriteMany
**Important**: mark `distributed: true` or change `accessMode` to `ReadWriteMany` when using distributed storage (EFS, NFS, CephFS, etc.). Without this, the chart will block multiple replicas and autoscaling.

database:
  engine: sqlite # sqlite | postgresql
  url: "" # Required when engine=postgresql

- `sqlite`: uses the local file `/app/data/mesh.db` (suitable for one replica).
- `postgresql`: requires `database.url` (e.g., `postgresql://user:pass@host:5432/db`) and does not require shared storage to scale horizontally.
Security: DATABASE_URL is stored securely:
- SQLite: goes in ConfigMap (just a file path, not sensitive)
- PostgreSQL: goes in Secret (contains sensitive credentials like user and password)
The Deployment automatically references the correct location (configMapKeyRef for SQLite or secretKeyRef for PostgreSQL) based on database.engine.
When connecting to managed databases (such as AWS RDS, Google Cloud SQL, Azure Database, etc.), it is common for the server to use self-signed SSL certificates or provider-specific Certificate Authority (CA) certificates. To ensure secure and validated connections, you can configure the provider's CA certificate.
When to use:
- Connecting to managed databases that require SSL certificate validation
- Providers like AWS RDS, Google Cloud SQL, Azure Database, DigitalOcean Managed Databases, etc.
- When you receive errors like `SELF_SIGNED_CERT_IN_CHAIN` or `UNABLE_TO_VERIFY_LEAF_SIGNATURE`
Configuration:
database:
engine: postgresql
url: "postgresql://user:password@host:5432/dbname?sslmode=verify-ca"
caCert: |
-----BEGIN CERTIFICATE-----
MIID/jCCAuagAwIBAgIQdOCSuA9psBpQd8EI368/0DANBgkqhkiG9w0BAQsFADCB
... (complete CA certificate content)
-----END CERTIFICATE-----
configMap:
meshConfig:
DATABASE_PG_SSL: "true"
    NODE_EXTRA_CA_CERTS: "/etc/ssl/certs/ca-cert.pem" # Path where certificate will be mounted

How to obtain the CA certificate:

- AWS RDS: Download the certificate bundle for the desired region:

  curl -o sa-east-1-bundle.pem https://truststore.pki.rds.amazonaws.com/sa-east-1/sa-east-1-bundle.pem

  Available URLs: `https://truststore.pki.rds.amazonaws.com/{region}/{region}-bundle.pem`

- Other providers (Google Cloud SQL, Azure Database, DigitalOcean, etc.): Consult your managed database provider's documentation to obtain the appropriate CA certificate.
How it works:
- The CA certificate is defined in `database.caCert` (complete certificate content)
- Helm creates a ConfigMap with the certificate
- The certificate is mounted in the pod at `/etc/ssl/certs/ca-cert.pem`
- The `NODE_EXTRA_CA_CERTS` variable points to the mounted certificate
- Node.js uses the certificate to validate the SSL connection with the database
Important notes:
- If `caCert` is not provided, the ConfigMap and volume are not created for it
- The `NODE_EXTRA_CA_CERTS` variable is only added if `caCert` is defined
- This configuration is optional if you are running a self-managed PostgreSQL
Complete example for AWS RDS:
database:
engine: postgresql
url: "postgresql://postgres:password@rds-instance.region.rds.amazonaws.com:5432/dbname?sslmode=verify-ca"
caCert: |
-----BEGIN CERTIFICATE-----
MIID/jCCAuagAwIBAgIQdOCSuA9psBpQd8EI368/0DANBgkqhkiG9w0BAQsFADCB
lzELMAkGA1UEBhMCVVMxIjAgBgNVBAoMGUFtYXpvbiBXZWIgU2VydmljZXMsIElu
... (complete bundle content)
-----END CERTIFICATE-----
configMap:
meshConfig:
DATABASE_PG_SSL: "true"
    NODE_EXTRA_CA_CERTS: "/etc/ssl/certs/ca-cert.pem"

autoscaling:
enabled: false
minReplicas: 3
maxReplicas: 6
# targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 80

Important: enable autoscaling only if persistence.distributed: true (or accessMode: ReadWriteMany) or if using PostgreSQL (database.engine: postgresql). Otherwise, the chart will fail during rendering.
configMap:
meshConfig:
NODE_ENV: "production"
PORT: "3000"
HOST: "0.0.0.0"
BETTER_AUTH_URL: "http://localhost:8080"
BASE_URL: "http://localhost:8080"
# DATABASE_URL is automatically filled from database.engine/url
authConfig:
emailAndPassword:
enabled: true
socialProviders:
google:
clientId: "your-google-client-id.apps.googleusercontent.com"
clientSecret: "your-google-client-secret"
github:
clientId: "your-github-client-id"
clientSecret: "your-github-client-secret"
saml:
enabled: false
providers: []
emailProviders:
- id: "resend-primary"
provider: "resend"
config:
apiKey: "your-resend-api-key"
fromEmail: "noreply@example.com"
inviteEmailProviderId: "resend-primary"
magicLinkConfig:
enabled: true
    emailProviderId: "resend-primary"

The chart supports three secret management scenarios:
1. Create new Secret (default):

   secret:
     secretName: "" # or omit
     BETTER_AUTH_SECRET: "change-this-to-a-secure-random-string-at-least-32-chars"

   - Creates a new Secret named `{{ release-name }}-secrets`
   - Uses the values defined in `secret` (`BETTER_AUTH_SECRET` and, optionally, `DATABASE_URL` for PostgreSQL)

2. Use existing Secret:

   secret:
     secretName: "my-existing-secret" # Name of a secret that already exists in the cluster
     # BETTER_AUTH_SECRET not required when using an existing secret

   - Does not create a new Secret
   - References the existing Secret specified in `secretName`
   - The existing Secret must contain the necessary keys:
     - `BETTER_AUTH_SECRET` (required)
     - `DATABASE_URL` (required only if `database.engine=postgresql`)
   - Useful for secrets managed by External Secrets Operator, Sealed Secrets, or other systems

3. No Secret (not supported):

   - A Secret is always required for `BETTER_AUTH_SECRET`; generate a secure value with:

   openssl rand -base64 32

Logic summary:

- If `secret.secretName` is empty/undefined → creates a new Secret
- If `secret.secretName` is defined → does not create a Secret, only references the existing one
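When using External Secrets Operator, the referenced Secret could be produced by an ExternalSecret resource along these lines (a sketch; the SecretStore name and remote keys are hypothetical):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: deco-studio-external
  namespace: deco-studio
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: my-secret-store # hypothetical SecretStore
    kind: SecretStore
  target:
    name: my-existing-secret # set this in secret.secretName
  data:
    - secretKey: BETTER_AUTH_SECRET
      remoteRef:
        key: deco-studio/better-auth-secret
```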
podSecurityContext:
fsGroup: 1001
fsGroupChangePolicy: "OnRootMismatch"
securityContext:
runAsNonRoot: true
runAsUser: 1001
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
  readOnlyRootFilesystem: false

nodeSelector:
kubernetes.io/arch: amd64
tolerations: []
# - key: "env"
# operator: "Equal"
# value: "dev"
# effect: "NoSchedule"
affinity: {}
# - podAntiAffinity:
# preferredDuringSchedulingIgnoredDuringExecution:
# - weight: 100
# podAffinityTerm:
# labelSelector:
# matchLabels:
# app.kubernetes.io/name: chart-deco-studio
# topologyKey: kubernetes.io/hostname

# Topology Spread Constraints (optional - leave empty [] to disable)
# IMPORTANT: labelSelector is required when topologySpreadConstraints is configured
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/name: chart-deco-studio
      app.kubernetes.io/instance: deco-studio

Important: The labelSelector is required when topologySpreadConstraints is configured. This ensures pods are distributed evenly across availability zones, improving the application's availability.
volumes: []
# - name: extra-config
# configMap:
# name: my-config
volumeMounts: []
# - name: extra-config
# mountPath: "/etc/config"
# readOnly: true

You can add extra containers to the Pod (such as sidecars, proxies, etc.) without removing the default application container.
The chart always keeps the main container and concatenates what is defined in extraContainers:
extraContainers: []
# - name: cloudsql-proxy
# image: gcr.io/cloudsql-docker/gce-proxy:1.33.1
# args:
# - "/cloud_sql_proxy"
# - "-instances=PROJECT:REGION:INSTANCE=tcp:5432"

- If `extraContainers` is not defined or empty, the Pod will have only the default container (current behavior).
- If you define `extraContainers`, all of these containers will be added to the same Pod along with the main container.
serviceAccount:
create: true
automount: true
annotations: {}
  name: "" # If defined, uses this name (does not create)

nameOverride: "" # Replaces Chart.Name
fullnameOverride: "" # Replaces Release.Name (has priority)

helm install deco-studio . -n deco-studio --create-namespace

# production-values.yaml
replicaCount: 3
image:
tag: "v1.0.0"
service:
type: LoadBalancer
resources:
requests:
memory: "300Mi"
cpu: "250m"
limits:
memory: "600Mi"
cpu: "500m"
persistence:
size: 10Gi
storageClass: "efs"
configMap:
meshConfig:
NODE_ENV: "production"
    BASE_URL: "https://studio.example.com"

helm install deco-studio . -f production-values.yaml -n deco-studio --create-namespace

# autoscaling-values.yaml
autoscaling:
enabled: true
minReplicas: 3
maxReplicas: 6
targetMemoryUtilizationPercentage: 80
resources:
requests:
memory: "300Mi"
cpu: "250m"
limits:
memory: "600Mi"
    cpu: "500m"

helm install deco-studio . -f autoscaling-values.yaml -n deco-studio --create-namespace

# existing-pvc-values.yaml
persistence:
enabled: true
claimName: "existing-studio-data" # Name of PVC that already exists in cluster
# When claimName is defined, chart does NOT create a new PVC
  # Only references the specified existing PVC

# PVC must exist before installing chart
kubectl get pvc existing-studio-data -n deco-studio
# Install using existing PVC
helm install deco-studio . -f existing-pvc-values.yaml -n deco-studio --create-namespace
# Deployment will be created referencing existing PVC
# No new PVC will be created by this chart

When to use:
- Migrate data from previous installation
- Reuse data between different Helm releases
- Use PVCs created manually or by other processes
# postgresql-managed-values.yaml
database:
engine: postgresql
url: "postgresql://postgres:password@rds-instance.sa-east-1.rds.amazonaws.com:5432/mydb?sslmode=verify-ca"
caCert: |
-----BEGIN CERTIFICATE-----
MIID/jCCAuagAwIBAgIQdOCSuA9psBpQd8EI368/0DANBgkqhkiG9w0BAQsFADCB
... (complete CA certificate content)
-----END CERTIFICATE-----
configMap:
meshConfig:
DATABASE_PG_SSL: "true"
NODE_EXTRA_CA_CERTS: "/etc/ssl/certs/ca-cert.pem"
persistence:
  enabled: false # No PVC needed when using external PostgreSQL

# Download CA certificate from AWS RDS (example for sa-east-1)
curl -o sa-east-1-bundle.pem https://truststore.pki.rds.amazonaws.com/sa-east-1/sa-east-1-bundle.pem
# Copy certificate content to values.yaml
cat sa-east-1-bundle.pem
# Install with managed PostgreSQL
helm install deco-studio . -f postgresql-managed-values.yaml -n deco-studio --create-namespace

Note: This example works for AWS RDS and other managed database providers. For other providers, consult the documentation to obtain the appropriate CA certificate.
# dev-values.yaml
persistence:
enabled: false # Uses emptyDir (temporary data)
replicaCount: 1
resources:
requests:
memory: "300Mi"
cpu: "100m"
limits:
memory: "512Mi"
    cpu: "500m"

helm install deco-studio . -f dev-values.yaml -n deco-studio --create-namespace

# Uses only release name
helm install deco-studio . -n deco-studio --create-namespace
# Or completely override
helm install deco-studio . \
--set fullnameOverride=custom-studio \
  -n deco-studio --create-namespace

# existing-secret-values.yaml
secret:
secretName: "external-secrets-operator-secret" # Name of secret that already exists in cluster
# BETTER_AUTH_SECRET not required when using existing secret
# Existing secret must contain keys:
# - BETTER_AUTH_SECRET (required)
  # - DATABASE_URL (required only if database.engine=postgresql)

# Secret must exist before installing chart
kubectl get secret external-secrets-operator-secret -n deco-studio
# Verify it contains necessary keys
kubectl get secret external-secrets-operator-secret -n deco-studio -o jsonpath='{.data}' | jq 'keys'
# Install using existing Secret
helm install deco-studio . -f existing-secret-values.yaml -n deco-studio --create-namespace
# Deployment will be created referencing existing Secret
# No new Secret will be created by this chart

When to use:
- Use secrets managed by External Secrets Operator, etc
- Share secrets between different Helm releases
- Use secrets created manually or by other processes
# lifecycle-values.yaml
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
    - "wget -q -O - http://localhost:9229/prestop-hook || true"

helm install deco-studio . -f lifecycle-values.yaml -n deco-studio --create-namespace

When to use:
- Graceful shutdown of connections before pod termination
- Cleanup tasks or resource release
- Notifying external systems about pod termination
- Draining connections or stopping background processes
Note: The preStop hook runs before the container receives the SIGTERM signal, allowing for graceful shutdowns during rolling updates or pod deletions.
# Edit values.yaml or create new file
vim custom-values.yaml
# Update release
helm upgrade deco-studio . -f custom-values.yaml -n deco-studio
# View history
helm history deco-studio -n deco-studio
# Rollback
helm rollback deco-studio -n deco-studio

# Option 1: Update values.yaml and upgrade
helm upgrade deco-studio . \
--set image.tag=v1.2.3 \
-n deco-studio
# Option 2: If pullPolicy: Always, just restart
kubectl rollout restart deployment/deco-studio -n deco-studio

# Edit values.yaml
vim values.yaml
# Update
helm upgrade deco-studio . -n deco-studio
# Restart pods to pick up changes
kubectl rollout restart deployment/deco-studio -n deco-studio

# See what will be generated
helm template deco-studio . -n deco-studio
# See diff between versions
helm diff upgrade deco-studio . -n deco-studio

# If using PVC
POD=$(kubectl get pod -l app.kubernetes.io/instance=deco-studio -n deco-studio -o jsonpath='{.items[0].metadata.name}')
kubectl cp deco-studio/$POD:/app/data/mesh.db ./backup-$(date +%Y%m%d).db

Recommended options:
- External Secrets Operator:
secret:
  BETTER_AUTH_SECRET: "" # Filled via ExternalSecret

- Values via command line:
helm install deco-studio . \
--set secret.BETTER_AUTH_SECRET=$(cat secret.txt) \
  -n deco-studio --create-namespace

The chart already includes:
- ✅ `runAsNonRoot: true`
- ✅ `allowPrivilegeEscalation: false`
- ✅ `capabilities.drop: ALL`
- ⚠️ `readOnlyRootFilesystem: false` (can be enabled with tmpfs volumes)
All resources have standardized labels:
# View all release resources
kubectl get all -l app.kubernetes.io/instance=deco-studio -n deco-studio
# View logs
kubectl logs -l app.kubernetes.io/instance=deco-studio -n deco-studio
# View metrics
kubectl top pods -l app.kubernetes.io/instance=deco-studio -n deco-studio

- Liveness: Kills and recreates pods with problems
- Readiness: Removes pods from the Service when not ready
This chart is part of the deco-studio project.
