Kustomize for Multi-Environment Kubernetes: Beyond Basic Configuration
You’ve copied your Kubernetes YAML files across dev, staging, and production folders, manually changing replicas, resource limits, and image tags in each. Now a security patch requires updating all three environments, and you’re dreading the error-prone copy-paste marathon ahead. You open your staging deployment, locate the container image line, update the tag, then repeat for production and dev. Somewhere in this process, you fat-finger the replica count in production, setting it to 1 instead of 10. The deployment goes through without errors—Kubernetes doesn’t know you meant something different—and your application’s capacity drops to 10% during peak traffic.
This is the configuration management problem that haunts multi-environment Kubernetes setups. The naive approach is copying complete YAML manifests into environment-specific directories, then maintaining three or more versions of what is essentially the same application definition. Every change becomes a synchronization exercise: did you remember to update the resource limits in all three environments? Did you apply the same security context across the board? The cognitive overhead compounds with each service you add to your cluster.
The standard escape route is Helm, which replaces copy-paste with a templating language. But Helm introduces its own complexity: you’re now maintaining values files, learning template functions, and debugging curly-brace syntax errors in what used to be straightforward YAML. For many teams, this feels like trading one problem for another—you wanted to eliminate duplication, not learn a DSL.
Kustomize offers a different path. Instead of templates, it uses pure YAML and a patch-based approach that keeps your base configuration clean while allowing environment-specific overrides. The key insight is that most environment differences are small deltas—production needs more replicas, staging uses a different domain, dev skips the ingress entirely—and you can express these as transformations rather than rebuilding the entire configuration from scratch.
Why Configuration Management Gets Messy
Every platform engineer has been here: you start with a single Kubernetes deployment YAML for development, then copy-paste it for staging with a few tweaks to resource limits and image tags. When production rolls around, you duplicate it again, changing the replica count, adding PodDisruptionBudgets, and swapping out ConfigMap references. Before long, you’re maintaining three nearly-identical 200-line YAML files where a single typo in the shared portions requires three separate fixes.

This is the DRY (Don’t Repeat Yourself) violation at the heart of multi-environment Kubernetes management. The naive approach—separate YAML files per environment—creates maintenance nightmares. Change your container port? Update it in three places. Add a new label for observability? Hope you remember all the files. The cognitive overhead compounds with every service you deploy.
The Template Escape Hatch (And Its Costs)
Helm emerged as the dominant solution by treating Kubernetes manifests as templates. Variables like {{ .Values.replicaCount }} and conditional logic replace hardcoded values, letting a single chart generate environment-specific manifests. For complex applications with deep dependency trees—think multi-component platforms requiring sub-charts and lifecycle hooks—Helm’s templating power becomes essential.
But this power has a price. Helm introduces a significant learning curve beyond Kubernetes itself: template syntax, value precedence rules, and the helm CLI lifecycle. More critically, it obscures what actually gets deployed. A values.yaml file doesn’t show you the final manifest without running helm template or inspecting a release. When debugging why your production deployment differs from staging, you’re reverse-engineering template logic instead of comparing YAML.
For teams managing straightforward applications across environments—a typical web service with different resource allocations per environment, or a data pipeline with environment-specific credentials—Helm’s abstraction layer often adds complexity without proportional benefit.
Kustomize’s Native Advantage
Kustomize takes a fundamentally different approach: it works with plain Kubernetes YAML and applies declarative transformations. Your base configuration remains valid Kubernetes manifests that kubectl apply can use directly. Environment differences are expressed as patches and overlays, not template variables.
This approach offers immediate advantages. New team members already know the format—it’s just Kubernetes YAML. Reviewing changes in pull requests shows actual manifest modifications, not template logic. Most importantly, Kustomize has been built into kubectl since version 1.14, eliminating external dependencies for basic workflows.
The overlay pattern particularly shines when environment differences are modest but critical. Production needs higher resource limits and stricter security contexts? Apply a patch. Staging uses a different ConfigMap? Override it in the overlay. The base configuration captures what’s common; overlays express only what differs.
This sets up our exploration of how the base and overlay pattern actually works in practice.
The Base and Overlay Pattern Explained
Kustomize operates on a deceptively simple principle: define your common configuration once, then layer environment-specific changes on top. This base-and-overlay architecture eliminates the duplication that plagues traditional YAML management while maintaining full transparency into what’s deployed where.

Bases: Your Single Source of Truth
A base contains the Kubernetes resources that remain consistent across all environments—your deployments, services, and config maps stripped of environment-specific details. Think of it as the “production-ready” configuration with generic values. When you define a Deployment in your base, you specify the container image, resource limits, and essential labels without hardcoding staging URLs or production replica counts.
The base is not a template. It’s valid, deployable YAML that Kustomize will transform, not render. This distinction matters: you can kubectl apply a base directly if needed, making debugging straightforward. No mysterious template variables or rendering failures.
Overlays: Declarative Transformations
Overlays contain the patches and modifications for specific environments. A staging overlay might reduce replica counts and point to staging databases. A production overlay adds resource quotas, increases replicas, and injects production secrets. Each overlay references the base and declares what changes to apply.
Kustomize merges these layers using strategic merge patches and JSON patches. Strategic merge patches work intuitively—specify only the fields you want to change, and Kustomize merges them into the base configuration. Want to change the replica count? Your overlay includes just the spec.replicas field. Want to add an environment variable? Include only the new variable in spec.template.spec.containers[0].env. Kustomize handles the deep merge logic.
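For instance, an overlay patch that bumps replicas and adds one environment variable contains only those fields; everything else comes from the base. (A minimal sketch; the names web-app and CACHE_ENABLED are illustrative.)

```yaml
# replica-and-env-patch.yaml — strategic merge patch, matched to the base by name
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 10                    # replaces the base value
  template:
    spec:
      containers:
        - name: web-app           # containers are merged by name, not index
          env:
            - name: CACHE_ENABLED # appended to the existing env list
              value: "true"
```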
The Declarative Merge Strategy
Unlike template engines that perform string substitution, Kustomize understands Kubernetes resource structure. It knows that container environment variables should be merged by name, that labels should be additive, and that certain fields like image tags should be replaced completely. This semantic awareness prevents the subtle bugs that emerge from text-based templating—no more accidentally duplicating array entries or losing critical labels during substitution.
The merge happens at build time, not runtime. Running kustomize build overlays/production produces complete, ready-to-apply YAML. You see exactly what Kubernetes will receive, with no hidden logic or deferred evaluation. This transparency makes code reviews meaningful and debugging trivial.
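One practical consequence, assuming the standalone kustomize binary is installed: you can diff exactly what two environments will receive, with no template rendering in between.

```shell
# Compare fully rendered manifests for two environments
diff <(kustomize build overlays/staging) <(kustomize build overlays/production)
```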
With this mental model established, you’re ready to build your first base configuration and see how these concepts translate into actual project structure.
Building Your First Base Configuration
A solid base configuration is the foundation of your multi-environment Kustomize setup. This base contains all the common Kubernetes manifests and settings that remain consistent across environments—deployments, services, and shared configuration that doesn’t change whether you’re running in dev or production.
Creating the Base Directory Structure
Start by organizing your project with a clear separation between base and overlays:
```
my-app/
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays/
    ├── dev/
    ├── staging/
    └── production/
```
The base directory holds your canonical application manifests. Create a standard Kubernetes deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: my-registry.io/web-app:1.0.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
```
Add a corresponding service:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```
Defining the Kustomization File
The kustomization.yaml ties everything together and applies transformations across all resources:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml

commonLabels:
  app.kubernetes.io/name: web-app
  app.kubernetes.io/managed-by: kustomize

commonAnnotations:
  documentation: https://wiki.company.com/web-app

namespace: default
```
The resources field lists all manifests to include. Kustomize automatically applies commonLabels to all resources, including selector labels and template labels. This ensures consistency and makes querying with kubectl get pods -l app.kubernetes.io/name=web-app reliable across your entire stack.
The namespace field sets a default namespace for all resources. While overlays can override this, defining it in the base provides a sensible default and documents the intended deployment target. Common labels following the app.kubernetes.io/ convention integrate well with Kubernetes tooling and provide standardized metadata for monitoring, logging, and observability platforms.
Understanding Common Labels and Annotations
Common labels and annotations are applied to every resource Kustomize processes. Labels serve as queryable metadata for selecting and grouping resources, while annotations store non-identifying information like documentation links, deployment policies, or integration metadata.
When Kustomize applies commonLabels, it intelligently injects them into multiple locations within each manifest. For a Deployment, labels appear in the metadata section, in the selector, and in the pod template metadata. This automatic propagation eliminates manual duplication and prevents configuration drift where selector labels don’t match pod labels.
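As a sketch of that propagation, a single commonLabels entry surfaces in three places of the rendered Deployment (the label values here mirror the earlier base example):

```yaml
# Rendered Deployment (sketch): one commonLabels entry, three injection points
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: web-app      # 1. resource metadata
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: web-app    # 2. selector, kept in sync automatically
  template:
    metadata:
      labels:
        app.kubernetes.io/name: web-app  # 3. pod template, so pods actually match
```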
Use common labels for operational metadata that genuinely applies across all resources. Avoid environment-specific labels like environment: production in the base—these belong in overlays. Common annotations work well for team ownership, runbook links, cost center tags, or any metadata consumed by external systems like service meshes, policy engines, or cloud provider integrations.
Adding Shared Configuration
ConfigMaps and Secrets often contain values used across environments. Define them in your base when they’re truly universal:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml

commonLabels:
  app.kubernetes.io/name: web-app
  app.kubernetes.io/managed-by: kustomize

commonAnnotations:
  documentation: https://wiki.company.com/web-app

configMapGenerator:
  - name: app-config
    literals:
      - LOG_LEVEL=info
      - MAX_CONNECTIONS=100
      - FEATURE_FLAG_NEW_UI=false

secretGenerator:
  - name: app-secrets
    literals:
      - database-name=webapp_db
```
The generators create ConfigMaps and Secrets with automatic hash suffixes (e.g., app-config-k4h7m8t9gh). This hash changes when content changes, triggering rolling updates automatically—no manual pod restarts needed.
ConfigMap and Secret generators support multiple input sources. Use literals for simple key-value pairs, files to load entire files as values, or envs to source from environment files. Generators can also set behaviors like disableNameSuffixHash if you need stable names for resources referenced outside Kubernetes.
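A sketch combining those input sources (the generator names and filenames here are illustrative, not from the example app):

```yaml
configMapGenerator:
  - name: nginx-conf
    files:
      - nginx.conf                  # file contents become the value, filename the key
  - name: runtime-env
    envs:
      - runtime.env                 # each KEY=value line becomes its own entry
    options:
      disableNameSuffixHash: true   # stable name, for consumers outside Kubernetes
```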
The hash suffix mechanism provides immutable configuration semantics. When you update a ConfigMap’s contents, Kustomize generates a new object with a different name. Since the Deployment references the ConfigMap by name, Kubernetes sees this as a spec change and triggers a rolling update. This eliminates a common operational pain point where configuration changes don’t propagate without manual intervention.
💡 Pro Tip: Keep sensitive values out of your base entirely. Use secretGenerator with literals only for non-sensitive defaults like database names or API endpoints. Actual credentials belong in environment-specific overlays or external secret management systems.
Validating Your Base Configuration
Before creating overlays, validate that your base produces correct output:
```shell
cd base
kustomize build .
```
This outputs the complete manifests with all transformations applied. Examine the output for common labels appearing in all resources, generated ConfigMaps with hash suffixes, and properly formed Kubernetes manifests. Piping the output to kubectl apply --dry-run=client -f - provides additional validation against the Kubernetes API schema.
If you see your resources with the common labels and generated ConfigMaps, your base is ready. You’ve established a solid foundation that environment-specific overlays can inherit and modify. This base becomes the single source of truth for your application’s core configuration, while overlays handle the environmental variations that make each deployment unique.
Environment-Specific Overlays in Practice
With the base configuration established, overlays transform generic Kubernetes manifests into environment-specific deployments. Each overlay directory represents a deployment target—dev, staging, or production—with modifications that reflect the operational requirements of that environment.
Structuring Environment Overlays
Create separate overlay directories for each environment:
```
overlays/
├── dev/
│   ├── kustomization.yaml
│   ├── replica-patch.yaml
│   └── resources-patch.yaml
├── staging/
│   ├── kustomization.yaml
│   ├── replica-patch.yaml
│   └── config-patch.yaml
└── production/
    ├── kustomization.yaml
    ├── replica-patch.yaml
    ├── resources-patch.yaml
    └── hpa.yaml
```
Each kustomization.yaml references the base and applies environment-specific transformations. Start with the development overlay:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namePrefix: dev-
nameSuffix: -v1

resources:
  - ../../base

images:
  - name: myapp
    newTag: latest

replicas:
  - name: myapp
    count: 1

patches:
  - path: resources-patch.yaml
```
The namePrefix and nameSuffix fields automatically rename all resources, creating isolation between environments deployed to the same cluster. A deployment named myapp becomes dev-myapp-v1, preventing naming collisions and enabling safe multi-environment deployments within shared infrastructure.
Resource Isolation with Name Transformations
Name prefixes and suffixes serve as more than collision prevention—they establish clear ownership boundaries and enable advanced deployment patterns. When combined with Kubernetes namespaces, name transformations create a multi-layered isolation strategy that prevents cross-environment interference.
Consider a scenario where both staging and production coexist in the same cluster but different namespaces. Without name transformations, resources would share identical names, complicating monitoring, logging aggregation, and incident response. The namePrefix: prod- directive ensures that every resource—deployments, services, configmaps, and secrets—carries an unambiguous environment identifier throughout its lifecycle.
Name suffixes enable versioning strategies for blue-green deployments or canary releases. Setting nameSuffix: -v2 in a production overlay creates prod-myapp-v2, allowing the new version to run alongside prod-myapp-v1 during gradual traffic migration. Once validated, you can update service selectors to target the new version and decommission the old deployment without downtime.
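A minimal sketch of such a versioned production overlay (the image tag is illustrative):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

namePrefix: prod-
nameSuffix: -v2       # every resource becomes prod-<name>-v2

images:
  - name: myapp
    newTag: v2.0.0    # the candidate version running alongside -v1
```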
Patching Resources Per Environment
Resource requirements differ dramatically between environments. Development needs minimal resources for rapid iteration, while production demands headroom for traffic spikes:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - name: myapp
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
```
Production overlays specify higher resource allocations:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - name: myapp
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "1000m"
```
Kustomize applies these patches using strategic merge, updating only the specified fields while preserving the base configuration. This surgical approach minimizes overlay complexity—you define only what changes, not the entire resource specification.
Replica counts follow similar patterns. Development typically runs a single replica for cost efficiency, while production scales to handle real traffic and provide high availability:
```yaml
replicas:
  - name: myapp
    count: 5
  - name: background-worker
    count: 3
```
For production workloads that experience variable traffic patterns, combine static replica counts with HorizontalPodAutoscaler resources that dynamically adjust capacity based on CPU, memory, or custom metrics. Include the HPA definition directly in the production overlay:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 5
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```
Managing Environment-Specific Configuration
Each environment requires distinct configuration values—API endpoints, feature flags, and external service URLs. Use ConfigMap generators to create environment-specific config:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

namePrefix: staging-

configMapGenerator:
  - name: app-config
    behavior: merge
    literals:
      - API_ENDPOINT=https://api.staging.example.com
      - LOG_LEVEL=debug
      - FEATURE_NEW_UI=true
      - CACHE_TTL=300
```
The behavior: merge directive combines these values with any ConfigMap defined in the base, allowing you to override specific keys without redefining the entire configuration. This approach maintains DRY principles—shared configuration lives in the base, environment-specific overrides reside in overlays.
Secrets follow a similar pattern, though actual secret values should never be committed to Git. Use placeholders in overlays and inject real values through CI/CD pipelines or external secret management systems:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

namePrefix: prod-

secretGenerator:
  - name: app-secrets
    behavior: merge
    envs:
      - secrets.env
```
The secrets.env file (excluded from version control via .gitignore) contains key-value pairs loaded into the Secret resource. In production workflows, CI/CD systems retrieve secrets from vaults like HashiCorp Vault or AWS Secrets Manager and write them to secrets.env before running kubectl kustomize. This keeps sensitive data out of repositories while maintaining the declarative configuration model.
Image Tag Management
Different environments run different versions of your application. Development might track the latest tag for continuous deployment, while production requires explicit version pinning:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

namePrefix: prod-

images:
  - name: myapp
    newName: registry.example.com/myapp
    newTag: v2.3.1
  - name: sidecar
    newName: registry.example.com/sidecar
    newTag: v1.5.0

replicas:
  - name: myapp
    count: 5
```
This overlay transforms image references from myapp:latest to registry.example.com/myapp:v2.3.1, ensuring production deployments are deterministic and auditable. The replicas field scales the deployment to five pods, matching production capacity requirements. When managing multiple container images—main application containers plus sidecars for logging, metrics, or service mesh proxies—define transformation rules for each image independently.
💡 Pro Tip: Use semantic versioning for production tags and automate tag updates through CI/CD pipelines. Pin staging to release candidate tags (v2.3.1-rc1) to test promotion workflows before production deployment. Avoid mutable tags like latest in any long-lived environment—they compromise reproducibility and complicate rollback operations.
With environment overlays configured, generating final manifests requires a single command: kubectl kustomize overlays/production. The output contains fully merged, environment-specific Kubernetes resources ready for deployment. The next section explores advanced patterns like generators and transformers that further reduce configuration duplication.
Advanced Patterns: Generators and Transformers
Once you’ve mastered base-overlay patterns, Kustomize’s generators and transformers unlock sophisticated configuration management without resorting to templating engines. These features handle the most common pain points in multi-environment Kubernetes deployments: secret rotation, configuration file injection, and surgical YAML modifications.
Automatic Secret Rotation with secretGenerator
The secretGenerator creates ConfigMaps and Secrets with content-hash suffixes, triggering automatic pod rollouts when values change. This eliminates the manual coordination headache of updating secrets and deployments.
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

secretGenerator:
  - name: db-credentials
    literals:
      - username=app_user
      - password=base_password
    options:
      disableNameSuffixHash: false
```
An overlay can then swap in its own values by replacing the base-generated Secret:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

secretGenerator:
  - name: db-credentials
    behavior: replace
    files:
      - credentials.txt
    envs:
      - db.env
```
When Kustomize builds this, it generates names like db-credentials-5t8c9m4gh2. Change the secret content, and the hash updates automatically. Your deployments reference the secret by name without the hash—Kustomize rewrites all references throughout your manifests. No manual deployment triggers required.
The hash-based naming solves a critical Kubernetes limitation: ConfigMaps and Secrets are mounted into pods at creation time. Without a name change, updating a Secret doesn’t trigger a pod restart, leaving your application running with stale credentials. The automatic hash suffix forces Kubernetes to recognize the resource as new, triggering your deployment’s rolling update strategy.
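In your manifests you reference the generated Secret by its declared name, and Kustomize rewrites the reference at build time. (A sketch; the hash suffix shown is illustrative.)

```yaml
# In the base Deployment you write the unsuffixed name:
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials            # as declared in secretGenerator
        key: password

# In the rendered output, Kustomize emits the hashed name instead:
#        name: db-credentials-5t8c9m4gh2
```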
For production secret management, combine secretGenerator with external sources rather than checking sensitive values into git. Reference files that your CI/CD pipeline populates from a vault system, or use the envs directive to load environment files generated at build time. The disableNameSuffixHash option exists but defeats the primary benefit—only use it for secrets that truly never change and don’t require pod restarts.
ConfigMap Generators with File Sources
Loading configuration files directly into ConfigMaps beats embedding them in YAML. The configMapGenerator supports multiple source types and merge behaviors.
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

configMapGenerator:
  - name: app-config
    behavior: merge
    files:
      - configs/application.properties
      - configs/logback.xml
    literals:
      - LOG_LEVEL=debug
      - FEATURE_FLAG_NEW_UI=true
```
The behavior: merge directive combines this generator with any base-level ConfigMap of the same name. Use replace to completely override base values, or create when the ConfigMap doesn't exist in the base at all.
File-based ConfigMap generation shines when managing complex configuration formats—application.properties files, XML configurations, or structured data that’s painful to escape in YAML literals. Each file becomes a separate key in the ConfigMap, with the filename as the key name. This approach maintains the original file format in your repository, making it easier to validate configurations with standard tooling before deployment.
The literals field handles simple key-value pairs, useful for feature flags and environment-specific settings that differ between overlays. Mixing files and literals in the same generator gives you the best of both worlds: complex configurations from files, simple toggles from literals.
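The rendered result of mixing files and literals looks roughly like this (a sketch; the hash suffix and embedded contents are placeholders):

```yaml
# Rendered ConfigMap (sketch): filenames become keys, literals become entries
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-7g2f9b4k8d   # hash suffix is illustrative
data:
  application.properties: |
    # full file contents embedded here
  logback.xml: |
    <!-- full file contents embedded here -->
  LOG_LEVEL: debug
  FEATURE_FLAG_NEW_UI: "true"
```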
Strategic Merge vs JSON Patches
Strategic merge patches work for most overlay scenarios—adding labels, changing replica counts, updating container images. They merge arrays by element name and feel natural for Kubernetes resources.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  replicas: 5
  template:
    spec:
      containers:
        - name: api
          resources:
            limits:
              memory: "2Gi"
              cpu: "1000m"
```
Strategic merge patches understand Kubernetes resource semantics. When patching a container array, you specify the container by name, and Kustomize merges your changes into the matching element. This feels intuitive and reads like the final manifest you want to produce.
JSON patches (RFC 6902) handle surgical modifications that strategic merge can’t express—removing array elements by index, conditional operations, or moving values.
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

patches:
  - target:
      kind: Deployment
      name: api-server
    patch: |-
      - op: remove
        path: /spec/template/spec/containers/0/env/2
      - op: add
        path: /spec/template/spec/nodeSelector/disktype
        value: ssd
```
Use strategic merge patches as your default. Reach for JSON patches when you need precise path-based operations or when strategic merge produces unexpected array behavior. JSON patches excel at removing specific elements—deleting the third environment variable, removing a volume mount, or stripping out development-only sidecars. The trade-off is brittleness: array index operations break if base manifest ordering changes.
The target selector in the patches array provides fine-grained control over which resources receive which patches. You can target by kind, name, namespace, label selector, or annotation selector. Multiple target criteria combine with AND logic, letting you apply a patch to “all Deployments in the backend namespace with the tier=api label.”
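A sketch of such a label-targeted patch; the namespace, label, and revisionHistoryLimit value are illustrative:

```yaml
patches:
  - target:
      kind: Deployment
      namespace: backend
      labelSelector: tier=api        # criteria combine with AND
    patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: ignored-by-selector    # matching comes from the target, not this name
      spec:
        revisionHistoryLimit: 3
```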
Components for Shared Overlay Logic
Components extract reusable configuration chunks that multiple overlays can reference without inheritance. Perfect for cross-cutting concerns like monitoring sidecars, security policies, or regional configurations.
```yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

patches:
  - target:
      kind: Deployment
    patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: not-important
      spec:
        template:
          spec:
            containers:
              - name: prometheus-exporter
                image: prom/node-exporter:v1.7.0
                ports:
                  - containerPort: 9100
```
An overlay opts into components alongside its base:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

components:
  - ../../components/monitoring
  - ../../components/security-hardening
```
Components compose horizontally while overlays stack vertically. An overlay can include zero, one, or many components, and multiple overlays can share the same component without duplication. This pattern shines when your staging and production environments both need monitoring, but only production requires additional compliance components.
Think of components as mix-ins or traits. A monitoring component adds Prometheus exporters, a security-hardening component applies pod security contexts and network policies, a regional-configuration component sets zone-specific node selectors. Your overlays compose these capabilities as needed without creating a tangled inheritance hierarchy.
Components support all the same features as bases—generators, transformers, patches, and resources. The key difference is intent: a base represents a deployment’s core configuration, while a component represents optional, composable behavior. This semantic distinction keeps your kustomization structure clear as complexity grows.
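Each component directory carries its own kustomization.yaml declaring the Component kind. A sketch for the monitoring component referenced above (the resource and patch filenames are illustrative):

```yaml
# components/monitoring/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

resources:
  - servicemonitor.yaml              # components can ship extra resources

patches:
  - path: exporter-sidecar-patch.yaml
    target:
      kind: Deployment               # applied to whichever base includes this component
```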
With generators handling dynamic content and transformers providing precise control over YAML mutations, you’re equipped to manage complex Kubernetes configurations declaratively. The next challenge becomes integrating these patterns into automated deployment pipelines.
Integrating Kustomize into CI/CD
Moving Kustomize from local development to production deployments requires integrating it into your continuous delivery pipeline. Because Kustomize is built into kubectl, you can deploy with a single command, but production-grade pipelines also need validation, security scanning, and proper version control practices.
Deploying with kubectl apply -k
The most straightforward deployment approach uses kubectl apply -k directly against your overlay directories. Here’s a GitHub Actions workflow that deploys to staging and production environments:
```yaml
name: Deploy to Kubernetes
on:
  push:
    branches: [main]

jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure kubectl
        uses: azure/k8s-set-context@v3
        with:
          kubeconfig: ${{ secrets.KUBECONFIG_STAGING }}

      - name: Validate Kustomize build
        run: kubectl kustomize overlays/staging | kubectl apply --dry-run=server -f -

      - name: Deploy to staging
        run: kubectl apply -k overlays/staging

      - name: Verify deployment
        run: kubectl rollout status deployment/my-app -n staging --timeout=5m

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Configure kubectl
        uses: azure/k8s-set-context@v3
        with:
          kubeconfig: ${{ secrets.KUBECONFIG_PROD }}

      - name: Deploy to production
        run: kubectl apply -k overlays/production
```
This pipeline validates staging deployments with server-side dry runs before applying changes, then requires manual approval before promoting to production using GitHub’s environment protection rules.
The server-side dry run (--dry-run=server) is crucial because it validates against your actual cluster’s admission controllers and webhooks, catching issues that client-side validation would miss. For example, if you have a policy controller that enforces resource limits, the server-side dry run will fail if your manifests violate those policies.
For Jenkins pipelines, the same principles apply with a different syntax:
```groovy
pipeline {
    agent any
    stages {
        stage('Validate') {
            steps {
                sh 'kubectl kustomize overlays/staging | kubectl apply --dry-run=server -f -'
            }
        }
        stage('Deploy to Staging') {
            steps {
                sh 'kubectl apply -k overlays/staging'
                sh 'kubectl rollout status deployment/my-app -n staging --timeout=5m'
            }
        }
        stage('Deploy to Production') {
            when { branch 'main' }
            steps {
                input message: 'Deploy to production?'
                sh 'kubectl apply -k overlays/production'
            }
        }
    }
}
```
Pre-Deployment Validation
Beyond dry runs, validate your Kustomize builds with additional tooling. Create a validation script that catches common issues:
```shell
#!/bin/bash
set -e

OVERLAY_DIR=$1

echo "Building Kustomize overlay: ${OVERLAY_DIR}"
kubectl kustomize "${OVERLAY_DIR}" > /tmp/manifests.yaml

echo "Validating Kubernetes schemas with kubeval"
kubeval --strict /tmp/manifests.yaml

echo "Checking for security issues with kubesec"
kubesec scan /tmp/manifests.yaml

echo "Linting with kube-linter"
kube-linter lint /tmp/manifests.yaml

echo "Validation passed for ${OVERLAY_DIR}"
```
Run this script for each overlay in your CI pipeline before any deployment. Tools like kubeval catch schema violations, kubesec identifies security misconfigurations (such as containers running as root or missing resource limits), and kube-linter enforces organizational policies like mandatory labels or probe configurations.
Integrate this validation early in your pipeline. In GitHub Actions, run it as a required check on pull requests so developers get immediate feedback:
```yaml
validation:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Install validation tools
      run: |
        wget https://github.com/instrumenta/kubeval/releases/latest/download/kubeval-linux-amd64.tar.gz
        tar xf kubeval-linux-amd64.tar.gz
        sudo mv kubeval /usr/local/bin
    - name: Validate all overlays
      run: |
        for overlay in overlays/*/; do
          ./scripts/validate-kustomize.sh "$overlay"
        done
```

💡 Pro Tip: Pin your Kustomize version in CI/CD by using a specific kubectl image (e.g., bitnami/kubectl:1.29) instead of relying on the runner’s default version. Kustomize behavior can change between kubectl releases, and version mismatches between local development and CI can cause unexpected build differences.
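As a sketch of what that pinning might look like in a GitHub Actions job (the job name and steps are illustrative; the image tag is the example from the tip above):

```yaml
# Illustrative: run the job's steps inside a pinned kubectl image so CI
# builds use the same Kustomize version as local development.
validate-pinned:
  runs-on: ubuntu-latest
  container:
    image: bitnami/kubectl:1.29
  steps:
    - uses: actions/checkout@v4
    - name: Build overlays with the pinned Kustomize
      run: |
        for overlay in overlays/*/; do
          kubectl kustomize "$overlay" > /dev/null
        done
```

Because the steps run inside the container, `kubectl kustomize` resolves to the pinned release regardless of what the runner image ships with.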
Version Control Strategies
Store your base and overlays in the same repository as your application code. This keeps configuration changes atomic with code changes and simplifies rollbacks. Use this directory structure:
```
deploy/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   └── service.yaml
└── overlays/
    ├── staging/
    │   └── kustomization.yaml
    └── production/
        └── kustomization.yaml
```

Tag releases when deploying to production. If you need to roll back, check out the previous tag and redeploy:
```bash
git checkout v1.2.3
kubectl apply -k overlays/production
```

For teams managing multiple applications, consider a monorepo structure where each application has its own deploy directory. This centralizes infrastructure configuration and makes cross-application changes easier to coordinate. Alternatively, use a separate “config repo” that references application images by tag—this pattern works especially well with GitOps tools.
When using image tags in your overlays, prefer immutable tags (like SHA digests) over mutable tags (like latest). Update the image tag in your kustomization.yaml as part of your CI process:
```bash
cd overlays/production
kustomize edit set image myapp=myregistry/myapp:${GIT_SHA}
git add kustomization.yaml
git commit -m "Deploy ${GIT_SHA} to production"
```

This creates an audit trail of exactly which image version is deployed in each environment.
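Pinning by digest rather than tag can also be expressed directly in the kustomization.yaml images transformer. A minimal excerpt, with a placeholder where the real digest from your registry would go:

```yaml
# overlays/production/kustomization.yaml (excerpt)
images:
  - name: myapp
    newName: myregistry/myapp
    # Immutable reference: substitute the actual digest from your registry
    digest: sha256:<image-digest>
```

Unlike a mutable tag, a digest can never be re-pointed at different image contents, so what you reviewed is exactly what runs.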
GitOps with ArgoCD and Flux
For true GitOps workflows, integrate Kustomize with ArgoCD or Flux. Both tools natively support Kustomize and continuously reconcile your cluster state with Git. Here’s an ArgoCD Application manifest:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/my-app
    targetRevision: main
    path: deploy/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

ArgoCD watches your Git repository and automatically applies changes when you merge to main. The automated sync policy enables self-healing, so manual cluster changes get reverted to match Git.
Flux provides similar functionality with a different architecture. Here’s a Flux Kustomization resource:
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app-production
  namespace: flux-system
spec:
  interval: 5m
  path: ./deploy/overlays/production
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-app
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: my-app
      namespace: production
```

The key advantage of GitOps is declarative drift detection. If someone manually edits a deployment in production, ArgoCD or Flux will detect the difference from Git and either alert you or automatically revert the change. This enforces Git as the single source of truth.
With your pipeline established, the next consideration is understanding when Kustomize makes sense versus reaching for a more feature-rich tool like Helm.
When to Choose Kustomize Over Helm
The decision between Kustomize and Helm isn’t about which tool is superior—it’s about matching the tool to your use case. Each excels in different scenarios, and understanding these distinctions prevents architectural missteps.
Kustomize’s Sweet Spot: Internal Applications
Kustomize shines when you own the entire configuration lifecycle. For internal applications where your team controls the base manifests and all environment variations, Kustomize delivers transparency without ceremony. You’re working directly with standard Kubernetes YAML, making it trivial to understand exactly what will be deployed. There’s no templating language to learn, no values schema to maintain, and no abstraction layer obscuring the actual resources.
The overlay pattern maps naturally to organizational boundaries. Different teams can maintain their own overlays while inheriting from a common base, and changes propagate explicitly rather than through template logic. When a junior engineer reviews a pull request, they see concrete YAML diffs, not Go template conditionals.
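A minimal sketch of that pattern, assuming the base defines a Deployment named my-app (file names here are illustrative):

```yaml
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replica-count.yaml

# --- overlays/production/replica-count.yaml (strategic merge patch) ---
# Only the fields that differ from the base are stated; Kustomize merges
# them into the base Deployment by matching kind and metadata.name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 10
```

The patch file is the entire environment delta, which is exactly what a reviewer wants to see in a diff.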
Helm’s Domain: Third-Party Software
Helm becomes essential when consuming third-party applications. Installing PostgreSQL, Prometheus, or Kafka from community charts requires Helm—these packages aren’t distributed as Kustomize bases. Helm’s templating complexity becomes a feature here: chart maintainers handle the configuration matrix so you don’t have to.
The package management capabilities matter for external dependencies. Version pinning, rollback functionality, and release tracking provide operational safety when you’re not intimately familiar with the underlying resources.
The Hybrid Approach
Many production platforms use both tools pragmatically. Helm installs third-party infrastructure components, while Kustomize manages custom application deployments. You can even layer Kustomize on top of Helm output: use helm template to generate base manifests, then apply environment-specific patches through overlays.
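One way to sketch that layering, assuming the chart has been rendered once into the base directory (the release and chart names below are illustrative):

```yaml
# deploy/base/kustomization.yaml
# rendered.yaml was generated from the upstream chart, e.g.:
#   helm template my-release bitnami/postgresql > rendered.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - rendered.yaml
```

Environment overlays then patch the rendered manifests exactly as they would any hand-written base, so the chart's complexity stays contained in a single rendering step.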
💡 Pro Tip: If you find yourself writing complex Helm helpers or nested conditionals for internal apps, you’ve likely chosen the wrong tool. Kustomize’s constraints often lead to cleaner architecture.
The choice becomes clearer when you ask: am I packaging this for distribution, or managing deployment variations? Distribution demands Helm’s flexibility. Internal deployment management benefits from Kustomize’s simplicity. With your configuration strategy established and deployment patterns clear, the final consideration is operational sustainability.
Key Takeaways
- Start with a minimal base configuration containing only truly shared resources, then build overlays incrementally
- Use strategic merge patches for simple field overrides and JSON patches only when you need precise array modifications
- Integrate kustomize build validation into your CI pipeline to catch configuration errors before they reach production
- Leverage ConfigMap and Secret generators with hash suffixes to automatically trigger rolling updates when configuration changes
- Choose Kustomize for internal applications where you control the full lifecycle; use Helm for third-party packages and distribution
- Apply name prefixes and suffixes in overlays to enable safe multi-environment deployments within the same cluster
- Store Kustomize configurations alongside application code to keep deployment changes atomic with code changes